Triggered by Gerrit: https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/142087 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-10876 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-6NGEUXic1B1V/agent.2123 SSH_AGENT_PID=2125 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_18274225804068225271.key (/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_18274225804068225271.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git > git init /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git refs/changes/87/142087/1 # timeout=30 > git rev-parse b0cd9821599b0cd4900dea0133f6ec3197af02d0^{commit} # timeout=10 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script Checking out Revision b0cd9821599b0cd4900dea0133f6ec3197af02d0 (refs/changes/87/142087/1) > git config core.sparsecheckout # timeout=10 > git checkout -f b0cd9821599b0cd4900dea0133f6ec3197af02d0 # timeout=30 Commit message: "CI: Add Github2Gerrit workflow" > git rev-parse FETCH_HEAD^{commit} # timeout=10 > git rev-list --no-walk 30cdcc1934dceee49d95346da5a57543a16b6c99 # timeout=10 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins49398461313463557.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-XozJ lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... 
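For local debugging, the Gerrit checkout performed above can be reproduced against the same public mirror; the refspec and commit SHA are copied from this log, while the clone directory and the choice to fetch from the mirror rather than gerrit.onap.org are local assumptions. A minimal sketch:

# Reproduce the checkout the job performed (refspec/SHA taken from the log above)
git clone git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
cd sdc-distribution-client
git fetch origin refs/changes/87/142087/1
git checkout -f b0cd9821599b0cd4900dea0133f6ec3197af02d0   # or simply: git checkout FETCH_HEAD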
lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-XozJ/bin to PATH Generating Requirements File ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. httplib2 0.31.0 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible. Python 3.10.6 pip 25.3 from /tmp/venv-XozJ/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.6.3 aspy.yaml==1.3.0 attrs==25.4.0 autopage==0.5.2 beautifulsoup4==4.14.2 boto3==1.40.67 botocore==1.40.67 bs4==0.0.2 cachetools==6.2.1 certifi==2025.10.5 cffi==2.0.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.4.4 click==8.3.0 cliff==4.11.0 cmd2==2.7.0 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.3.1 distlib==0.4.0 dnspython==2.8.0 docker==7.1.0 dogpile.cache==1.5.0 durationpy==0.10 email-validator==2.3.0 filelock==3.20.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.45 google-auth==2.43.0 httplib2==0.31.0 identify==2.6.15 idna==3.11 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.25.1 jsonschema-specifications==2025.9.1 keystoneauth1==5.12.0 kubernetes==34.1.0 lftools==0.37.15 lxml==6.0.2 markdown-it-py==4.0.0 MarkupSafe==3.0.3 mdurl==0.1.2 msgpack==1.1.2 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.3.1 openstacksdk==4.7.1 os-service-types==1.8.1 osc-lib==4.2.0 oslo.config==10.0.0 oslo.context==6.1.0 oslo.i18n==6.6.0 oslo.log==7.2.1 oslo.serialization==5.8.0 oslo.utils==9.1.0 packaging==25.0 pbr==7.0.3 platformdirs==4.5.0 prettytable==3.16.0 psutil==7.1.3 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.23 pygerrit2==2.0.15 PyGithub==2.8.1 Pygments==2.19.2 PyJWT==2.10.1 PyNaCl==1.6.0 pyparsing==2.4.7 pyperclip==1.11.0 pyrsistent==0.20.0 python-cinderclient==9.8.0 python-dateutil==2.9.0.post0 python-heatclient==4.3.0 python-jenkins==1.8.3 python-keystoneclient==5.7.0 python-magnumclient==4.9.0 python-openstackclient==8.2.0 python-swiftclient==4.8.0 PyYAML==6.0.3 referencing==0.37.0 requests==2.32.5 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rich==14.2.0 rich-argparse==1.7.2 rpds-py==0.28.0 rsa==4.9.1 ruamel.yaml==0.18.16 ruamel.yaml.clib==0.2.14 s3transfer==0.14.0 simplejson==3.20.2 six==1.17.0 smmap==5.0.2 soupsieve==2.8 stevedore==5.5.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.15.0 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.35.4 wcwidth==0.2.14 websocket-client==1.9.0 wrapt==2.0.0 xdg==6.0.0 xmltodict==1.0.2 yq==3.4.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk11 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
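The pip resolver warning above reports a real conflict inside the /tmp/venv-XozJ virtualenv: httplib2 0.31.0 requires pyparsing<4,>=3.0.4, but pyparsing 2.4.7 is installed. A minimal sketch for confirming and clearing it, assuming an upgraded pyparsing is acceptable to the other installed packages:

# Confirm and resolve the pyparsing conflict flagged during the lftools install
/tmp/venv-XozJ/bin/pip check                           # re-lists the broken requirement
/tmp/venv-XozJ/bin/pip install 'pyparsing>=3.0.4,<4'   # bounds taken from the pip error message
/tmp/venv-XozJ/bin/pip check                           # the httplib2 complaint should now be gone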
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/sh /tmp/jenkins13050985855885626484.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "11.0.16" 2022-07-19 OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu118.04) OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu118.04, mixed mode) JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. provisioning config files... copy managed file [global-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config17023598806711192816tmp copy managed file [sdc-sdc-distribution-client-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config10019659792549626431tmp [EnvInject] - Injecting environment variables from a build step. Unpacking https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip to /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36 on prd-ubuntu1804-docker-8c-8g-10876 using settings config with name sdc-sdc-distribution-client-settings Replacing all maven server entries not found in credentials list is true using global settings config with name global-settings Replacing all maven server entries not found in credentials list is true [sdc-sdc-distribution-client-master-integration-pairwise] $ /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -s /tmp/settings11937945376465590010.xml -gs /tmp/global-settings3322828861214574169.xml -DGERRIT_BRANCH=master -DGERRIT_PATCHSET_REVISION=b0cd9821599b0cd4900dea0133f6ec3197af02d0 -DGERRIT_HOST=gerrit.onap.org -DMVN=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -DGERRIT_CHANGE_OWNER_EMAIL=ksandi@contractor.linuxfoundation.org "-DGERRIT_EVENT_ACCOUNT_NAME=Kevin Sandi" -DGERRIT_CHANGE_URL=https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/142087 -DGERRIT_PATCHSET_UPLOADER_EMAIL=ksandi@contractor.linuxfoundation.org "-DARCHIVE_ARTIFACTS= **/target/surefire-reports/*-output.txt" -DGERRIT_EVENT_TYPE=comment-added -DSTACK_NAME=$JOB_NAME-$BUILD_NUMBER -DGERRIT_PROJECT=sdc/sdc-distribution-client -DGERRIT_CHANGE_NUMBER=142087 -DGERRIT_SCHEME=ssh '-DGERRIT_PATCHSET_UPLOADER=\"Kevin Sandi\" ' -DGERRIT_PORT=29418 -DGERRIT_CHANGE_PRIVATE_STATE=false -DGERRIT_REFSPEC=refs/changes/87/142087/1 "-DGERRIT_PATCHSET_UPLOADER_NAME=Kevin Sandi" -DGERRIT_EVENT_UPDATED_APPROVALS={} '-DGERRIT_CHANGE_OWNER=\"Kevin Sandi\" ' -DPROJECT=sdc/sdc-distribution-client -DGERRIT_HASHTAGS= -DGERRIT_CHANGE_COMMIT_MESSAGE=Q0k6IEFkZCBHaXRodWIyR2Vycml0IHdvcmtmbG93CgpJc3N1ZS1JRDogQ0lNQU4tMzMKQ2hhbmdlLUlkOiBJMzk2MjU0NTEwMjY0ZjZhOWEzYWNjMzZjZmZjMGEzZTRlNmU5NTRlMApTaWduZWQtb2ZmLWJ5OiBLZXZpbiBTYW5kaSA8a3NhbmRpQGNvbnRyYWN0b3IubGludXhmb3VuZGF0aW9uLm9yZz4K -DGERRIT_NAME=Primary -DGERRIT_TOPIC= "-DGERRIT_CHANGE_SUBJECT=CI: Add Github2Gerrit workflow" '-DGERRIT_EVENT_ACCOUNT=\"Kevin Sandi\" ' -DGERRIT_CHANGE_WIP_STATE=false 
-DGERRIT_CHANGE_ID=I396254510264f6a9a3acc36cffc0a3e4e6e954e0 -DGERRIT_EVENT_HASH=777907682 -DGERRIT_VERSION=3.7.2 -DGERRIT_EVENT_COMMENT_TEXT=UGF0Y2ggU2V0IDE6CgpyZWNoZWNr -DGERRIT_EVENT_ACCOUNT_EMAIL=ksandi@contractor.linuxfoundation.org -DGERRIT_PATCHSET_NUMBER=1 "-DMAVEN_PARAMS= -P integration-pairwise" "-DGERRIT_CHANGE_OWNER_NAME=Kevin Sandi" -DMAVEN_OPTS='' clean install -B -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -P integration-pairwise [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Reactor Build Order: [INFO] [INFO] sdc-sdc-distribution-client [pom] [INFO] sdc-distribution-client [jar] [INFO] sdc-distribution-ci [jar] [INFO] [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client >-- [INFO] Building sdc-sdc-distribution-client 2.1.2-SNAPSHOT [1/3] [INFO] --------------------------------[ pom ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-main-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-main-distribution-client --- [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-main-distribution-client --- [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-main-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-main-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-main-distribution-client --- [INFO] Not executing Javadoc as the project is not a Java classpath-capable package [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-main-distribution-client --- [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-main-distribution-client --- [INFO] No tests to run. [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-main-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. 
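Stripped of the Gerrit metadata properties, the Maven invocation at the start of this step amounts to a plain build with the integration-pairwise profile; the Jenkins-managed settings files are omitted below, so default or local Maven settings are assumed. A sketch for running the same goals locally, or narrowing the loop to one module and test class:

# Local equivalent of the job's Maven goals and profile (managed settings and Gerrit -D properties omitted)
mvn clean install -B -P integration-pairwise
# Narrow the loop while iterating (standard Maven module selection and Surefire test filter):
mvn -pl sdc-distribution-client -am test -Dtest=SdcConnectorClientTest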
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-main-distribution-client --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-main-distribution-client/2.1.2-SNAPSHOT/sdc-main-distribution-client-2.1.2-SNAPSHOT.pom [INFO] [INFO] ----< org.onap.sdc.sdc-distribution-client:sdc-distribution-client >---- [INFO] Building sdc-distribution-client 2.1.2-SNAPSHOT [2/3] [INFO] --------------------------------[ jar ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client --- [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client --- [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 61 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Some input files use or override a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 10 resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client --- [INFO] Changes detected - recompiling the module! 
[INFO] Compiling 24 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Some input files use or override a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Recompile with -Xlint:deprecation for details. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java uses unchecked or unsafe operations. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: Recompile with -Xlint:unchecked for details. [INFO] [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client --- [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.onap.sdc.http.HttpSdcClientResponseTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.343 s - in org.onap.sdc.http.HttpSdcClientResponseTest [INFO] Running org.onap.sdc.http.HttpSdcClientTest 17:18:32.734 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target 17:18:33.493 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target 17:18:33.494 [main] DEBUG org.onap.sdc.http.HttpSdcClient - GET Response Status 200 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.59 s - in org.onap.sdc.http.HttpSdcClientTest [INFO] Running org.onap.sdc.http.HttpClientFactoryTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.366 s - in org.onap.sdc.http.HttpClientFactoryTest [INFO] Running org.onap.sdc.http.HttpRequestFactoryTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 s - in org.onap.sdc.http.HttpRequestFactoryTest [INFO] Running org.onap.sdc.http.SdcConnectorClientTest 17:18:34.261 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= ea0d9cdf-563d-4646-9351-8892abd79590 url= /sdc/v1/artifactTypes 17:18:34.263 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1673091632 17:18:34.270 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 17:18:34.271 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: ["Service","Resource","VF","VFC"] 17:18:34.273 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to close http response 17:18:34.289 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 1547adfa-f686-483b-87fa-da96e16e63bb url= /sdc/v1/artifactTypes 17:18:34.292 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to parse response from SDC. error: java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only. 
at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) at java.base/java.io.InputStream.read(InputStream.java:271) at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) at java.base/java.io.Reader.read(Reader.java:229) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) at org.onap.sdc.http.SdcConnectorClient.parseGetValidArtifactTypesResponse(SdcConnectorClient.java:155) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:79) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$cM5ffwfS.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.http.SdcConnectorClientTest.getValidArtifactTypesListParsingExceptionHandlingTest(SdcConnectorClientTest.java:216) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at 
java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:18:34.375 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to get artifact from response 17:18:34.380 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 20a59782-11e6-4ef6-a926-e42c32f66c5e url= /sdc/v1/artifactTypes 17:18:34.381 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1780948011 17:18:34.381 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 17:18:34.382 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 17:18:34.385 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 27bc2220-7a24-4033-9ad3-01f2a8a3597a url= /sdc/v1/distributionKafkaData 17:18:34.385 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1026432280 17:18:34.385 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 17:18:34.386 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 17:18:34.393 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 159967977 17:18:34.393 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_PROBLEM, responseMessage=SDC server problem] 17:18:34.394 [main] ERROR org.onap.sdc.http.SdcConnectorClient - During error handling another exception occurred: java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only. at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) at java.base/java.io.InputStream.read(InputStream.java:271) at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) at java.base/java.io.Reader.read(Reader.java:229) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) at org.onap.sdc.http.SdcConnectorClient.handleSdcDownloadArtifactError(SdcConnectorClient.java:256) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:144) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$cM5ffwfS.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at 
org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:130) at org.onap.sdc.http.SdcConnectorClientTest.downloadArtifactHandleDownloadErrorTest(SdcConnectorClientTest.java:304) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:18:34.416 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= e358247d-baec-4291-948c-43335dc426e4 url= /sdc/v1/artifactTypes 17:18:34.423 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 92f3f9b2-624c-4606-8262-11bc9069342d url= /sdc/v1/distributionKafkaData [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.518 s - in org.onap.sdc.http.SdcConnectorClientTest [INFO] Running org.onap.sdc.utils.SdcKafkaTest 17:18:34.451 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Starting Zookeeper test server 17:18:34.637 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - clientPortAddress is 0.0.0.0:46233 17:18:34.637 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - secureClientPort is not set 17:18:34.637 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - observerMasterPort is not set 17:18:34.637 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider 17:18:34.639 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServerMain - Starting server 17:18:34.669 [Thread-2] INFO org.apache.zookeeper.server.ServerMetrics - ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@4393734d 17:18:34.673 [Thread-2] DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog - Opening datadir:/tmp/kafka-unit4057318268231248889 snapDir:/tmp/kafka-unit4057318268231248889 17:18:34.673 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - zookeeper.snapshot.trust.empty : false 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - ______ _ 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |___ / | | 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - | | 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |_| 17:18:34.682 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 17:18:34.683 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 17:18:34.683 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=prd-ubuntu1804-docker-8c-8g-10876 17:18:34.683 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=11.0.16 17:18:34.683 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=Ubuntu 17:18:34.683 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 
17:18:34.683 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/json/json/20220320/json-20220320.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-
5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-refl
ect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler= 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=4.15.0-192-generic 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=jenkins 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/jenkins 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.free=443MB 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.max=8042MB 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.total=504MB 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.enableEagerACLCheck = false 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.digest.enabled = true 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.closeSessionTxn.enabled = true 
17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.flushDelay=0 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxWriteQueuePollTime=0 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxBatchSize=1000 17:18:34.684 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.intBufferStartingSizeBytes = 1024 17:18:34.686 [Thread-2] INFO org.apache.zookeeper.server.BlueThrottle - Weighed connection throttling is disabled 17:18:34.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - minSessionTimeout set to 6000 17:18:34.687 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - maxSessionTimeout set to 60000 17:18:34.688 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 17:18:34.688 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 17:18:34.690 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotCapacity = 60 17:18:34.690 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotDuration = 15 17:18:34.690 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.maxDepth = 6 17:18:34.690 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.initialDelay = 5 17:18:34.690 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.delay = 5 17:18:34.690 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.enabled = false 17:18:34.692 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The max bytes for all large requests are set to 104857600 17:18:34.692 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The large request threshold is set to -1 17:18:34.692 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 clientPortListenBacklog -1 datadir /tmp/kafka-unit4057318268231248889/version-2 snapdir /tmp/kafka-unit4057318268231248889/version-2 17:18:34.721 [Thread-2] INFO org.apache.zookeeper.server.ServerCnxnFactory - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory 17:18:34.732 [Thread-2] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation 17:18:34.804 [Thread-2] INFO org.apache.zookeeper.Login - Server successfully logged in. 17:18:34.810 [Thread-2] WARN org.apache.zookeeper.server.ServerCnxnFactory - maxCnxns is not configured, using default value 0. 17:18:34.812 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
17:18:34.819 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:46233 17:18:34.844 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 17:18:34.844 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 17:18:34.844 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.snapshotSizeFactor = 0.33 17:18:34.844 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.commitLogCount=500 17:18:34.852 [Thread-2] INFO org.apache.zookeeper.server.persistence.SnapStream - zookeeper.snapshot.compression.method = CHECKED 17:18:34.852 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit4057318268231248889/version-2/snapshot.0 17:18:34.915 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - Snapshot loaded in 70 ms, highest zxid is 0x0, digest is 1371985504 17:18:34.915 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit4057318268231248889/version-2/snapshot.0 17:18:34.915 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Snapshot taken in 1 ms 17:18:34.931 [Thread-2] INFO org.apache.zookeeper.server.RequestThrottler - zookeeper.request_throttler.shutdownTimeout = 10000 17:18:34.931 [ProcessThread(sid:0 cport:46233):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor (sid:0) started, reconfigEnabled=false 17:18:34.954 [Thread-2] INFO org.apache.zookeeper.server.ContainerManager - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 17:18:34.958 [Thread-2] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled. 
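[editor note] For reference: the ZooKeeper instance above is the throwaway server the kafka-junit harness brings up for this run (data under /tmp/kafka-unit..., bound to an ephemeral port, here 46233). A minimal sketch of the same idea using curator-test's TestingServer, which is already on the logged test classpath, is shown below; the lifetime handling and the idea of wiring it to a broker are assumptions for illustration, not taken from this build.

import org.apache.curator.test.TestingServer;

public class EmbeddedZkSketch {
    public static void main(String[] args) throws Exception {
        // TestingServer starts an in-process ZooKeeper on a free ephemeral port
        // and cleans up its temporary data directory when closed.
        try (TestingServer zk = new TestingServer()) {
            System.out.println("ZooKeeper listening on " + zk.getConnectString());
            // ... a test would point an embedded Kafka broker's zookeeper.connect here ...
        }
    }
}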
17:18:36.517 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.listeners = SASL_PLAINTEXT://localhost:38099 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 1 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = null controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = null inter.broker.protocol.version = 3.3-IV3 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = SASL_PLAINTEXT://localhost:38099 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-unit7122242531084360278 log.dirs = null log.flush.interval.messages = 1 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = 
null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1048588 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = -1 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 1 num.io.threads = 2 num.network.threads = 2 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [] producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.manager.class.name = null remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [PLAIN] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null 
sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = SASL_PLAINTEXT security.providers = null socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 4 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = 127.0.0.1:46233 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null 17:18:36.577 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean 17:18:36.705 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 17:18:36.710 [main] INFO kafka.server.KafkaServer - starting 17:18:36.710 [main] INFO kafka.server.KafkaServer - Connecting to zookeeper on 127.0.0.1:46233 17:18:36.710 [main] DEBUG 
org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 17:18:36.730 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:46233. 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=prd-ubuntu1804-docker-8c-8g-10876 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.16 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Ubuntu 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/json/json/20220320/json-20220320.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jett
y/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3
.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp 17:18:36.737 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler= 17:18:36.738 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux 17:18:36.738 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 17:18:36.738 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-192-generic 17:18:36.738 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=jenkins 17:18:36.738 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/jenkins 17:18:36.738 [main] INFO 
org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 17:18:36.738 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=526MB 17:18:36.738 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=8042MB 17:18:36.738 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=632MB 17:18:36.741 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:46233 sessionTimeout=30000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@2cee7445 17:18:36.746 [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes 17:18:36.756 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=false 17:18:36.760 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 17:18:36.760 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Waiting until connected. 17:18:36.765 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.SaslServerPrincipal - Canonicalized address to localhost 17:18:36.766 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: Client 17:18:36.767 [main-SendThread(127.0.0.1:46233)] INFO org.apache.zookeeper.Login - Client successfully logged in. 17:18:36.770 [main-SendThread(127.0.0.1:46233)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use DIGEST-MD5 as SASL mechanism. 17:18:36.796 [main-SendThread(127.0.0.1:46233)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:46233. 17:18:36.796 [main-SendThread(127.0.0.1:46233)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' 17:18:36.802 [main-SendThread(127.0.0.1:46233)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /127.0.0.1:37820, server: localhost/127.0.0.1:46233 17:18:36.802 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:46233] DEBUG org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:37820 17:18:36.806 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:46233 17:18:36.836 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:37820 client's lastZxid is 0x0 17:18:36.839 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Adding session 0x1000002daa60000 17:18:36.839 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session: session = 0x1000002daa60000, zxid = 0x0, timeout = 30000, address = /127.0.0.1:37820 17:18:36.844 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 17:18:36.844 [SyncThread:0] INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1 17:18:36.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 17:18:36.857 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1, Digest in log and 
actual tree: 1371985504 17:18:36.862 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 17:18:36.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Established session 0x1000002daa60000 with negotiated timeout 30000 for client /127.0.0.1:37820 17:18:36.870 [main-SendThread(127.0.0.1:46233)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:46233, session id = 0x1000002daa60000, negotiated timeout = 30000 17:18:36.877 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=0 17:18:36.878 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:None path:null 17:18:36.879 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Connected. 17:18:36.883 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 17:18:36.884 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 0 17:18:36.884 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 101 17:18:36.888 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=101) 17:18:36.890 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=284 17:18:36.891 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 17:18:36.891 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 284 17:18:36.892 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.auth.SaslServerCallbackHandler - client supplied realm: zk-sasl-md5 17:18:36.893 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Successfully authenticated client: authenticationID=zooclient; authorizationID=zooclient. 17:18:36.929 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 17:18:36.934 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Setting authorizedID: zooclient 17:18:36.935 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.ZooKeeperServer - adding SASL authorization for authorizationID: zooclient 17:18:36.935 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 40 17:18:36.936 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 
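[editor note] The DIGEST-MD5 exchange above is driven by the JAAS file the JVM was pointed at (java.security.auth.login.config=src/test/resources/jaas.conf). The file's contents are not printed in this log; the commented sketch below is only an assumed layout consistent with what is negotiated here (a ZooKeeper 'Client' section using DigestLoginModule with the 'zooclient' principal, plus a PLAIN section for the Kafka broker). All passwords are placeholders.

// Assumed jaas.conf layout (not taken from this build):
//
//   Server {
//     org.apache.zookeeper.server.auth.DigestLoginModule required
//     user_zooclient="CHANGE_ME";
//   };
//   Client {
//     org.apache.zookeeper.server.auth.DigestLoginModule required
//     username="zooclient"
//     password="CHANGE_ME";
//   };
//   KafkaServer {
//     org.apache.kafka.common.security.plain.PlainLoginModule required
//     username="admin" password="CHANGE_ME"
//     user_admin="CHANGE_ME";
//   };
public class JaasPointerSketch {
    public static void main(String[] args) {
        // Must be set before the first ZooKeeper/Kafka client is constructed,
        // exactly as the 'Checking login config' lines above report.
        System.setProperty("java.security.auth.login.config", "src/test/resources/jaas.conf");
    }
}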
17:18:36.938 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=40) 17:18:36.939 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 17:18:36.940 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SaslAuthenticated type:None path:null 17:18:36.942 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:36.942 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:36.944 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:36.945 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:36.945 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:36.952 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 17:18:36.952 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 1355400778 17:18:36.953 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 17:18:36.956 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - consumers 17:18:36.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2, Digest in log and actual tree: 3251571472 17:18:36.958 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 17:18:36.961 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/consumers serverPath:/consumers finished:false header:: 3,1 replyHeader:: 3,2,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: '/consumers 17:18:36.979 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:36.979 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:36.981 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a 17:18:36.981 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:18:36.983 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 4,1 replyHeader:: 4,3,-101 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 17:18:36.985 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking 
session 0x1000002daa60000 17:18:36.985 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:36.986 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:36.986 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:36.986 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:36.986 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 3251571472 17:18:36.986 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 3369532698 17:18:36.987 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 17:18:36.988 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:36.988 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4, Digest in log and actual tree: 3685622712 17:18:36.988 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 17:18:36.991 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 5,1 replyHeader:: 5,4,0 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers 17:18:36.993 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:36.993 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:36.993 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:36.993 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:36.994 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:36.994 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 3685622712 17:18:36.994 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 4706592030 17:18:36.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 17:18:36.995 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:36.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5, Digest in log and actual tree: 8786054098 17:18:36.996 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 17:18:36.996 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/ids 
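[editor note] While the broker registers its bootstrap znodes here, recall that the KafkaConfig dump earlier advertises a single SASL_PLAINTEXT listener on localhost:38099 with only the PLAIN mechanism enabled. A client in this kind of test would therefore need SASL_PLAINTEXT/PLAIN settings along the lines of the sketch below; the group id and credentials are placeholders, not values from this build.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslPlainClientSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Listener advertised in the KafkaConfig dump above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sdc-it-sketch");  // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Matches security.inter.broker.protocol / sasl.enabled.mechanisms in the dump.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"CHANGE_ME\";");  // placeholder credentials
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Simple connectivity check against the secured listener.
            System.out.println(consumer.listTopics().keySet());
        }
    }
}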
serverPath:/brokers/ids finished:false header:: 6,1 replyHeader:: 6,5,0 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/ids 17:18:36.998 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:36.999 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:36.999 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:36.999 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:36.999 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:36.999 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 8786054098 17:18:37.000 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 10269319032 17:18:37.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 17:18:37.001 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:37.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6, Digest in log and actual tree: 10678959970 17:18:37.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 17:18:37.002 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 7,1 replyHeader:: 7,6,0 request:: '/brokers/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics 17:18:37.004 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.004 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x8 zxid:0x7 txntype:-1 reqpath:n/a 17:18:37.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:18:37.006 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 8,1 replyHeader:: 8,7,-101 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: 17:18:37.007 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.008 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.008 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.008 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.008 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.008 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 10678959970 17:18:37.008 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 12212034846 17:18:37.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 17:18:37.009 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:37.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8, Digest in log and actual tree: 14125552467 17:18:37.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 17:18:37.010 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config serverPath:/config finished:false header:: 9,1 replyHeader:: 9,8,0 request:: '/config,,v{s{31,s{'world,'anyone}}},0 response:: '/config 17:18:37.012 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.012 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.012 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.012 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.012 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.012 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 14125552467 17:18:37.012 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 13007590861 17:18:37.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 17:18:37.014 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:37.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 9, Digest in log and actual tree: 14378953686 17:18:37.015 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 17:18:37.015 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 10,1 replyHeader:: 10,9,0 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: '/config/changes 17:18:37.017 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.017 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.018 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0xb zxid:0xa txntype:-1 reqpath:n/a 17:18:37.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:18:37.020 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 11,1 replyHeader:: 11,10,-101 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: 17:18:37.022 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.022 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.022 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.022 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.022 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.022 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 14378953686 17:18:37.022 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 13790581918 17:18:37.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 17:18:37.024 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 17:18:37.024 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: b, Digest in log and actual tree: 16330952388 17:18:37.024 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 17:18:37.025 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin serverPath:/admin finished:false header:: 12,1 replyHeader:: 12,11,0 request:: '/admin,,v{s{31,s{'world,'anyone}}},0 response:: '/admin 17:18:37.026 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.026 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.026 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.026 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.027 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.027 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 16330952388 17:18:37.027 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 15918030618 17:18:37.037 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 17:18:37.038 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 17:18:37.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: c, Digest in log and actual tree: 16398659091 17:18:37.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 17:18:37.039 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 13,1 replyHeader:: 13,12,0 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: '/admin/delete_topics 17:18:37.041 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.041 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.041 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.041 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.041 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.042 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 16398659091 17:18:37.042 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 17660175002 17:18:37.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 17:18:37.043 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:37.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: d, Digest in log and actual tree: 18351133709 17:18:37.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 17:18:37.044 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/seqid serverPath:/brokers/seqid finished:false header:: 14,1 replyHeader:: 14,13,0 request:: '/brokers/seqid,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/seqid 17:18:37.046 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.046 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.047 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.047 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.047 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
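[editor note] The repeated create requests in this stretch are the broker laying down its standard znodes (/brokers, /brokers/ids, /brokers/topics, /config/..., /admin/delete_topics, /isr_change_notification, /latest_producer_id_block, /log_dir_event_notification, /brokers/seqid). A quick way to confirm them against the embedded ZooKeeper at 127.0.0.1:46233 would be a plain ZooKeeper client listing the root, as sketched below; this is an illustrative check, not part of the test itself.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeListSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connect string and session timeout taken from the log above.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:46233", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // Expected to include brokers, config, admin, consumers, isr_change_notification, ...
        System.out.println(zk.getChildren("/", false));
        zk.close();
    }
}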
17:18:37.047 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 18351133709 17:18:37.047 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 19138448434 17:18:37.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 17:18:37.048 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - isr_change_notification 17:18:37.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: e, Digest in log and actual tree: 23094540400 17:18:37.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 17:18:37.049 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 15,1 replyHeader:: 15,14,0 request:: '/isr_change_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/isr_change_notification 17:18:37.050 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.051 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.051 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.051 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.051 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.051 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 23094540400 17:18:37.051 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 22474416447 17:18:37.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 17:18:37.052 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 17:18:37.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: f, Digest in log and actual tree: 23661429262 17:18:37.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 17:18:37.053 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 16,1 replyHeader:: 16,15,0 request:: '/latest_producer_id_block,,v{s{31,s{'world,'anyone}}},0 response:: '/latest_producer_id_block 17:18:37.054 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.054 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.055 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.055 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.055 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.055 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 23661429262 17:18:37.055 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 22838969223 17:18:37.056 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 17:18:37.057 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - log_dir_event_notification 17:18:37.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 10, Digest in log and actual tree: 24369415271 17:18:37.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 17:18:37.057 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 17,1 replyHeader:: 17,16,0 request:: '/log_dir_event_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/log_dir_event_notification 17:18:37.059 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.059 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.059 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.059 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.059 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.059 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 24369415271 17:18:37.059 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 24315965786 17:18:37.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 17:18:37.060 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:37.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 11, Digest in log and actual tree: 24888123931 17:18:37.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 17:18:37.061 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 18,1 replyHeader:: 18,17,0 
request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics 17:18:37.063 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.063 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.063 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.063 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.063 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.063 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 24888123931 17:18:37.064 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 27028307904 17:18:37.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 17:18:37.065 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:37.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 12, Digest in log and actual tree: 27976645664 17:18:37.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 17:18:37.066 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 19,1 replyHeader:: 19,18,0 request:: '/config/clients,,v{s{31,s{'world,'anyone}}},0 response:: '/config/clients 17:18:37.067 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.067 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.067 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.067 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.067 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.068 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 27976645664 17:18:37.068 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 26542394073 17:18:37.071 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 17:18:37.071 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:37.071 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 13, Digest in log and actual tree: 29962990616 17:18:37.071 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002daa60000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 17:18:37.072 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 20,1 replyHeader:: 20,19,0 request:: '/config/users,,v{s{31,s{'world,'anyone}}},0 response:: '/config/users 17:18:37.073 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.073 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.074 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.074 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.074 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.074 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 29962990616 17:18:37.074 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 31734858866 17:18:37.079 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 17:18:37.080 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:37.080 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 14, Digest in log and actual tree: 34285809790 17:18:37.080 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 17:18:37.080 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 21,1 replyHeader:: 21,20,0 request:: '/config/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/config/brokers 17:18:37.081 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.082 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.082 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.082 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.082 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.082 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 34285809790 17:18:37.082 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 33260677039 17:18:37.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 17:18:37.090 
[SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:37.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 15, Digest in log and actual tree: 37085253832 17:18:37.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 17:18:37.091 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 22,1 replyHeader:: 22,21,0 request:: '/config/ips,,v{s{31,s{'world,'anyone}}},0 response:: '/config/ips 17:18:37.105 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.106 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 17:18:37.108 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 17:18:37.109 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 23,4 replyHeader:: 23,21,-101 request:: '/cluster/id,F response:: 17:18:37.398 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.398 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x18 zxid:0x16 txntype:-1 reqpath:n/a 17:18:37.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:18:37.400 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 24,1 replyHeader:: 24,22,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a224e6437496b70625a516f365f34346752444b59536b41227d,v{s{31,s{'world,'anyone}}},0 response:: 17:18:37.403 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.403 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.404 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.405 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.405 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.405 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 37085253832 17:18:37.405 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 36762397666 17:18:37.407 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 17:18:37.407 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 17:18:37.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 17, Digest in log and actual tree: 39766130810 17:18:37.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 17:18:37.409 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/cluster serverPath:/cluster finished:false header:: 25,1 replyHeader:: 25,23,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster 17:18:37.411 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.411 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:37.412 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:37.412 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.412 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.413 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 39766130810 17:18:37.413 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 39011284382 17:18:37.414 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 17:18:37.415 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 17:18:37.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 18, Digest in log and actual tree: 41815884497 17:18:37.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 17:18:37.415 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 26,1 replyHeader:: 26,24,0 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a224e6437496b70625a516f365f34346752444b59536b41227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id 17:18:37.416 [main] INFO kafka.server.KafkaServer - Cluster ID = Nd7IkpbZQo6_44gRDKYSkA 17:18:37.422 [main] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir /tmp/kafka-unit7122242531084360278/meta.properties 17:18:37.431 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.432 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 17:18:37.432 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 17:18:37.432 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/brokers/ serverPath:/config/brokers/ finished:false header:: 27,4 replyHeader:: 27,24,-101 request:: '/config/brokers/,F response:: 17:18:37.488 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.488 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 17:18:37.488 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 17:18:37.489 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/brokers/1 serverPath:/config/brokers/1 finished:false header:: 28,4 replyHeader:: 28,24,-101 request:: '/config/brokers/1,F response:: 17:18:37.494 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.listeners = SASL_PLAINTEXT://localhost:38099 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 1 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = null controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = null inter.broker.protocol.version = 3.3-IV3 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = SASL_PLAINTEXT://localhost:38099 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-unit7122242531084360278 log.dirs = null log.flush.interval.messages = 1 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1048588 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = -1 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 1 num.io.threads = 2 num.network.threads = 2 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [] producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.manager.class.name = null remote.log.metadata.manager.class.path = null 
remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [PLAIN] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = SASL_PLAINTEXT security.providers = null socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 4 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 
transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = 127.0.0.1:46233 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null 17:18:37.499 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 17:18:37.548 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Starting 17:18:37.548 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Starting 17:18:37.550 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Starting 17:18:37.552 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Starting 17:18:37.590 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.590 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:37.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:37.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:37.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:37.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:37.593 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 29,12 replyHeader:: 29,24,0 request:: '/brokers/topics,F response:: v{},s{6,6,1762449516998,1762449516998,0,0,0,0,0,0,6} 17:18:37.597 [main] INFO kafka.log.LogManager - Loading logs from log dirs ArraySeq(/tmp/kafka-unit7122242531084360278) 17:18:37.601 [main] INFO kafka.log.LogManager - Attempting recovery for all logs in /tmp/kafka-unit7122242531084360278 since no clean shutdown file was found 17:18:37.606 [main] DEBUG kafka.log.LogManager - Adding log recovery metrics 17:18:37.611 [main] DEBUG kafka.log.LogManager - Removing log recovery metrics 17:18:37.613 [main] INFO kafka.log.LogManager - Loaded 0 logs in 16ms. 17:18:37.614 [main] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 
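[Editor's note] The KafkaConfig dump above describes the embedded single-node test broker used by this pairwise run: broker.id=1, a SASL_PLAINTEXT listener on localhost:38099 with only the PLAIN mechanism enabled, log.dir under /tmp/kafka-unit7122242531084360278, and ZooKeeper at 127.0.0.1:46233. As a minimal, hedged sketch (property keys and values are taken from the dump; the actual test-harness wiring is not visible in this log), such a broker configuration could be assembled like this:

    import java.util.Properties;

    /** Hedged sketch: broker properties mirroring the KafkaConfig dump printed above. */
    final class EmbeddedBrokerProps {
        static Properties build() {
            Properties p = new Properties();
            p.put("broker.id", "1");
            p.put("listeners", "SASL_PLAINTEXT://localhost:38099");
            p.put("advertised.listeners", "SASL_PLAINTEXT://localhost:38099");
            p.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
            p.put("sasl.enabled.mechanisms", "PLAIN");
            p.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
            p.put("zookeeper.connect", "127.0.0.1:46233");
            p.put("log.dir", "/tmp/kafka-unit7122242531084360278");
            p.put("offsets.topic.replication.factor", "1");
            p.put("auto.create.topics.enable", "true");
            // The harness would presumably hand these to the broker,
            // e.g. via kafka.server.KafkaConfig.fromProps(p).
            return p;
        }
    }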
17:18:37.615 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 17:18:37.616 [main] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 17:18:37.616 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 17:18:37.617 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 17:18:37.617 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 17:18:37.618 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period -1 ms. 17:18:37.634 [main] INFO kafka.log.LogCleaner - Starting the log cleaner 17:18:37.683 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 17:18:37.708 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Starting 17:18:37.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.713 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:37.713 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:37.716 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 30,3 replyHeader:: 30,24,-101 request:: '/feature,T response:: 17:18:37.722 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 17:18:37.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:37.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:37.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:37.724 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 31,4 replyHeader:: 31,24,-101 request:: '/feature,T response:: 17:18:37.725 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node at path: /feature does not exist 17:18:37.746 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
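[Editor's note] At this point the broker has written /cluster/id (the hex payload above decodes to {"version":"1","id":"Nd7IkpbZQo6_44gRDKYSkA"}) and has confirmed that /feature does not yet exist. A hedged sketch of reading those znodes back with the plain ZooKeeper client, assuming the embedded ZooKeeper accepts unauthenticated clients and relying on the world:anyone ACLs visible in the log:

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    final class ZkPeek {
        public static void main(String[] args) throws Exception {
            // Connect string from the log; 30s matches zookeeper.session.timeout.ms in the config dump.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:46233", 30000, event -> { /* ignore watch events */ });
            Stat stat = new Stat();
            byte[] clusterId = zk.getData("/cluster/id", false, stat);
            System.out.println(new String(clusterId, StandardCharsets.UTF_8)); // {"version":"1","id":"Nd7IkpbZQo6_44gRDKYSkA"}
            System.out.println(zk.exists("/feature", false));                  // null until the controller creates it
            zk.close();
        }
    }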
17:18:37.776 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Starting 17:18:37.777 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:37.778 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:37.890 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:37.891 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:37.991 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:37.992 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.092 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.093 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.193 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.193 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.294 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.294 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.327 [main] INFO 
kafka.network.ConnectionQuotas - Updated connection-accept-rate max connection creation rate to 2147483647 17:18:38.331 [main] INFO kafka.network.DataPlaneAcceptor - Awaiting socket connections on localhost:38099. 17:18:38.363 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(SASL_PLAINTEXT) 17:18:38.371 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting 17:18:38.371 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:18:38.371 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:18:38.395 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.395 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.403 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Starting 17:18:38.406 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Starting 17:18:38.408 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Starting 17:18:38.411 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Starting 17:18:38.428 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 15000 ms. 17:18:38.428 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task shutdown-idle-replica-alter-log-dirs-thread with initial delay 0 ms and period 10000 ms. 
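[Editor's note] The data-plane acceptor is now listening on localhost:38099 over SASL_PLAINTEXT with PLAIN. A hedged sketch of an admin client probing that endpoint; the JAAS username/password are placeholders, since the real credentials are not visible in this log:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    final class AdminSmokeCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // listener from the log
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
            try (Admin admin = Admin.create(props)) {
                System.out.println(admin.listTopics().names().get()); // expected to be empty this early in startup
            }
        }
    }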
17:18:38.431 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Starting 17:18:38.433 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.433 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 17:18:38.433 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 17:18:38.433 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.433 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.433 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.434 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 32,12 replyHeader:: 32,24,0 request:: '/brokers/ids,F response:: v{},s{5,5,1762449516993,1762449516993,0,0,0,0,0,0,5} 17:18:38.470 [main] INFO kafka.zk.KafkaZkClient - Creating /brokers/ids/1 (is it secure? false) 17:18:38.472 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:18:38.473 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:18:38.483 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.483 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:38.484 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:38.484 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.484 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.484 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41815884497 17:18:38.484 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 42005754986 17:18:38.486 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.486 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 17:18:38.486 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.486 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.488 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 45823864297 17:18:38.489 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 42797556973 17:18:38.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 17:18:38.493 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:38.493 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:38.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 19, Digest in log and actual tree: 42797556973 17:18:38.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 17:18:38.494 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 33,14 replyHeader:: 33,25,0 request:: org.apache.zookeeper.MultiOperationRecord@6a1d41de response:: org.apache.zookeeper.MultiResponse@1dbbce85 17:18:38.497 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.497 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.497 [main] INFO kafka.zk.KafkaZkClient - Stat of the created znode at /brokers/ids/1 is: 25,25,1762449518483,1762449518483,1,0,0,72057606296174592,209,0,25 17:18:38.498 [main] INFO kafka.zk.KafkaZkClient - Registered broker 1 at path /brokers/ids/1 with addresses: SASL_PLAINTEXT://localhost:38099, czxid (broker epoch): 25 17:18:38.573 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:18:38.574 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:18:38.587 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Starting 17:18:38.597 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.598 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata 
cache, retrying after backoff 17:18:38.600 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Starting 17:18:38.605 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Starting 17:18:38.606 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Starting 17:18:38.609 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.609 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:18:38.609 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:18:38.610 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 34,3 replyHeader:: 34,25,-101 request:: '/controller,T response:: 17:18:38.611 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:18:38.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:18:38.612 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 35,4 replyHeader:: 35,25,-101 request:: '/controller,T response:: 17:18:38.614 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 17:18:38.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 17:18:38.615 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 36,4 replyHeader:: 36,25,-101 request:: '/controller_epoch,F response:: 17:18:38.616 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.617 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:38.617 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:38.617 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 
17:18:38.617 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.617 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 42797556973 17:18:38.617 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 46546014274 17:18:38.633 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Starting up. 17:18:38.635 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 17:18:38.654 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 17:18:38.654 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1a, Digest in log and actual tree: 48459425866 17:18:38.654 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 17:18:38.654 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:38.654 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:38.655 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 37,1 replyHeader:: 37,26,0 request:: '/controller_epoch,#30,v{s{31,s{'world,'anyone}}},0 response:: '/controller_epoch 17:18:38.656 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 38,4 replyHeader:: 38,26,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 17:18:38.657 [controller-event-thread] INFO kafka.zk.KafkaZkClient - Successfully created /controller_epoch with initial epoch 0 17:18:38.658 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Try to create /controller and increment controller epoch to 1 with expected controller epoch zkVersion 0 17:18:38.663 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 
17:18:38.663 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.663 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:38.663 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:38.663 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.663 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.664 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 48459425866 17:18:38.664 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 48244060027 17:18:38.664 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.664 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 17:18:38.664 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.664 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.664 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 51064521580 17:18:38.664 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 52789743983 17:18:38.665 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 17:18:38.665 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x27 zxid:0x1b txntype:14 reqpath:n/a 17:18:38.666 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 17:18:38.666 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Startup complete. 
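[Editor's note] With the GroupCoordinator now reporting "Startup complete", a consumer can join a group against this broker. A hedged sketch only; the group id, topic name, and credentials below are hypothetical and do not appear in this log:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    final class GroupJoinSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "pairwise-smoke-group");   // hypothetical group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");              // placeholder credentials
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("test-topic"));                       // hypothetical topic name
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                System.out.println("fetched " + records.count() + " records");
            }
        }
    }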
17:18:38.669 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 17:18:38.669 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1b, Digest in log and actual tree: 52789743983 17:18:38.669 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x27 zxid:0x1b txntype:14 reqpath:n/a 17:18:38.669 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002daa60000 17:18:38.670 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/controller for session id 0x1000002daa60000 17:18:38.670 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/controller 17:18:38.670 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 39,14 replyHeader:: 39,27,0 request:: org.apache.zookeeper.MultiOperationRecord@a7141cf3 response:: org.apache.zookeeper.MultiResponse@f3584fa6 17:18:38.671 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 17:18:38.672 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x28 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:38.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x28 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:38.673 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 40,4 replyHeader:: 40,27,-101 request:: '/feature,T response:: 17:18:38.674 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:18:38.674 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:18:38.677 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) 17:18:38.678 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.678 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:38.678 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:38.678 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} ] 17:18:38.678 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.679 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 52789743983 17:18:38.679 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 49977229753 17:18:38.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x29 zxid:0x1c txntype:1 reqpath:n/a 17:18:38.679 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - feature 17:18:38.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1c, Digest in log and actual tree: 51961459136 17:18:38.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x29 zxid:0x1c txntype:1 reqpath:n/a 17:18:38.680 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002daa60000 17:18:38.680 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/feature for session id 0x1000002daa60000 17:18:38.680 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/feature 17:18:38.680 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 41,1 replyHeader:: 41,28,0 request:: '/feature,#7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,v{s{31,s{'world,'anyone}}},0 response:: '/feature 17:18:38.681 [main-EventThread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node created at path: /feature 17:18:38.681 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 17:18:38.681 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.681 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:38.681 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:38.681 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.681 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.682 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 42,4 replyHeader:: 42,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1762449518678,1762449518678,0,0,0,0,38,0,28} 17:18:38.694 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:38.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 17:18:38.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.695 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 43,4 replyHeader:: 43,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1762449518678,1762449518678,0,0,0,0,38,0,28} 17:18:38.698 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.698 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.708 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Starting up. 17:18:38.708 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 17:18:38.708 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 10000 ms and period 10000 ms. 17:18:38.710 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 17:18:38.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 17:18:38.710 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__transaction_state serverPath:/brokers/topics/__transaction_state finished:false header:: 44,4 replyHeader:: 44,28,-101 request:: '/brokers/topics/__transaction_state,F response:: 17:18:38.711 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 17:18:38.712 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Startup complete. 
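Editor's note: the getData replies above return the /feature znode payload as hex-encoded bytes; 7b226665...317d is simply the UTF-8 JSON {"features":{},"version":2,"status":1} (38 bytes, matching the dataLength in the stat). A small stand-alone decoder, a hypothetical helper rather than anything from Kafka or ZooKeeper, makes such replies readable:

    public class ZkPayloadDecoder {
        // Decode a hex dump such as the /feature payload shown in the log above.
        static String decodeHex(String hex) {
            byte[] out = new byte[hex.length() / 2];
            for (int i = 0; i < out.length; i++) {
                out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return new String(out, java.nio.charset.StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            String featureZnode =
                "7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d";
            // Prints: {"features":{},"version":2,"status":1}
            System.out.println(decodeHex(featureZnode));
        }
    }

The same decoding applies to the /brokers/ids/1 and /controller payloads that appear further down in the log.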
17:18:38.713 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Starting 17:18:38.722 [feature-zk-node-event-process-thread] INFO kafka.server.metadata.ZkMetadataCache - [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 17:18:38.722 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Registering handlers 17:18:38.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.724 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:18:38.724 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:18:38.725 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 45,3 replyHeader:: 45,28,-101 request:: '/admin/preferred_replica_election,T response:: 17:18:38.726 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 17:18:38.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 17:18:38.727 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 46,3 replyHeader:: 46,28,-101 request:: '/admin/reassign_partitions,T response:: 17:18:38.728 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting log dir event notifications 17:18:38.729 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 17:18:38.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 17:18:38.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.730 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 
0x1000002daa60000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 47,12 replyHeader:: 47,28,0 request:: '/log_dir_event_notification,T response:: v{},s{16,16,1762449517054,1762449517054,0,0,0,0,0,0,16} 17:18:38.732 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting isr change notifications 17:18:38.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 17:18:38.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 17:18:38.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.734 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 48,12 replyHeader:: 48,28,0 request:: '/isr_change_notification,T response:: v{},s{14,14,1762449517046,1762449517046,0,0,0,0,0,0,14} 17:18:38.735 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing controller context 17:18:38.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 17:18:38.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 17:18:38.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.737 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 49,12 replyHeader:: 49,28,0 request:: '/brokers/ids,T response:: v{'1},s{5,5,1762449516993,1762449516993,0,1,0,0,0,1,25} 17:18:38.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 17:18:38.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002daa60000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 17:18:38.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.739 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 50,4 replyHeader:: 50,28,0 request:: '/brokers/ids/1,F response:: #7b226665617475726573223a7b7d2c226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b225341534c5f504c41494e54455854223a225341534c5f504c41494e54455854227d2c22656e64706f696e7473223a5b225341534c5f504c41494e544558543a2f2f6c6f63616c686f73743a3338303939225d2c226a6d785f706f7274223a2d312c22706f7274223a2d312c22686f7374223a6e756c6c2c2276657273696f6e223a352c2274696d657374616d70223a2231373632343439353138343434227d,s{25,25,1762449518483,1762449518483,1,0,0,72057606296174592,209,0,25} 17:18:38.755 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 25) 17:18:38.756 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:38.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:38.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.757 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 51,12 replyHeader:: 51,28,0 request:: '/brokers/topics,T response:: v{},s{6,6,1762449516998,1762449516998,0,0,0,0,0,0,6} 17:18:38.760 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Register BrokerModifications handler for Set(1) 17:18:38.762 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.762 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 17:18:38.762 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 17:18:38.762 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/ids/1 
serverPath:/brokers/ids/1 finished:false header:: 52,3 replyHeader:: 52,28,0 request:: '/brokers/ids/1,T response:: s{25,25,1762449518483,1762449518483,1,0,0,72057606296174592,209,0,25} 17:18:38.766 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 17:18:38.774 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Starting 17:18:38.775 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:18:38.775 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:18:38.779 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Starting 17:18:38.781 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently active brokers in the cluster: Set(1) 17:18:38.782 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently shutting brokers in the cluster: HashSet() 17:18:38.782 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Current list of topics in the cluster: HashSet() 17:18:38.782 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Fetching topic deletions in progress 17:18:38.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 17:18:38.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 17:18:38.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.784 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 53,12 replyHeader:: 53,28,0 request:: '/admin/delete_topics,T response:: v{},s{12,12,1762449517026,1762449517026,0,0,0,0,0,0,12} 17:18:38.785 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics to be deleted: 17:18:38.786 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics ineligible for deletion: 17:18:38.786 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing topic deletion manager 17:18:38.786 [controller-event-thread] INFO 
kafka.controller.TopicDeletionManager - [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() 17:18:38.787 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Sending update metadata request 17:18:38.790 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions 17:18:38.797 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Initializing replica state 17:18:38.797 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering online replica state changes 17:18:38.799 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.799 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.802 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering offline replica state changes 17:18:38.802 [controller-event-thread] DEBUG kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() 17:18:38.802 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Initializing partition state 17:18:38.802 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Triggering online partition state changes 17:18:38.806 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Starting 17:18:38.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 17:18:38.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 17:18:38.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 17:18:38.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002daa60000 type:getChildren2 cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 17:18:38.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.808 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 54,12 replyHeader:: 54,28,0 request:: '/config/topics,F response:: v{},s{17,17,1762449517059,1762449517059,0,0,0,0,0,0,17} 17:18:38.808 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 55,12 replyHeader:: 55,28,0 request:: '/config/changes,T response:: v{},s{9,9,1762449517011,1762449517011,0,0,0,0,0,0,9} 17:18:38.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 17:18:38.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 17:18:38.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.811 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 56,12 replyHeader:: 56,28,0 request:: '/config/clients,F response:: v{},s{18,18,1762449517063,1762449517063,0,0,0,0,0,0,18} 17:18:38.811 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:38.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 17:18:38.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 17:18:38.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.812 [controller-event-thread] 
DEBUG kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 17:18:38.812 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:38.812 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 57,12 replyHeader:: 57,28,0 request:: '/config/users,F response:: v{},s{19,19,1762449517067,1762449517067,0,0,0,0,0,0,19} 17:18:38.813 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Ready to serve as the new controller with epoch 1 17:18:38.814 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 17:18:38.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 17:18:38.814 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 17:18:38.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 17:18:38.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.814 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 58,3 replyHeader:: 58,28,-101 request:: '/admin/reassign_partitions,T response:: 17:18:38.815 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 59,12 replyHeader:: 59,28,0 request:: '/config/users,F response:: v{},s{19,19,1762449517067,1762449517067,0,0,0,0,0,0,19} 17:18:38.817 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 17:18:38.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 
cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 17:18:38.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.817 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 60,12 replyHeader:: 60,28,0 request:: '/config/ips,F response:: v{},s{21,21,1762449517081,1762449517081,0,0,0,0,0,0,21} 17:18:38.818 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 17:18:38.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 17:18:38.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.818 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.819 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 61,12 replyHeader:: 61,28,0 request:: '/config/brokers,F response:: v{},s{20,20,1762449517073,1762449517073,0,0,0,0,0,0,20} 17:18:38.819 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
17:18:38.820 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:18:38.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:18:38.820 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 62,4 replyHeader:: 62,28,-101 request:: '/admin/preferred_replica_election,T response:: 17:18:38.821 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting processors for listener ListenerName(SASL_PLAINTEXT) 17:18:38.821 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions undergoing preferred replica election: 17:18:38.821 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions that completed preferred replica election: 17:18:38.822 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: 17:18:38.822 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Resuming preferred replica election for partitions: 17:18:38.823 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered 17:18:38.829 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting acceptor thread for listener ListenerName(SASL_PLAINTEXT) 17:18:38.831 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.831 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.831 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.831 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.832 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:38.832 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:38.833 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 51961459136 17:18:38.833 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.833 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 8 17:18:38.833 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.833 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.833 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:18:38.833 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:18:38.833 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449518830 17:18:38.833 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 51961459136 17:18:38.835 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x3f zxid:0x1d txntype:14 reqpath:n/a 17:18:38.835 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:18:38.835 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: 14 : error: -101 17:18:38.835 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1d, Digest in log and actual tree: 51961459136 17:18:38.835 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] started 17:18:38.835 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x3f zxid:0x1d txntype:14 reqpath:n/a 17:18:38.837 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 63,14 replyHeader:: 63,29,0 request:: org.apache.zookeeper.MultiOperationRecord@228011e8 response:: org.apache.zookeeper.MultiResponse@441 17:18:38.837 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 17:18:38.840 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37636 on /127.0.0.1:38099 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:38.841 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting the controller scheduler 17:18:38.841 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 17:18:38.842 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 5000 ms and period -1000 ms. 
17:18:38.850 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.851 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:18:38.851 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:18:38.852 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 64,3 replyHeader:: 64,29,0 request:: '/controller,T response:: s{27,27,1762449518663,1762449518663,0,0,0,72057606296174592,54,0,27} 17:18:38.853 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:18:38.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 17:18:38.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:38.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:38.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:38.854 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 65,4 replyHeader:: 65,29,0 request:: '/controller,T response:: #7b2276657273696f6e223a312c2262726f6b65726964223a312c2274696d657374616d70223a2231373632343439353138363133227d,s{27,27,1762449518663,1762449518663,0,0,0,72057606296174592,54,0,27} 17:18:38.857 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [SASL_PLAINTEXT://localhost:38099] client.dns.lookup = use_all_dns_ips client.id = test-consumer-id connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 15000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN 
sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 17:18:38.860 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:38.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:18:38.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 17:18:38.861 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 66,3 replyHeader:: 66,29,-101 request:: '/admin/preferred_replica_election,T response:: 17:18:38.864 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:37636 17:18:38.875 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:18:38.875 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:18:38.883 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:38.883 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Completed connection to node 1. Ready. 
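Editor's note: the AdminClientConfig dump above (bootstrap.servers = [SASL_PLAINTEXT://localhost:38099], client.id = test-consumer-id, request.timeout.ms = 15000, security.protocol = SASL_PLAINTEXT, sasl.mechanism = PLAIN, sasl.jaas.config = [hidden]) can be reproduced with a handful of properties on the Java client. This is a minimal sketch, not the test's actual code; the username and password are placeholders because the real sasl.jaas.config is redacted in the log.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.config.SaslConfigs;

    public class AdminClientSetupSketch {
        public static Admin create() {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder credentials; the log hides the real sasl.jaas.config value.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");
            return Admin.create(props);
        }
    }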
17:18:38.886 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:38.886 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:38.897 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:38099 (id: -1 rack: null)], partitions = [], controller = null). 17:18:38.898 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 17:18:38.899 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:38.899 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 17:18:38.904 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:18:38.904 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:18:38.904 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449518904 17:18:38.904 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 17:18:38.904 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 17:18:38.906 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=listNodes, deadlineMs=1762449578905, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 
17:18:38.909 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:38.910 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:38099 (id: -1 rack: null) using address localhost/127.0.0.1 17:18:38.911 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37638 on /127.0.0.1:38099 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:38.912 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:37638 17:18:38.912 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:38.913 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:38.918 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 17:18:38.919 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:38.919 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:38.919 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:38.921 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 
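Editor's note: the Call(callName=listNodes, ...) queued a moment earlier appears to correspond to an AdminClient describeCluster() request, i.e. the test checking that the freshly started broker is reachable over the SASL_PLAINTEXT listener. A self-contained usage sketch (the class and method names are the sketch's own, not the test's) against any configured Admin client, such as the one from the earlier sketch:

    import java.util.Collection;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.common.Node;

    public class DescribeClusterSketch {
        // Pass in any configured Admin client, e.g. the one built in the sketch above.
        static void printNodes(Admin admin) throws InterruptedException, ExecutionException {
            // describeCluster() is what surfaces in the log as the "listNodes" call.
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            for (Node n : nodes) {
                System.out.println(n.idString() + " -> " + n.host() + ":" + n.port());
            }
        }
    }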
17:18:38.934 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:38.934 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:38.938 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:38.938 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:38.939 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:38.939 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:38.939 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:38.939 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:38.939 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:38.940 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INITIAL 17:18:38.941 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:38.941 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 17:18:38.942 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:38.942 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:38.945 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INTERMEDIATE 17:18:38.945 [kafka-admin-client-thread | test-consumer-id] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 17:18:38.946 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:38.946 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:38.946 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:38.946 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:38.946 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:38.946 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:38.946 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to COMPLETE 17:18:38.946 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Finished authentication with no session expiration and no session re-authentication 17:18:38.946 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with localhost/127.0.0.1 17:18:38.947 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 17:18:38.947 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 17:18:38.947 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 17:18:38.947 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to localhost:38099 (id: 1 rack: null) for sending state change requests 17:18:38.947 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 
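Editor's note: both connections above walk the same SASL PLAIN exchange, visible in the state transitions (client: SEND_APIVERSIONS_REQUEST, RECEIVE_APIVERSIONS_RESPONSE, SEND_HANDSHAKE_REQUEST, RECEIVE_HANDSHAKE_RESPONSE, INITIAL, INTERMEDIATE, COMPLETE; server: HANDSHAKE_OR_VERSIONS_REQUEST, HANDSHAKE_REQUEST, AUTHENTICATE, COMPLETE). On the broker side, PLAIN over a SASL_PLAINTEXT listener is typically enabled with the properties sketched below; this is an assumption about how an embedded test broker like this one is usually configured, with placeholder credentials, not the TestBroker's actual settings.

    import java.util.Properties;

    public class SaslPlainBrokerConfigSketch {
        public static Properties saslPlainProps(int port) {
            Properties props = new Properties();
            props.put("listeners", "SASL_PLAINTEXT://localhost:" + port);
            props.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
            props.put("sasl.enabled.mechanisms", "PLAIN");
            props.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
            // Listener-scoped JAAS entry; user_<name>="<password>" defines the accepted accounts.
            props.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\" "
                + "user_admin=\"admin-secret\";");
            return props;
        }
    }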
17:18:38.947 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:38.948 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=38099, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 17:18:38.976 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:18:38.976 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 17:18:38.980 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0): UpdateMetadataResponseData(errorCode=0) 17:18:38.980 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), 
ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:38.984 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], 
CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 17:18:38.985 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:38099 (id: -1 rack: null). correlationId=1, timeoutMs=14920 17:18:38.985 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14920 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:39.000 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 17:18:39.000 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker localhost:38099 (id: 1 rack: null) 17:18:39.004 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":0,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[],"liveBrokers":[{"id":1,"endpoints":[{"port":38099,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:38099-127.0.0.1:37636-0","totalTimeMs":30.105,"requestQueueTimeMs":16.111,"localTimeMs":13.456,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.159,"sendTimeMs":0.378,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.005 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:37638-0","totalTimeMs":29.501,"requestQueueTimeMs":15.493,"localTimeMs":7.311,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":1.024,"sendTimeMs":5.67,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.019 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, 
apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 17:18:39.020 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[]},"connection":"127.0.0.1:38099-127.0.0.1:37638-0","totalTimeMs":13.04,"requestQueueTimeMs":0.963,"localTimeMs":11.514,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.152,"sendTimeMs":0.409,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.023 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = Nd7IkpbZQo6_44gRDKYSkA, nodes = [localhost:38099 (id: 1 rack: null)], partitions = [], controller = localhost:38099 (id: 1 rack: null)) 17:18:39.024 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:39.024 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:39.024 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:39.024 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:39.024 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37640 on /127.0.0.1:38099 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:39.025 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:37640 17:18:39.026 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 17:18:39.027 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:39.027 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:39.027 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:39.027 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 17:18:39.028 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:39.028 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:39.029 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:39.029 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:39.029 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:39.029 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 17:18:39.029 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 17:18:39.029 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:39.030 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:39.030 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:39.030 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:39.030 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 17:18:39.030 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient 
clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 17:18:39.030 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 17:18:39.030 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 17:18:39.030 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:39.032 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:39.033 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
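The completed-request entries above are authorized as two different principals: User:admin on the controller-to-broker channel and User:kafkaclient for the test's AdminClient. One common way to wire a SASL_PLAINTEXT listener for PLAIN with exactly those two users is sketched below; the passwords and the listener-scoped property layout are assumptions, and the kafka-junit embedded broker may configure this differently.

import java.util.Properties;

public class SaslPlainBrokerProps {
    // Broker-side sketch: declare the two PLAIN users seen in this log
    // (admin for inter-broker traffic, kafkaclient for the admin client).
    // Passwords are placeholders.
    public static Properties brokerSaslProps() {
        Properties props = new Properties();
        props.put("listeners", "SASL_PLAINTEXT://localhost:38099");
        props.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
        props.put("sasl.enabled.mechanisms", "PLAIN");
        props.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
        // user_<name> options list the username/password pairs the broker accepts.
        props.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\" "
                        + "user_admin=\"admin-secret\" user_kafkaclient=\"client-secret\";");
        return props;
    }
}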
17:18:39.033 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending DescribeClusterRequestData(includeClusterAuthorizedOperations=false) to localhost:38099 (id: 1 rack: null). correlationId=3, timeoutMs=14986 17:18:39.034 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending DESCRIBE_CLUSTER request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3) and timeout 14986 to node 1: DescribeClusterRequestData(includeClusterAuthorizedOperations=false) 17:18:39.034 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion
":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:37640-1","totalTimeMs":1.777,"requestQueueTimeMs":0.398,"localTimeMs":0.993,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.16,"sendTimeMs":0.225,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.040 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received DESCRIBE_CLUSTER response from node 1 for request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3): DescribeClusterResponseData(throttleTimeMs=0, errorCode=0, errorMessage=null, clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, brokers=[DescribeClusterBroker(brokerId=1, host='localhost', port=38099, rack=null)], clusterAuthorizedOperations=-2147483648) 17:18:39.041 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 17:18:39.041 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 17:18:39.041 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":60,"requestApiVersion":0,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"DESCRIBE_CLUSTER"},"request":{"includeClusterAuthorizedOperations":false},"response":{"throttleTimeMs":0,"errorCode":0,"errorMessage":null,"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"brokers":[{"brokerId":1,"host":"localhost","port":38099,"rack":null}],"clusterAuthorizedOperations":-2147483648},"connection":"127.0.0.1:38099-127.0.0.1:37640-1","totalTimeMs":5.846,"requestQueueTimeMs":0.871,"localTimeMs":4.624,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.112,"sendTimeMs":0.237,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.041 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 17:18:39.043 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38099-127.0.0.1:37640-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 
17:18:39.043 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38099-127.0.0.1:37638-0) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:18:39.045 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 17:18:39.045 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 17:18:39.045 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 17:18:39.045 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 17:18:39.045 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 17:18:39.045 [main] INFO com.salesforce.kafka.test.KafkaTestCluster - Found 1 brokers on-line, cluster is ready. 
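The DESCRIBE_CLUSTER round trip above is what lets the test harness conclude "Found 1 brokers on-line, cluster is ready." A minimal sketch of an equivalent readiness check through the public Admin API follows (client properties as in the earlier SASL/PLAIN sketch; the single-broker expectation matches this run).

import java.util.Collection;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.Node;

public class ClusterReadyCheck {
    // Ask the broker to describe the cluster and verify one broker is registered,
    // mirroring the DESCRIBE_CLUSTER exchange and "Found 1 brokers on-line" above.
    public static boolean clusterReady(Properties clientProps)
            throws ExecutionException, InterruptedException {
        try (Admin admin = Admin.create(clientProps)) {
            Collection<Node> brokers = admin.describeCluster().nodes().get();
            return brokers.size() == 1; // this run expects a single test broker
        }
    }
}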
17:18:39.045 [main] DEBUG org.onap.sdc.utils.SdcKafkaTest - Cluster started at: SASL_PLAINTEXT://localhost:38099
17:18:39.045 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
    bootstrap.servers = [SASL_PLAINTEXT://localhost:38099]
    client.dns.lookup = use_all_dns_ips
    client.id = test-consumer-id
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 15000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = [hidden]
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = PLAIN
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = SASL_PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
17:18:39.046 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:38099 (id: -1 rack: null)], partitions = [], controller = null).
17:18:39.046 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in.
17:18:39.052 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:18:39.052 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:18:39.052 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449519052 17:18:39.052 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 17:18:39.054 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 17:18:39.054 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:39.054 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:38099 (id: -1 rack: null) using address localhost/127.0.0.1 17:18:39.055 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:39.055 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37642 on /127.0.0.1:38099 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:39.055 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:37642 17:18:39.055 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:39.057 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=createTopics, deadlineMs=1762449579055, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 
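The Queueing Call(callName=createTopics, ...) entry above is the test's next step: creating its topic through the freshly initialized AdminClient. A sketch of that call is below; only the fact that createTopics is invoked comes from the log, while the topic name, partition count, and replication factor are placeholders.

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTestTopic {
    // Create a topic through the AdminClient, as queued by the createTopics call above.
    // Topic name, partitions, and replication factor are placeholders.
    public static void createTopic(Properties clientProps)
            throws ExecutionException, InterruptedException {
        try (Admin admin = Admin.create(clientProps)) {
            NewTopic topic = new NewTopic("example-topic", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get(); // waits for the broker's response
        }
    }
}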
17:18:39.058 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 17:18:39.058 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:39.058 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:39.059 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:39.059 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:39.059 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 17:18:39.059 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:39.059 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:39.060 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:39.060 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:39.060 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:39.060 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 17:18:39.061 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:39.061 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:39.061 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:39.061 
[kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 17:18:39.061 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 17:18:39.061 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 17:18:39.061 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 17:18:39.061 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 17:18:39.061 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:39.065 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"
maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:37642-1","totalTimeMs":2.291,"requestQueueTimeMs":0.219,"localTimeMs":1.793,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.071,"sendTimeMs":0.206,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.066 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), 
ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:39.067 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], 
AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 17:18:39.067 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:38099 (id: -1 rack: null). correlationId=1, timeoutMs=14987 17:18:39.067 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14987 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:39.069 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[]},"connection":"127.0.0.1:38099-127.0.0.1:37642-1","totalTimeMs":0.808,"requestQueueTimeMs":0.121,"localTimeMs":0.485,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.069,"sendTimeMs":0.131,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.069 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 17:18:39.069 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = Nd7IkpbZQo6_44gRDKYSkA, nodes = [localhost:38099 (id: 1 rack: null)], partitions = [], controller = localhost:38099 (id: 1 rack: null)) 17:18:39.069 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:39.069 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:39.069 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state 
to SEND_APIVERSIONS_REQUEST 17:18:39.070 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:39.070 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37644 on /127.0.0.1:38099 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:39.070 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:37644 17:18:39.071 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 17:18:39.071 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:39.071 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 17:18:39.071 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:39.071 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:39.072 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:39.072 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:39.072 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:39.072 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:39.072 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:39.073 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 17:18:39.073 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 
17:18:39.073 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:39.073 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:39.073 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:39.074 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:39.074 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 17:18:39.074 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 17:18:39.074 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 17:18:39.074 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 
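[Editor's note] The handshake traced above (SEND_APIVERSIONS_REQUEST through COMPLETE, with the broker reporting principal User:kafkaclient) is what the Kafka Java client produces when it is configured for SASL_PLAINTEXT with the PLAIN mechanism. A minimal sketch of that client-side configuration follows; it is illustrative only — the class name and the password literal are placeholders, since the build log masks sasl.jaas.config as [hidden].

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class SaslPlainClientProps {
    // Security-related properties that drive a SASL PLAIN handshake like the one logged above.
    // Username matches the authenticated principal (User:kafkaclient); the password is a placeholder.
    static Properties saslPlainProps() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:38099");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"kafkaclient\" password=\"changeme\";");
        return props;
    }
}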
17:18:39.074 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:39.076 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 17:18:39.076 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use broker localhost:38099 (id: 1 rack: null) 17:18:39.076 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), 
ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:39.077 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
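[Editor's note] With API versions negotiated, the admin client issues the CREATE_TOPICS request seen next (my-test-topic, 1 partition, replication factor 1). A minimal sketch of the corresponding Admin call is shown below, reusing the SaslPlainClientProps helper sketched after the SASL handshake above; it is illustrative, not the test's actual code.

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTestTopic {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Properties props = SaslPlainClientProps.saslPlainProps(); // helper from the SASL sketch earlier in this log
        props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id"); // client id seen in the requests above
        // CREATE_TOPICS with numPartitions=1, replicationFactor=1, matching the logged request
        try (Admin admin = Admin.create(props)) {
            admin.createTopics(Collections.singleton(new NewTopic("my-test-topic", 1, (short) 1)))
                 .all()
                 .get();
        }
    }
}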
17:18:39.077 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14992, validateOnly=false) to localhost:38099 (id: 1 rack: null). correlationId=3, timeoutMs=14992 17:18:39.078 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:37644-2","totalTimeMs":1.912,"requestQueue
TimeMs":0.203,"localTimeMs":1.32,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.096,"sendTimeMs":0.292,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.078 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending CREATE_TOPICS request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3) and timeout 14992 to node 1: CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14992, validateOnly=false) 17:18:39.120 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 17:18:39.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 17:18:39.121 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/my-test-topic serverPath:/admin/delete_topics/my-test-topic finished:false header:: 67,3 replyHeader:: 67,29,-101 request:: '/admin/delete_topics/my-test-topic,F response:: 17:18:39.125 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.125 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:18:39.125 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:18:39.125 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 68,3 replyHeader:: 68,29,-101 request:: '/brokers/topics/my-test-topic,F response:: 17:18:39.148 [data-plane-kafka-request-handler-0] INFO kafka.zk.AdminZkClient - Creating topic my-test-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) 17:18:39.150 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:setData cxid:0x45 zxid:0x1e txntype:-1 reqpath:n/a 17:18:39.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:18:39.152 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 69,5 replyHeader:: 
69,30,-101 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,-1 response:: 17:18:39.153 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.153 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.154 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.154 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.154 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.154 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 51961459136 17:18:39.154 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 53937896441 17:18:39.155 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 17:18:39.155 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:39.155 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1f, Digest in log and actual tree: 54870394508 17:18:39.155 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 17:18:39.156 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 70,1 replyHeader:: 70,31,0 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/my-test-topic 17:18:39.163 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.163 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.164 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.164 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.164 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.164 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 54870394508 17:18:39.164 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 57502050423 17:18:39.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 17:18:39.165 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 20, Digest in log 
and actual tree: 60535692593 17:18:39.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 17:18:39.166 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002daa60000 17:18:39.166 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000002daa60000 17:18:39.166 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 17:18:39.166 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 71,1 replyHeader:: 71,32,0 request:: '/brokers/topics/my-test-topic,#7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a22377472744e35317452373663574931395133496f5451222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-test-topic 17:18:39.167 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.167 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:39.167 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:39.168 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.168 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.168 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.168 [data-plane-kafka-request-handler-0] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/my-test-topic with Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 17:18:39.168 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 72,12 replyHeader:: 72,32,0 request:: '/brokers/topics,T response:: v{'my-test-topic},s{6,6,1762449516998,1762449516998,0,1,0,0,0,1,32} 17:18:39.169 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.169 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:18:39.169 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:18:39.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Permission requested: 1 17:18:39.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.170 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 73,4 replyHeader:: 73,32,0 request:: '/brokers/topics/my-test-topic,F response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a22377472744e35317452373663574931395133496f5451222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1762449519163,1762449519163,0,0,0,0,116,0,32} 17:18:39.171 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:18:39.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 17:18:39.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.171 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 74,4 replyHeader:: 74,32,0 request:: '/brokers/topics/my-test-topic,T response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a22377472744e35317452373663574931395133496f5451222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1762449519163,1762449519163,0,0,0,0,116,0,32} 17:18:39.178 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(my-test-topic)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(my-test-topic,Some(7trtN51tR76cWI19Q3IoTQ),Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 17:18:39.179 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for my-test-topic-0 17:18:39.181 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.181 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:18:39.185 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 60535692593 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.192 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.193 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.193 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 60535692593 17:18:39.193 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61423166103 17:18:39.193 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62884638691 17:18:39.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 17:18:39.194 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 21, Digest in log and actual tree: 62884638691 17:18:39.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 17:18:39.195 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 75,14 replyHeader:: 75,33,0 request:: org.apache.zookeeper.MultiOperationRecord@81bd0a85 response:: org.apache.zookeeper.MultiResponse@7b890ac6 17:18:39.197 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.197 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.197 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.197 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.197 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 62884638691 17:18:39.197 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.198 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.198 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.198 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.198 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.198 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 62884638691 17:18:39.198 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 64928271936 17:18:39.198 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 67514057954 17:18:39.200 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 17:18:39.200 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 22, Digest in log and actual tree: 67514057954 17:18:39.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 17:18:39.201 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 76,14 replyHeader:: 76,34,0 request:: org.apache.zookeeper.MultiOperationRecord@c37a65e6 response:: org.apache.zookeeper.MultiResponse@bd466627 17:18:39.205 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.205 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.205 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.205 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.205 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 67514057954 17:18:39.205 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.205 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.205 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.206 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.206 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.206 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 67514057954 17:18:39.206 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68274882168 17:18:39.206 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70095460870 17:18:39.207 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 17:18:39.207 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.207 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 23, Digest in log and actual tree: 70095460870 17:18:39.207 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 17:18:39.208 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 77,14 replyHeader:: 77,35,0 request:: org.apache.zookeeper.MultiOperationRecord@b3e0859f response:: org.apache.zookeeper.MultiResponse@ce2303a9 17:18:39.215 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.216 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions 17:18:39.218 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions 17:18:39.219 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:18:39.220 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, partitionStates=[LeaderAndIsrPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=38099)]) 17:18:39.229 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions 17:18:39.261 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(my-test-topic-0) 17:18:39.262 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 
partitions 17:18:39.275 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.275 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 17:18:39.275 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 17:18:39.275 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.275 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.275 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.276 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 78,4 replyHeader:: 78,35,0 request:: '/config/topics/my-test-topic,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{31,31,1762449519153,1762449519153,0,0,0,0,25,0,31} 17:18:39.320 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/my-test-topic-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.323 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/my-test-topic-0/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.324 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/my-test-topic-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.324 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/my-test-topic-0/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.329 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=my-test-topic-0, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.342 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.344 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:39.346 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition my-test-topic-0 in /tmp/kafka-unit7122242531084360278/my-test-topic-0 with properties {} 17:18:39.347 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] No checkpointed highwatermark is found for partition my-test-topic-0 17:18:39.348 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] Log loaded for partition my-test-topic-0 with initial high watermark 0 17:18:39.349 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader my-test-topic-0 with topic id Some(7trtN51tR76cWI19Q3IoTQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.359 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache my-test-topic-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:39.509 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 17:18:39.513 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 286ms correlationId 1 from controller 1 for 1 partitions 17:18:39.518 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=7trtN51tR76cWI19Q3IoTQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) 17:18:39.518 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":1,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":38099}]},"response":{"errorCode":0,"topics":[{"topicId":"7trtN51tR76cWI19Q3IoTQ","partitionErrors":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37636-0","totalTimeMs":296.079,"requestQueueTimeMs":3.732,"localTimeMs":291.779,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.177,"sendTimeMs":0.389,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.519 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, 
controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, partitionStates=[UpdateMetadataPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=38099, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 17:18:39.529 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 17:18:39.537 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicKey(my-test-topic) unblocked 1 topic operations 17:18:39.537 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Request key my-test-topic unblocked 1 topic requests. 17:18:39.538 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received CREATE_TOPICS response from node 1 for request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3): CreateTopicsResponseData(throttleTimeMs=0, topics=[CreatableTopicResult(name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, errorCode=0, errorMessage=null, topicConfigErrorCode=0, numPartitions=1, replicationFactor=1, configs=[CreatableTopicConfigs(name='compression.type', value='producer', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='leader.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.downconversion.enable', value='true', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.insync.replicas', value='1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.jitter.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='cleanup.policy', value='delete', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='follower.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.bytes', value='1073741824', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.messages', value='1', readOnly=false, configSource=4, isSensitive=false), CreatableTopicConfigs(name='message.format.version', value='3.0-IV1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.compaction.lag.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='file.delete.delay.ms', value='60000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.message.bytes', value='1048588', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.compaction.lag.ms', value='0', readOnly=false, configSource=5, 
isSensitive=false), CreatableTopicConfigs(name='message.timestamp.type', value='CreateTime', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='preallocate', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.cleanable.dirty.ratio', value='0.5', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='index.interval.bytes', value='4096', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='unclean.leader.election.enable', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.bytes', value='-1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='delete.retention.ms', value='86400000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.difference.max.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.index.bytes', value='10485760', readOnly=false, configSource=5, isSensitive=false)])]) 17:18:39.538 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":19,"requestApiVersion":7,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"CREATE_TOPICS"},"request":{"topics":[{"name":"my-test-topic","numPartitions":1,"replicationFactor":1,"assignments":[],"configs":[]}],"timeoutMs":14992,"validateOnly":false},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","errorCode":0,"errorMessage":null,"numPartitions":1,"replicationFactor":1,"configs":[{"name":"compression.type","value":"producer","readOnly":false,"configSource":5,"isSensitive":false},{"name":"leader.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.downconversion.enable","value":"true","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.insync.replicas","value":"1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.jitter.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"cleanup.policy","value":"delete","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"follower.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.bytes","value":"1073741824","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.messages","value":"1","readOnly":false,"configSource":4,"isSensitive":false},{"name":"message.format.version","value":"3.0-IV1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.compaction.lag.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"file.delete.delay.ms","value":"60000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.message.bytes","value":"1048588","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.compaction.lag.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp
.type","value":"CreateTime","readOnly":false,"configSource":5,"isSensitive":false},{"name":"preallocate","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.cleanable.dirty.ratio","value":"0.5","readOnly":false,"configSource":5,"isSensitive":false},{"name":"index.interval.bytes","value":"4096","readOnly":false,"configSource":5,"isSensitive":false},{"name":"unclean.leader.election.enable","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.bytes","value":"-1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"delete.retention.ms","value":"86400000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.difference.max.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.index.bytes","value":"10485760","readOnly":false,"configSource":5,"isSensitive":false}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37644-2","totalTimeMs":457.987,"requestQueueTimeMs":1.992,"localTimeMs":103.914,"remoteTimeMs":351.684,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.304,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.539 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2): UpdateMetadataResponseData(errorCode=0) 17:18:39.539 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":2,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":38099,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:38099-127.0.0.1:37636-0","totalTimeMs":18.571,"requestQueueTimeMs":4.221,"localTimeMs":13.103,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":1.03,"sendTimeMs":0.214,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.540 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 17:18:39.540 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 
17:18:39.541 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 17:18:39.541 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38099-127.0.0.1:37644-2) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:18:39.541 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38099-127.0.0.1:37642-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:18:39.543 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 17:18:39.543 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 17:18:39.543 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 17:18:39.543 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 17:18:39.543 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 
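The DEBUG-level EOFExceptions above are just the broker observing the admin client dropping its connections during close, and the earlier "Hard shutdown in 31536000000 ms" is exactly 365 days, which is what KafkaAdminClient reports when close() is called without a tight timeout (the client caps the wait at one year). If a bounded shutdown were wanted instead, the call would look roughly like this sketch (hypothetical helper, reusing an Admin instance built as in the previous example):

```java
import java.time.Duration;

import org.apache.kafka.clients.admin.Admin;

final class BoundedAdminCloseSketch {
    // Hypothetical helper: close the admin client with an explicit upper bound
    // instead of the default (year-capped) wait seen in the log above.
    static void closeWithin(Admin admin, Duration timeout) {
        admin.close(timeout);
    }
}
```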
17:18:39.562 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [SASL_PLAINTEXT://localhost:38099] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = mso-group group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 600000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 50000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:18:39.563 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initializing the Kafka consumer 17:18:39.573 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 17:18:39.615 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:18:39.615 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:18:39.615 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449519615 17:18:39.615 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Kafka consumer initialized 17:18:39.616 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Subscribed to topic(s): my-test-topic 17:18:39.616 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: -1 rack: null) 17:18:39.619 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:39.619 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: -1 rack: null) using address localhost/127.0.0.1 17:18:39.620 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:39.620 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:39.620 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37646 on /127.0.0.1:38099 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:39.620 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:37646 17:18:39.621 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 17:18:39.621 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:39.621 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Completed connection to node -1. Fetching API versions. 
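The ConsumerConfig dump above describes a KafkaConsumer in group mso-group, talking SASL/PLAIN to the test broker on localhost:38099 and subscribing to my-test-topic. Again the test source is not in this log; the following is only a minimal sketch of a consumer carrying the non-default settings visible in that dump, assuming the standard Kafka 3.3 Java client. The credentials are placeholders (the request log shows this connection authenticating as User:admin):

```java
import java.util.List;
import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MsoConsumerSketch {
    public static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:38099");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        // The logged client.id follows the pattern "mso-123456-consumer-<uuid>".
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer-" + UUID.randomUUID());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false");
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "50000");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Placeholder credentials; the real JAAS config is logged as [hidden] above.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"<username>\" password=\"<password>\";");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("my-test-topic"));
        return consumer;
    }
}
```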
17:18:39.621 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:39.621 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:39.622 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:39.622 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:39.623 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:39.623 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:39.623 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:39.623 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to INITIAL 17:18:39.623 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to INTERMEDIATE 17:18:39.624 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:39.625 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:39.625 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:39.625 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to COMPLETE 17:18:39.625 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 17:18:39.625 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG 
org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:39.625 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 17:18:39.625 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating API versions fetch from node -1. 17:18:39.625 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:39.628 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, 
maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:39.629 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], 
AllocateProducerIds(67): 0 [usable: 0]). 17:18:39.629 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:37646-2","totalTimeMs":2.2,"requestQueueTimeMs":0.41,"localTimeMs":1.436,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.114,"sendTimeMs":0.238,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.629 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: -1 rack: null) 17:18:39.630 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=2) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:39.630 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=0) and timeout 30000 to node -1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:39.643 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":2,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37646-2","totalTimeMs":12.76,"requestQueueTimeMs":3.702,"localTimeMs":8.728,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.141,"sendTimeMs":0.187,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.644 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=2): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], 
topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:39.647 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to 7trtN51tR76cWI19Q3IoTQ 17:18:39.650 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:39.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:39.650 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Cluster ID: Nd7IkpbZQo6_44gRDKYSkA 17:18:39.650 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:39.650 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 79,3 replyHeader:: 79,35,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:18:39.651 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.652 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 80,3 replyHeader:: 80,35,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 17:18:39.653 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:39.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 
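After the METADATA exchange the consumer's metadata cache (updateVersion 2) records my-test-topic as a single partition led by broker 1. The log does not show whether the test asserts this explicitly; if it did, a check against the same client-side view could look like the sketch below (hypothetical helper, reusing a consumer built as above):

```java
import java.util.List;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

final class TopicMetadataCheckSketch {
    // Hypothetical check mirroring the METADATA response logged above.
    static void assertSinglePartition(KafkaConsumer<String, String> consumer) {
        List<PartitionInfo> partitions = consumer.partitionsFor("my-test-topic");
        if (partitions == null || partitions.size() != 1) {
            throw new IllegalStateException("Expected exactly one partition, got: " + partitions);
        }
    }
}
```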
17:18:39.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.653 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.653 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 81,12 replyHeader:: 81,35,0 request:: '/brokers/topics,F response:: v{'my-test-topic},s{6,6,1762449516998,1762449516998,0,1,0,0,0,1,32} 17:18:39.659 [data-plane-kafka-request-handler-1] INFO kafka.zk.AdminZkClient - Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) 17:18:39.659 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.660 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:setData cxid:0x52 zxid:0x24 txntype:-1 reqpath:n/a 17:18:39.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 17:18:39.661 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 82,5 replyHeader:: 82,36,-101 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,-1 response:: 17:18:39.662 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.662 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.662 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.662 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.662 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.662 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 70095460870 17:18:39.662 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 69867460094 17:18:39.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 17:18:39.663 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 17:18:39.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 25, Digest in log and actual tree: 70538063034 17:18:39.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 17:18:39.664 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 83,1 replyHeader:: 83,37,0 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__consumer_offsets 17:18:39.670 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.670 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.670 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.670 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.670 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.670 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 70538063034 17:18:39.670 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70656032448 17:18:39.671 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 17:18:39.671 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.671 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 26, Digest in log and actual tree: 72503231612 17:18:39.671 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 17:18:39.672 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002daa60000 17:18:39.672 [main-SendThread(127.0.0.1:46233)] DEBUG 
org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000002daa60000 17:18:39.672 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 17:18:39.672 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 84,1 replyHeader:: 84,38,0 request:: '/brokers/topics/__consumer_offsets,#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a22394c62574667614152355349383471484c305f563167222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets 17:18:39.673 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:39.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 17:18:39.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.673 [data-plane-kafka-request-handler-1] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/__consumer_offsets with HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 17:18:39.673 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 85,12 replyHeader:: 85,38,0 request:: '/brokers/topics,T response:: v{'my-test-topic,'__consumer_offsets},s{6,6,1762449516998,1762449516998,0,2,0,0,0,2,38} 17:18:39.674 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.674 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.675 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:18:39.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.675 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 86,4 replyHeader:: 86,38,0 request:: '/brokers/topics/__consumer_offsets,T response:: 
#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a22394c62574667614152355349383471484c305f563167222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{38,38,1762449519670,1762449519670,0,0,0,0,548,0,38} 17:18:39.679 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node -1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=0): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:18:39.679 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449519679, latencyMs=62, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=0), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:18:39.679 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator lookup failed: 17:18:39.680 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
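The FIND_COORDINATOR response above returns errorCode 15 for group mso-group, i.e. the CoordinatorNotAvailableException the consumer logs next: the 50-partition __consumer_offsets topic is still being auto-created, so no group coordinator exists yet. poll() retries the coordinator lookup internally, so client code only needs to keep polling through this transient phase; a minimal sketch under that assumption:

```java
import java.time.Duration;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

final class PollUntilReadySketch {
    // Hypothetical poll loop: coordinator discovery (and the errorCode 15 retry seen
    // above) is handled inside poll(), so the caller just polls until records arrive
    // or the deadline passes.
    static ConsumerRecords<String, String> pollFor(KafkaConsumer<String, String> consumer,
                                                   Duration deadline) {
        long end = System.currentTimeMillis() + deadline.toMillis();
        while (System.currentTimeMillis() < end) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            if (!records.isEmpty()) {
                return records;
            }
        }
        return ConsumerRecords.empty();
    }
}
```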
17:18:39.680 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":0,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37646-2","totalTimeMs":35.251,"requestQueueTimeMs":1.045,"localTimeMs":33.796,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.279,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.682 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(9LbWFgaAR5SI84qHL0_V1g),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 17:18:39.682 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 
epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from 
NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.683 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 
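The entries above capture the consumer-group bootstrap path: the client's FIND_COORDINATOR request for key "mso-group" (clientId mso-123456-consumer-...) came back with errorCode 15, COORDINATOR_NOT_AVAILABLE, so the broker auto-creates the 50-partition __consumer_offsets topic and the controller walks each partition from NonExistentPartition to NewPartition; the Kafka client retries coordinator discovery internally once the topic exists. A minimal sketch of the kind of Java consumer that drives this exchange against the SASL_PLAINTEXT listener is shown below; the bootstrap address, subscribed topic name, and PLAIN credentials are illustrative assumptions, not values taken from this job.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;

public class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address is an assumption; the job above runs an embedded broker on an ephemeral port.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        // Joining this group is what issues FIND_COORDINATOR(keyType=0, key="mso-group").
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Matches the SASL_PLAINTEXT listener and User:admin principal seen in the request log.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Topic name is an assumption for illustration only.
            consumer.subscribe(Collections.singletonList("SDC-DISTR-NOTIF-TOPIC-AUTO"));
            // The first poll triggers coordinator lookup; COORDINATOR_NOT_AVAILABLE is a retriable
            // error, so the client keeps retrying while the broker finishes creating __consumer_offsets.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}

The ZooKeeper DEBUG traffic that follows is the broker persisting those new partition assignments: each type:multi request bundles the znode writes for a batch of __consumer_offsets partitions, and the Permission requested / ACLs for node lines are the server checking the world:anyone (perms=31) ACL before applying them.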
17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 17:18:39.684 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:18:39.686 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:18:39.690 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.690 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.690 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.690 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.690 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 72503231612 17:18:39.690 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.690 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.691 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.691 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.691 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.691 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 72503231612 17:18:39.691 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 71519848934 17:18:39.691 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72594825812 17:18:39.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 17:18:39.692 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 27, Digest in log and actual tree: 72594825812 17:18:39.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 17:18:39.693 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 87,14 replyHeader:: 87,39,0 request:: org.apache.zookeeper.MultiOperationRecord@47c7375 response:: org.apache.zookeeper.MultiResponse@fe4873b6 17:18:39.694 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.694 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.694 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.695 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.695 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 72594825812 17:18:39.695 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.695 [ProcessThread(sid:0 cport:46233):] 
DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.695 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.695 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.695 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.695 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 72594825812 17:18:39.695 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73483882288 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73582181286 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73582181286 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73582181286 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72828908791 17:18:39.696 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73713554431 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got 
from outstandingChanges is: 73713554431 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73713554431 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75990183394 17:18:39.697 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79365994840 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79365994840 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.698 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79365994840 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76723644947 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80960239534 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.699 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80960239534 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80960239534 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83805857752 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86767701560 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86767701560 17:18:39.699 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86767701560 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86948419463 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87479992699 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87479992699 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87479992699 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85239636768 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85431805661 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.700 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85431805661 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85431805661 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87134710019 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest 
got from outstandingChanges is: 90949167390 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90949167390 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90949167390 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90056977029 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91134917558 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.701 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91134917558 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91134917558 17:18:39.702 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91590635527 17:18:39.702 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92234705414 17:18:39.704 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 17:18:39.705 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 28, Digest in log and actual tree: 73582181286 17:18:39.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 17:18:39.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 17:18:39.705 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 29, Digest in log and actual tree: 73713554431 17:18:39.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 17:18:39.705 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 88,14 replyHeader:: 88,40,0 request:: org.apache.zookeeper.MultiOperationRecord@324db770 response:: org.apache.zookeeper.MultiResponse@2c19b7b1 17:18:39.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 17:18:39.706 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.706 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2a, Digest in log and actual tree: 79365994840 17:18:39.706 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 89,14 replyHeader:: 89,41,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78d response:: org.apache.zookeeper.MultiResponse@2c19b7ce 17:18:39.706 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 17:18:39.706 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 90,14 replyHeader:: 90,42,0 request:: org.apache.zookeeper.MultiOperationRecord@324db773 response:: org.apache.zookeeper.MultiResponse@2c19b7b4 17:18:39.706 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92234705414 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92234705414 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90063161505 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91074833800 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.707 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 17:18:39.708 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.708 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2b, Digest in log and actual tree: 80960239534 17:18:39.708 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 17:18:39.708 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 17:18:39.708 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.708 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2c, Digest in log and actual tree: 86767701560 17:18:39.708 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 91,14 replyHeader:: 91,43,0 request:: org.apache.zookeeper.MultiOperationRecord@324db792 response:: org.apache.zookeeper.MultiResponse@2c19b7d3 17:18:39.707 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.708 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91074833800 17:18:39.708 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.708 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.708 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.708 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.708 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.708 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91074833800 17:18:39.708 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 93952319055 17:18:39.708 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98100184928 17:18:39.709 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.708 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 17:18:39.709 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 17:18:39.709 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 92,14 replyHeader:: 92,44,0 request:: org.apache.zookeeper.MultiOperationRecord@324db794 response:: org.apache.zookeeper.MultiResponse@2c19b7d5 17:18:39.709 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.709 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2d, Digest in log and actual tree: 87479992699 17:18:39.709 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 17:18:39.709 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 17:18:39.709 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 93,14 replyHeader:: 93,45,0 request:: org.apache.zookeeper.MultiOperationRecord@324db795 response:: org.apache.zookeeper.MultiResponse@2c19b7d6 17:18:39.709 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.709 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.709 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.709 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2e, Digest in log and actual tree: 
85431805661 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2f, Digest in log and actual tree: 90949167390 17:18:39.710 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 94,14 replyHeader:: 94,46,0 request:: org.apache.zookeeper.MultiOperationRecord@324db752 response:: org.apache.zookeeper.MultiResponse@2c19b793 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 30, Digest in log and actual tree: 91134917558 17:18:39.710 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 95,14 replyHeader:: 95,47,0 request:: org.apache.zookeeper.MultiOperationRecord@940352de response:: org.apache.zookeeper.MultiResponse@8dcf531f 17:18:39.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 17:18:39.711 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 17:18:39.711 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.711 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 96,14 replyHeader:: 96,48,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76f response:: org.apache.zookeeper.MultiResponse@2c19b7b0 17:18:39.711 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 31, Digest in log and actual tree: 92234705414 17:18:39.711 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 17:18:39.711 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 97,14 replyHeader:: 97,49,0 request:: org.apache.zookeeper.MultiOperationRecord@940352da response:: org.apache.zookeeper.MultiResponse@8dcf531b 17:18:39.711 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98100184928 17:18:39.711 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.711 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.711 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.711 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.711 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98100184928 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95560300438 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96977572446 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96977572446 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96977572446 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95953491629 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96718073227 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96718073227 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.712 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96718073227 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98921436264 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100553209468 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100553209468 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100553209468 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100829696396 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103230542236 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.713 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103230542236 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.713 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103230542236 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103448784576 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107511900108 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107511900108 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107511900108 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107426311579 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 110919047890 17:18:39.714 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.714 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 110919047890 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 110919047890 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108701661167 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111817746855 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111817746855 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111817746855 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - 
Digest got from outstandingChanges is: 115215071980 17:18:39.715 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116942224712 17:18:39.715 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 17:18:39.716 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.716 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 32, Digest in log and actual tree: 91074833800 17:18:39.716 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 17:18:39.716 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 17:18:39.716 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.716 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 33, Digest in log and actual tree: 98100184928 17:18:39.716 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 17:18:39.716 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 98,14 replyHeader:: 98,50,0 request:: org.apache.zookeeper.MultiOperationRecord@324db775 response:: org.apache.zookeeper.MultiResponse@2c19b7b6 17:18:39.716 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 99,14 replyHeader:: 99,51,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dd response:: org.apache.zookeeper.MultiResponse@8dcf531e 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116942224712 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116942224712 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 113341107438 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116645028930 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116645028930 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116645028930 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117236062739 17:18:39.717 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117404723846 17:18:39.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 17:18:39.718 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 34, Digest in log and actual tree: 96977572446 17:18:39.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 17:18:39.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 35, Digest in log and actual tree: 96718073227 17:18:39.719 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 100,14 replyHeader:: 
100,52,0 request:: org.apache.zookeeper.MultiOperationRecord@940352df response:: org.apache.zookeeper.MultiResponse@8dcf5320 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 36, Digest in log and actual tree: 100553209468 17:18:39.719 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 101,14 replyHeader:: 101,53,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b2 response:: org.apache.zookeeper.MultiResponse@2c19b7f3 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 37, Digest in log and actual tree: 103230542236 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 17:18:39.719 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 102,14 replyHeader:: 102,54,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ad response:: org.apache.zookeeper.MultiResponse@2c19b7ee 17:18:39.719 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 38, Digest in log and actual tree: 107511900108 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 39, Digest in log and actual tree: 110919047890 17:18:39.720 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 103,14 replyHeader:: 103,55,0 request:: org.apache.zookeeper.MultiOperationRecord@324db790 response:: org.apache.zookeeper.MultiResponse@2c19b7d1 17:18:39.720 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 17:18:39.720 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 104,14 replyHeader:: 104,56,0 request:: org.apache.zookeeper.MultiOperationRecord@324db771 response:: org.apache.zookeeper.MultiResponse@2c19b7b2 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3a, Digest in log and actual tree: 111817746855 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 17:18:39.720 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 105,14 replyHeader:: 105,57,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b5 response:: org.apache.zookeeper.MultiResponse@2c19b7f6 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.720 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 106,14 replyHeader:: 106,58,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b3 response:: org.apache.zookeeper.MultiResponse@2c19b7f4 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3b, Digest in log and actual tree: 116942224712 17:18:39.720 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.721 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 107,14 replyHeader:: 107,59,0 request:: org.apache.zookeeper.MultiOperationRecord@324db755 response:: org.apache.zookeeper.MultiResponse@2c19b796 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117404723846 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 
17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117404723846 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119653994465 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 121942269416 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 121942269416 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.721 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 121942269416 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119148846034 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120444844620 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] 
DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120444844620 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120444844620 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123661774462 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124245409450 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124245409450 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124245409450 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124734429691 17:18:39.722 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128138227077 17:18:39.722 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 17:18:39.723 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for 
Zxid: 3c, Digest in log and actual tree: 116645028930 17:18:39.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 17:18:39.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 17:18:39.723 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3d, Digest in log and actual tree: 117404723846 17:18:39.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 17:18:39.723 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 108,14 replyHeader:: 108,60,0 request:: org.apache.zookeeper.MultiOperationRecord@324db776 response:: org.apache.zookeeper.MultiResponse@2c19b7b7 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.723 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 109,14 replyHeader:: 109,61,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78e response:: org.apache.zookeeper.MultiResponse@2c19b7cf 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128138227077 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.723 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128138227077 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125941895008 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129599197790 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.724 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129599197790 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129599197790 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130345591077 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132011870988 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132011870988 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132011870988 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131602701106 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134539972218 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.724 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134539972218 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134539972218 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 133549701577 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 133613752696 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 133613752696 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 133613752696 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135775054613 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137518163332 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137518163332 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137518163332 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135910101639 17:18:39.725 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137568117390 17:18:39.725 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 17:18:39.725 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3e, Digest in log and actual tree: 121942269416 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3f, Digest in log and actual tree: 120444844620 17:18:39.726 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 110,14 replyHeader:: 110,62,0 request:: org.apache.zookeeper.MultiOperationRecord@324db793 
response:: org.apache.zookeeper.MultiResponse@2c19b7d4 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 40, Digest in log and actual tree: 124245409450 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 17:18:39.726 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 111,14 replyHeader:: 111,63,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ae response:: org.apache.zookeeper.MultiResponse@2c19b7ef 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 41, Digest in log and actual tree: 128138227077 17:18:39.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 17:18:39.727 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 17:18:39.727 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.727 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 42, Digest in log and actual tree: 129599197790 17:18:39.727 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 43, Digest in log and actual tree: 132011870988 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 44, Digest in log and actual tree: 134539972218 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 45, Digest in log and actual tree: 133613752696 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 46, Digest in log and actual tree: 137518163332 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 17:18:39.728 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 47, Digest in log and actual tree: 137568117390 17:18:39.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 17:18:39.732 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.732 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.732 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 112,14 replyHeader:: 112,64,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d9 response:: org.apache.zookeeper.MultiResponse@8dcf531a 17:18:39.732 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.732 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.732 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 137568117390 17:18:39.732 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.732 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 113,14 replyHeader:: 113,65,0 request:: org.apache.zookeeper.MultiOperationRecord@324db757 response:: org.apache.zookeeper.MultiResponse@2c19b798 17:18:39.732 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.732 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.732 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.732 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 114,14 replyHeader:: 114,66,0 request:: org.apache.zookeeper.MultiOperationRecord@324db754 response:: org.apache.zookeeper.MultiResponse@2c19b795 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 137568117390 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137249474474 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141514895564 17:18:39.733 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 115,14 replyHeader:: 115,67,0 request:: org.apache.zookeeper.MultiOperationRecord@324db772 response:: org.apache.zookeeper.MultiResponse@2c19b7b3 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.733 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 116,14 replyHeader:: 116,68,0 request:: org.apache.zookeeper.MultiOperationRecord@324db756 response:: org.apache.zookeeper.MultiResponse@2c19b797 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141514895564 17:18:39.733 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 117,14 replyHeader:: 117,69,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b4 response:: org.apache.zookeeper.MultiResponse@2c19b7f5 17:18:39.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 17:18:39.733 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 118,14 replyHeader:: 118,70,0 request:: org.apache.zookeeper.MultiOperationRecord@324db758 response:: org.apache.zookeeper.MultiResponse@2c19b799 17:18:39.733 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.733 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 17:18:39.734 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 119,14 replyHeader:: 119,71,0 request:: org.apache.zookeeper.MultiOperationRecord@324db750 response:: org.apache.zookeeper.MultiResponse@2c19b791 17:18:39.734 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 48, Digest in log and actual tree: 141514895564 17:18:39.734 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 17:18:39.734 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.734 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.734 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.734 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.734 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 120,14 replyHeader:: 120,72,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d8 response:: org.apache.zookeeper.MultiResponse@8dcf5319 17:18:39.734 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 141514895564 17:18:39.734 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141969319421 17:18:39.734 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143214873158 17:18:39.734 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143214873158 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.735 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143214873158 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145340313577 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146612217463 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146612217463 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.735 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146612217463 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143046858550 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145102036749 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145102036749 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145102036749 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149005078371 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149214890776 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 17:18:39.737 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.737 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 49, Digest in log and actual tree: 143214873158 17:18:39.737 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 17:18:39.736 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149214890776 17:18:39.737 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 121,14 replyHeader:: 121,73,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7af response:: org.apache.zookeeper.MultiResponse@2c19b7f0 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149214890776 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148187910215 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149180628288 
17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.737 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149180628288 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149180628288 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147090545445 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148799675225 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148799675225 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148799675225 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151425259827 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152966113974 17:18:39.738 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.739 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4a, Digest in log and actual tree: 146612217463 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 17:18:39.739 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 122,14 replyHeader:: 122,74,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dc response:: org.apache.zookeeper.MultiResponse@8dcf531d 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4b, Digest in log and actual tree: 145102036749 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 17:18:39.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 17:18:39.740 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.740 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4c, Digest in log and actual tree: 149214890776 17:18:39.740 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 123,14 replyHeader:: 123,75,0 request:: org.apache.zookeeper.MultiOperationRecord@324db753 response:: org.apache.zookeeper.MultiResponse@2c19b794 17:18:39.740 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 17:18:39.739 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.740 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 124,14 replyHeader:: 124,76,0 request:: 
org.apache.zookeeper.MultiOperationRecord@324db76e response:: org.apache.zookeeper.MultiResponse@2c19b7af 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152966113974 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152966113974 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152208927821 17:18:39.740 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152793959135 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4d, Digest in log and actual tree: 149180628288 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x7e zxid:0x4e txntype:14 reqpath:n/a 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.741 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 125,14 replyHeader:: 125,77,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d6 response:: org.apache.zookeeper.MultiResponse@8dcf5317 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4e, Digest in log and actual tree: 148799675225 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x7e zxid:0x4e txntype:14 reqpath:n/a 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x7f zxid:0x4f txntype:14 reqpath:n/a 17:18:39.741 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.742 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4f, Digest in log and actual tree: 152966113974 17:18:39.742 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null 
serverPath:null finished:false header:: 126,14 replyHeader:: 126,78,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b0 response:: org.apache.zookeeper.MultiResponse@2c19b7f1 17:18:39.742 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x7f zxid:0x4f txntype:14 reqpath:n/a 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.742 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 127,14 replyHeader:: 127,79,0 request:: org.apache.zookeeper.MultiOperationRecord@324db796 response:: org.apache.zookeeper.MultiResponse@2c19b7d7 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152793959135 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152793959135 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152041456400 17:18:39.742 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155024133472 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x80 zxid:0x50 txntype:14 reqpath:n/a 17:18:39.743 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 50, Digest in log and actual tree: 152793959135 17:18:39.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x80 zxid:0x50 txntype:14 reqpath:n/a 17:18:39.743 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) 
for sending metadata request 17:18:39.743 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.743 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:39.743 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 128,14 replyHeader:: 128,80,0 request:: org.apache.zookeeper.MultiOperationRecord@324db751 response:: org.apache.zookeeper.MultiResponse@2c19b792 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155024133472 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.743 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.743 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:39.743 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155024133472 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153003388987 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156458927587 17:18:39.744 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37648 on /127.0.0.1:38099 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.744 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:37648 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.744 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x81 zxid:0x51 txntype:14 reqpath:n/a 17:18:39.744 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.744 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 51, Digest in log and actual tree: 155024133472 17:18:39.744 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x81 zxid:0x51 txntype:14 reqpath:n/a 17:18:39.744 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156458927587 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.744 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.744 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Completed connection to node 1. Fetching API versions. 
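At this point the test's consumer (clientId mso-123456-consumer-..., groupId mso-group) has connected to the embedded broker on the ephemeral port 38099, and the SASL PLAIN handshake traced in the surrounding entries (SEND_APIVERSIONS_REQUEST through COMPLETE) is driven entirely by client configuration. A configuration that would produce this exchange looks roughly like the sketch below; the port is specific to this run, and the JAAS username/password are placeholders, since the log only shows the authenticated principal (User:admin), not the credentials the test actually supplies:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PairwiseConsumerSketch {
        static KafkaConsumer<String, String> newConsumer() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // ephemeral broker port from this run
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");      // the log shows a UUID suffix appended
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // SASL_PLAINTEXT with the PLAIN mechanism, as negotiated in the handshake entries around this point.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"<placeholder>\";"); // placeholder credentials
            return new KafkaConsumer<>(props);
        }
    }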
17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.744 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 129,14 replyHeader:: 129,81,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b1 response:: org.apache.zookeeper.MultiResponse@2c19b7f2 17:18:39.744 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.745 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156458927587 17:18:39.745 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160258897574 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162358646029 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162358646029 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x82 zxid:0x52 txntype:14 reqpath:n/a 17:18:39.745 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 52, Digest in log and actual tree: 156458927587 17:18:39.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x82 zxid:0x52 txntype:14 reqpath:n/a 17:18:39.745 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:39.745 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:39.745 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.745 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:39.745 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.745 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 130,14 replyHeader:: 130,82,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d7 response:: org.apache.zookeeper.MultiResponse@8dcf5318 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162358646029 17:18:39.746 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 158762255703 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159415641684 17:18:39.746 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.746 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to INITIAL 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159415641684 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x83 zxid:0x53 txntype:14 reqpath:n/a 17:18:39.746 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL 
client state to INTERMEDIATE 17:18:39.746 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 53, Digest in log and actual tree: 162358646029 17:18:39.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x83 zxid:0x53 txntype:14 reqpath:n/a 17:18:39.746 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.746 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 131,14 replyHeader:: 131,83,0 request:: org.apache.zookeeper.MultiOperationRecord@940352db response:: org.apache.zookeeper.MultiResponse@8dcf531c 17:18:39.746 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.747 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159415641684 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159599783555 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161072117097 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.747 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:39.747 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to COMPLETE 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.747 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 17:18:39.747 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161072117097 17:18:39.747 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.747 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating API versions fetch from node 1. 17:18:39.747 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x84 zxid:0x54 txntype:14 reqpath:n/a 17:18:39.747 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:39.747 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.747 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 54, Digest in log and actual tree: 159415641684 17:18:39.747 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x84 zxid:0x54 txntype:14 reqpath:n/a 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161072117097 17:18:39.747 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163123761158 17:18:39.747 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 132,14 replyHeader:: 132,84,0 request:: org.apache.zookeeper.MultiOperationRecord@324db774 response:: org.apache.zookeeper.MultiResponse@2c19b7b5 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167196837864 17:18:39.748 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167196837864 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x85 zxid:0x55 txntype:14 reqpath:n/a 17:18:39.748 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 55, Digest in log and actual tree: 161072117097 17:18:39.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x85 zxid:0x55 txntype:14 reqpath:n/a 17:18:39.749 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:39.748 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167196837864 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167490890584 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170443402849 17:18:39.749 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 133,14 replyHeader:: 133,85,0 request:: org.apache.zookeeper.MultiOperationRecord@324db777 response:: org.apache.zookeeper.MultiResponse@2c19b7b8 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170443402849 17:18:39.749 [ProcessThread(sid:0 cport:46233):] 
DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.749 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170443402849 17:18:39.749 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.019,"requestQueueTimeMs":0.186,"localTimeMs":0.516,"remoteTimeMs":0.0,"
throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.185,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:39.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x86 zxid:0x56 txntype:14 reqpath:n/a 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 169721856837 17:18:39.749 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: 1 rack: null) 17:18:39.749 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=4) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:39.749 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 56, Digest in log and actual tree: 167196837864 17:18:39.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x86 zxid:0x56 txntype:14 reqpath:n/a 17:18:39.749 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173292592965 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173292592965 17:18:39.750 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 134,14 replyHeader:: 134,86,0 request:: org.apache.zookeeper.MultiOperationRecord@324db791 response:: org.apache.zookeeper.MultiResponse@2c19b7d2 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.750 [ProcessThread(sid:0 cport:46233):] 
DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173292592965 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174146791668 17:18:39.750 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174554052133 17:18:39.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x87 zxid:0x57 txntype:14 reqpath:n/a 17:18:39.750 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 57, Digest in log and actual tree: 170443402849 17:18:39.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x87 zxid:0x57 txntype:14 reqpath:n/a 17:18:39.751 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 135,14 replyHeader:: 135,87,0 request:: org.apache.zookeeper.MultiOperationRecord@324db74f response:: org.apache.zookeeper.MultiResponse@2c19b790 17:18:39.751 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=4): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:39.751 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:18:39.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x88 zxid:0x58 txntype:14 reqpath:n/a 17:18:39.751 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":4,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.415,"requestQueueTimeMs":0.12,"localTimeMs":1.095,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.058,"sendTimeMs":0.14,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.751 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 3 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:39.751 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 58, Digest in log and actual tree: 173292592965 17:18:39.751 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: 1 rack: null) 17:18:39.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x88 zxid:0x58 txntype:14 reqpath:n/a 17:18:39.752 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=5) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:39.752 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 136,14 replyHeader:: 136,88,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78f response:: org.apache.zookeeper.MultiResponse@2c19b7d0 17:18:39.753 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x89 zxid:0x59 txntype:14 reqpath:n/a 17:18:39.753 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.753 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 59, Digest in log and actual tree: 174554052133 
17:18:39.753 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x89 zxid:0x59 txntype:14 reqpath:n/a 17:18:39.753 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.753 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:39.753 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:39.753 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 137,14 replyHeader:: 137,89,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ac response:: org.apache.zookeeper.MultiResponse@2c19b7ed 17:18:39.753 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 138,3 replyHeader:: 138,89,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:18:39.754 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.754 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.754 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.754 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 139,3 replyHeader:: 139,89,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1762449519670,1762449519670,0,1,0,0,548,1,39} 17:18:39.755 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
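The TopicExistsException above is benign: the broker tries to auto-create __consumer_offsets while handling FIND_COORDINATOR, but an earlier attempt already created it (the exists check on /brokers/topics/__consumer_offsets returns a stat). Code that pre-creates topics for a test usually treats this error as success; a hedged AdminClient sketch, with the topic settings taken from the log and everything else assumed:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public class EnsureOffsetsTopicSketch {
        static void ensureOffsetsTopic(Properties adminProps) throws Exception {
            try (AdminClient admin = AdminClient.create(adminProps)) {
                NewTopic offsets = new NewTopic("__consumer_offsets", 50, (short) 1)
                    .configs(Map.of("cleanup.policy", "compact",
                                    "segment.bytes", "104857600",
                                    "compression.type", "producer"));
                try {
                    admin.createTopics(Collections.singletonList(offsets)).all().get();
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e; // anything other than "already exists" is a real failure
                    }
                    // Topic already exists -- exactly the situation logged above; ignore.
                }
            }
        }
    }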
17:18:39.755 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:18:39.756 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=5): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:18:39.756 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449519756, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=5), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:18:39.756 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator lookup failed: 17:18:39.756 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
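[Editor's note] errorCode=15 in the FindCoordinator response is COORDINATOR_NOT_AVAILABLE: the group coordinator cannot be resolved until __consumer_offsets is fully created, so the consumer refreshes metadata and retries on the next poll, exactly as the ConsumerCoordinator lines above show. The tiny sketch below, assuming only the kafka-clients jar on the classpath (Errors is a public but internal-facing helper), decodes that code to the same exception the client logs.

    import org.apache.kafka.common.protocol.Errors;

    public class DecodeCoordinatorError {
        public static void main(String[] args) {
            // errorCode=15 taken from the FindCoordinator response logged above.
            Errors error = Errors.forCode((short) 15);
            System.out.println(error);                                        // COORDINATOR_NOT_AVAILABLE
            System.out.println(error.exception().getClass().getSimpleName()); // CoordinatorNotAvailableException
            System.out.println(error.exception().getMessage());               // "The coordinator is not available."
        }
    }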
17:18:39.756 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":5,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":4.065,"requestQueueTimeMs":0.092,"localTimeMs":3.675,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.206,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 174554052133 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 174554052133 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178601334445 17:18:39.768 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178828066115 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178828066115 17:18:39.769 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178828066115 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 181273304507 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182181717810 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182181717810 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182181717810 17:18:39.769 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180242913533 17:18:39.770 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184504868711 17:18:39.770 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 17:18:39.770 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.770 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5a, Digest in log and actual tree: 178828066115 17:18:39.770 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002daa60000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184504868711 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184504868711 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 181880294776 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 185850563596 17:18:39.771 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 140,14 replyHeader:: 140,90,0 request:: org.apache.zookeeper.MultiOperationRecord@d54f07a9 response:: org.apache.zookeeper.MultiResponse@ef9185b3 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.771 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 17:18:39.771 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 185850563596 17:18:39.772 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.772 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.772 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 17:18:39.772 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5b, Digest in log and actual tree: 182181717810 17:18:39.772 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 17:18:39.772 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.772 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.772 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.772 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 185850563596 17:18:39.772 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 17:18:39.772 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184000818324 17:18:39.772 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184028510236 17:18:39.773 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.773 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5c, Digest in log and actual tree: 184504868711 17:18:39.773 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 17:18:39.773 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.773 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.773 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.773 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.773 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184028510236 17:18:39.773 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.773 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184028510236 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got 
from outstandingChanges is: 187480233892 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187713402538 17:18:39.774 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 141,14 replyHeader:: 141,91,0 request:: org.apache.zookeeper.MultiOperationRecord@d363be06 response:: org.apache.zookeeper.MultiResponse@eda63c10 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187713402538 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187713402538 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 191497842720 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 192940421576 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 192940421576 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.774 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Permission requested: 4 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 192940421576 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189212897346 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189834795983 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189834795983 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.775 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5d, Digest in log and actual tree: 185850563596 17:18:39.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189834795983 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 190753067873 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193705082840 17:18:39.775 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 
17:18:39.775 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 142,14 replyHeader:: 142,92,0 request:: org.apache.zookeeper.MultiOperationRecord@7401b96c response:: org.apache.zookeeper.MultiResponse@8e443776 17:18:39.776 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 143,14 replyHeader:: 143,93,0 request:: org.apache.zookeeper.MultiOperationRecord@dbe2e64b response:: org.apache.zookeeper.MultiResponse@f6256455 17:18:39.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.776 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5e, Digest in log and actual tree: 184028510236 17:18:39.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193705082840 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.776 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193705082840 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193343628774 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 194856754645 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 194856754645 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.777 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 144,14 replyHeader:: 144,94,0 request:: org.apache.zookeeper.MultiOperationRecord@45af5ccd response:: org.apache.zookeeper.MultiResponse@5ff1dad7 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 194856754645 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195533579416 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198842105965 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198842105965 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198842105965 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197644139960 17:18:39.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x91 zxid:0x5f txntype:14 
reqpath:n/a 17:18:39.777 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198224050116 17:18:39.778 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5f, Digest in log and actual tree: 187713402538 17:18:39.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 17:18:39.778 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.778 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.778 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.778 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 17:18:39.778 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198224050116 17:18:39.778 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 145,14 replyHeader:: 145,95,0 request:: org.apache.zookeeper.MultiOperationRecord@7a95980e response:: org.apache.zookeeper.MultiResponse@94d81618 17:18:39.778 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.779 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.779 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 60, Digest in log and actual tree: 192940421576 17:18:39.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 17:18:39.779 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.779 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.779 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 17:18:39.779 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 146,14 replyHeader:: 146,96,0 request:: org.apache.zookeeper.MultiOperationRecord@a254160b response:: org.apache.zookeeper.MultiResponse@bc969415 17:18:39.780 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 
17:18:39.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 61, Digest in log and actual tree: 189834795983 17:18:39.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 17:18:39.780 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198224050116 17:18:39.780 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198463660086 17:18:39.780 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 147,14 replyHeader:: 147,97,0 request:: org.apache.zookeeper.MultiOperationRecord@7c11d897 response:: org.apache.zookeeper.MultiResponse@965456a1 17:18:39.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 17:18:39.780 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 62, Digest in log and actual tree: 193705082840 17:18:39.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 17:18:39.780 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199151853209 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199151853209 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.781 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 148,14 replyHeader:: 148,98,0 request:: org.apache.zookeeper.MultiOperationRecord@a068cc68 response:: org.apache.zookeeper.MultiResponse@baab4a72 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199151853209 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198842963687 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202401945842 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202401945842 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.781 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 17:18:39.782 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 63, Digest in log and actual tree: 194856754645 17:18:39.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 17:18:39.782 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202401945842 17:18:39.782 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204612990725 17:18:39.782 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204725091345 17:18:39.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 17:18:39.782 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 149,14 replyHeader:: 149,99,0 request:: org.apache.zookeeper.MultiOperationRecord@a878eb93 response:: org.apache.zookeeper.MultiResponse@c2bb699d 17:18:39.782 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.782 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 64, Digest in log and actual tree: 198842105965 17:18:39.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.783 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 65, Digest in log and actual tree: 198224050116 17:18:39.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204725091345 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.783 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 150,14 replyHeader:: 150,100,0 request:: org.apache.zookeeper.MultiOperationRecord@ddce2fee response:: org.apache.zookeeper.MultiResponse@f810adf8 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204725091345 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202514308644 17:18:39.783 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205339353971 17:18:39.783 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 151,14 replyHeader:: 151,101,0 request:: 
org.apache.zookeeper.MultiOperationRecord@472b9d56 response:: org.apache.zookeeper.MultiResponse@616e1b60 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.784 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.784 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 66, Digest in log and actual tree: 199151853209 17:18:39.784 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205339353971 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.784 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 152,14 replyHeader:: 152,102,0 request:: org.apache.zookeeper.MultiOperationRecord@b0f813d8 response:: org.apache.zookeeper.MultiResponse@cb3a91e2 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205339353971 17:18:39.784 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204574395237 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205444062966 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205444062966 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.785 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 67, Digest in log and actual tree: 202401945842 17:18:39.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205444062966 17:18:39.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 17:18:39.785 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205819864708 17:18:39.785 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 153,14 replyHeader:: 153,103,0 request:: org.apache.zookeeper.MultiOperationRecord@78aa4e6b response:: org.apache.zookeeper.MultiResponse@92eccc75 17:18:39.786 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205850612912 17:18:39.786 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 68, Digest in log and actual tree: 204725091345 17:18:39.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 17:18:39.786 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.786 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.786 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 17:18:39.786 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.787 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - 
Digests are matching for Zxid: 69, Digest in log and actual tree: 205339353971 17:18:39.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205850612912 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205850612912 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206514856429 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209329910327 17:18:39.787 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 154,14 replyHeader:: 154,104,0 request:: org.apache.zookeeper.MultiOperationRecord@702b2626 response:: org.apache.zookeeper.MultiResponse@8a6da430 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.787 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 155,14 replyHeader:: 155,105,0 request:: org.apache.zookeeper.MultiOperationRecord@72166fc9 response:: org.apache.zookeeper.MultiResponse@8c58edd3 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209329910327 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.787 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.788 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.788 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 17:18:39.788 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209329910327 17:18:39.788 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208644701420 17:18:39.788 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6a, Digest in log and actual tree: 205444062966 17:18:39.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 17:18:39.788 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208686686967 17:18:39.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 17:18:39.788 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.788 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 156,14 replyHeader:: 156,106,0 request:: org.apache.zookeeper.MultiOperationRecord@a3542ea response:: org.apache.zookeeper.MultiResponse@2477c0f4 17:18:39.789 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6b, Digest in log and actual tree: 205850612912 17:18:39.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208686686967 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 
'ip,'127.0.0.1 ] 17:18:39.789 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 157,14 replyHeader:: 157,107,0 request:: org.apache.zookeeper.MultiOperationRecord@175d002e response:: org.apache.zookeeper.MultiResponse@319f7e38 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208686686967 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209448491593 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213658231726 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213658231726 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.789 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213658231726 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214508149948 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214989576037 17:18:39.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x9e zxid:0x6c txntype:14 reqpath:n/a 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.790 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6c, Digest in log and actual tree: 209329910327 17:18:39.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x9e zxid:0x6c txntype:14 reqpath:n/a 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214989576037 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.790 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 158,14 replyHeader:: 158,108,0 request:: org.apache.zookeeper.MultiOperationRecord@ad9089ac response:: org.apache.zookeeper.MultiResponse@c7d307b6 17:18:39.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0x9f zxid:0x6d txntype:14 reqpath:n/a 17:18:39.790 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.791 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6d, Digest in log and actual tree: 208686686967 17:18:39.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0x9f zxid:0x6d txntype:14 reqpath:n/a 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214989576037 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214548249063 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217963966637 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.791 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 159,14 replyHeader:: 159,109,0 request:: org.apache.zookeeper.MultiOperationRecord@4106c7ce response:: org.apache.zookeeper.MultiResponse@5b4945d8 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217963966637 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.791 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217963966637 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219541926695 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 222344619675 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 222344619675 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 222344619675 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223967877760 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226452412755 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226452412755 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.792 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.792 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa0 zxid:0x6e txntype:14 reqpath:n/a 17:18:39.793 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.793 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.793 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.793 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6e, Digest in log and actual tree: 213658231726 17:18:39.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa0 zxid:0x6e txntype:14 reqpath:n/a 17:18:39.793 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226452412755 17:18:39.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa1 zxid:0x6f txntype:14 reqpath:n/a 17:18:39.793 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 224854445440 17:18:39.793 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 160,14 replyHeader:: 160,110,0 request:: org.apache.zookeeper.MultiOperationRecord@12b46b2f response:: org.apache.zookeeper.MultiResponse@2cf6e939 17:18:39.793 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6f, Digest in log and actual tree: 214989576037 17:18:39.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa1 zxid:0x6f txntype:14 reqpath:n/a 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228696138358 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.794 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228696138358 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228696138358 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 225423548404 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228378035721 17:18:39.794 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 161,14 replyHeader:: 161,111,0 request:: org.apache.zookeeper.MultiOperationRecord@849f947 response:: org.apache.zookeeper.MultiResponse@228c7751 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228378035721 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa2 zxid:0x70 txntype:14 reqpath:n/a 17:18:39.794 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.795 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.795 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.795 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 70, Digest in log and actual tree: 217963966637 17:18:39.795 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa2 zxid:0x70 txntype:14 reqpath:n/a 17:18:39.795 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228378035721 17:18:39.795 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa3 zxid:0x71 txntype:14 reqpath:n/a 17:18:39.795 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 229601707911 17:18:39.795 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231181868544 17:18:39.795 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.795 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.795 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 162,14 replyHeader:: 162,112,0 request:: org.apache.zookeeper.MultiOperationRecord@10c9218c response:: org.apache.zookeeper.MultiResponse@2b0b9f96 17:18:39.795 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.795 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.796 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 71, Digest in log and actual tree: 222344619675 17:18:39.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa3 zxid:0x71 txntype:14 reqpath:n/a 17:18:39.796 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231181868544 17:18:39.796 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa4 zxid:0x72 txntype:14 reqpath:n/a 17:18:39.797 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 72, Digest in log and actual tree: 226452412755 17:18:39.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa4 zxid:0x72 txntype:14 reqpath:n/a 17:18:39.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa5 zxid:0x73 txntype:14 reqpath:n/a 17:18:39.797 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 73, Digest in log and actual tree: 
228696138358 17:18:39.797 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa5 zxid:0x73 txntype:14 reqpath:n/a 17:18:39.796 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.797 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.797 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.797 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.797 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 163,14 replyHeader:: 163,113,0 request:: org.apache.zookeeper.MultiOperationRecord@a5116167 response:: org.apache.zookeeper.MultiResponse@bf53df71 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231181868544 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228484731437 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232720570574 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.798 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 164,14 replyHeader:: 164,114,0 request:: org.apache.zookeeper.MultiOperationRecord@7392b052 response:: org.apache.zookeeper.MultiResponse@8dd52e5c 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232720570574 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.798 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 165,14 replyHeader:: 
165,115,0 request:: org.apache.zookeeper.MultiOperationRecord@aad33e50 response:: org.apache.zookeeper.MultiResponse@c515bc5a 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232720570574 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 235384014001 17:18:39.798 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236187578125 17:18:39.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa6 zxid:0x74 txntype:14 reqpath:n/a 17:18:39.798 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 74, Digest in log and actual tree: 228378035721 17:18:39.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa6 zxid:0x74 txntype:14 reqpath:n/a 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa7 zxid:0x75 txntype:14 reqpath:n/a 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.799 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 75, Digest in log and actual tree: 231181868544 17:18:39.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa7 zxid:0x75 txntype:14 reqpath:n/a 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236187578125 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236187578125 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236396087857 17:18:39.799 
[main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 166,14 replyHeader:: 166,116,0 request:: org.apache.zookeeper.MultiOperationRecord@c208c8d response:: org.apache.zookeeper.MultiResponse@26630a97 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236869964948 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.799 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 167,14 replyHeader:: 167,117,0 request:: org.apache.zookeeper.MultiOperationRecord@3f1b7e2b response:: org.apache.zookeeper.MultiResponse@595dfc35 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236869964948 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.799 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa8 zxid:0x76 txntype:14 reqpath:n/a 17:18:39.800 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236869964948 17:18:39.800 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 76, Digest in log and actual tree: 232720570574 17:18:39.800 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 237114442888 17:18:39.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa8 zxid:0x76 txntype:14 reqpath:n/a 17:18:39.800 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238968445537 17:18:39.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xa9 zxid:0x77 txntype:14 reqpath:n/a 17:18:39.800 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.800 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 168,14 replyHeader:: 168,118,0 request:: org.apache.zookeeper.MultiOperationRecord@75ed030f response:: org.apache.zookeeper.MultiResponse@902f8119 17:18:39.800 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 77, Digest in log and actual tree: 236187578125 17:18:39.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xa9 zxid:0x77 txntype:14 reqpath:n/a 17:18:39.800 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.800 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238968445537 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238968445537 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236305133150 17:18:39.801 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 169,14 replyHeader:: 169,119,0 request:: org.apache.zookeeper.MultiOperationRecord@e276c4ed response:: org.apache.zookeeper.MultiResponse@fcb942f7 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238901371039 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.801 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238901371039 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238901371039 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241606765770 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242471294058 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.801 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242471294058 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.802 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xaa zxid:0x78 txntype:14 reqpath:n/a 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.802 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.802 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 78, Digest in log and actual tree: 236869964948 17:18:39.802 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xaa zxid:0x78 txntype:14 reqpath:n/a 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242471294058 17:18:39.802 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xab zxid:0x79 txntype:14 reqpath:n/a 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243428659688 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245158073962 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.802 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 170,14 replyHeader:: 170,120,0 request:: org.apache.zookeeper.MultiOperationRecord@dfb97991 response:: org.apache.zookeeper.MultiResponse@f9fbf79b 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.802 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.802 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.802 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 79, Digest in log and actual tree: 238968445537 17:18:39.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xab zxid:0x79 txntype:14 reqpath:n/a 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245158073962 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.803 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 171,14 replyHeader:: 171,121,0 request:: org.apache.zookeeper.MultiOperationRecord@38879f89 response:: org.apache.zookeeper.MultiResponse@52ca1d93 17:18:39.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xac zxid:0x7a txntype:14 reqpath:n/a 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.803 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7a, Digest in log and actual tree: 238901371039 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 
245158073962 17:18:39.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xac zxid:0x7a txntype:14 reqpath:n/a 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246146896872 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249956301181 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.803 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249956301181 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.804 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 172,14 replyHeader:: 172,122,0 request:: org.apache.zookeeper.MultiOperationRecord@3eac7511 response:: org.apache.zookeeper.MultiResponse@58eef31b 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249956301181 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252661689680 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 254154397814 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 254154397814 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 254154397814 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255752234049 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 256370747739 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 256370747739 17:18:39.804 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 256370747739 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255276990711 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259323398508 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259323398508 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259323398508 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258229567056 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261305363914 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261305363914 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xad zxid:0x7b txntype:14 reqpath:n/a 17:18:39.805 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261305363914 17:18:39.806 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.806 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7b, Digest in log and actual tree: 242471294058 17:18:39.806 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xad zxid:0x7b txntype:14 reqpath:n/a 
17:18:39.806 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 263730134338 17:18:39.806 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266599476907 17:18:39.806 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xae zxid:0x7c txntype:14 reqpath:n/a 17:18:39.806 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.806 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 173,14 replyHeader:: 173,123,0 request:: org.apache.zookeeper.MultiOperationRecord@d9f79ca8 response:: org.apache.zookeeper.MultiResponse@f43a1ab2 17:18:39.806 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7c, Digest in log and actual tree: 245158073962 17:18:39.806 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xae zxid:0x7c txntype:14 reqpath:n/a 17:18:39.806 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.806 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.806 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xaf zxid:0x7d txntype:14 reqpath:n/a 17:18:39.806 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.806 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.806 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 174,14 replyHeader:: 174,124,0 request:: org.apache.zookeeper.MultiOperationRecord@12456215 response:: org.apache.zookeeper.MultiResponse@2c87e01f 17:18:39.807 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7d, Digest in log and actual tree: 249956301181 17:18:39.807 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xaf zxid:0x7d txntype:14 reqpath:n/a 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266599476907 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266599476907 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264161222435 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264979593094 17:18:39.807 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 175,14 replyHeader:: 175,125,0 request:: org.apache.zookeeper.MultiOperationRecord@d73a514c response:: org.apache.zookeeper.MultiResponse@f17ccf56 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264979593094 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.807 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264979593094 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 262364960109 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 265005885787 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 265005885787 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb0 zxid:0x7e txntype:14 reqpath:n/a 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.808 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7e, Digest in log and actual tree: 254154397814 17:18:39.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb0 zxid:0x7e txntype:14 reqpath:n/a 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 265005885787 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 265100804916 17:18:39.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb1 zxid:0x7f txntype:14 reqpath:n/a 17:18:39.808 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268484331001 17:18:39.808 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 176,14 replyHeader:: 176,126,0 request:: org.apache.zookeeper.MultiOperationRecord@6b829127 response:: org.apache.zookeeper.MultiResponse@85c50f31 17:18:39.809 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7f, Digest in log and actual tree: 256370747739 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb1 zxid:0x7f txntype:14 reqpath:n/a 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268484331001 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.809 [ProcessThread(sid:0 
cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268484331001 17:18:39.809 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 177,14 replyHeader:: 177,127,0 request:: org.apache.zookeeper.MultiOperationRecord@d4dffe8f response:: org.apache.zookeeper.MultiResponse@ef227c99 17:18:39.809 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270906078561 17:18:39.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb2 zxid:0x80 txntype:14 reqpath:n/a 17:18:39.809 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.809 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 80, Digest in log and actual tree: 259323398508 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272643057438 17:18:39.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb2 zxid:0x80 txntype:14 reqpath:n/a 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb3 zxid:0x81 txntype:14 reqpath:n/a 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.810 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 178,14 replyHeader:: 178,128,0 request:: org.apache.zookeeper.MultiOperationRecord@eddd7e9 response:: org.apache.zookeeper.MultiResponse@292055f3 17:18:39.810 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 81, Digest in log and actual tree: 261305363914 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272643057438 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000002daa60000 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb3 zxid:0x81 txntype:14 reqpath:n/a 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.810 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb4 zxid:0x82 txntype:14 reqpath:n/a 17:18:39.810 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 179,14 replyHeader:: 179,129,0 request:: org.apache.zookeeper.MultiOperationRecord@af7bd34f response:: org.apache.zookeeper.MultiResponse@c9be5159 17:18:39.811 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 82, Digest in log and actual tree: 266599476907 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272643057438 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 274500556470 17:18:39.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb4 zxid:0x82 txntype:14 reqpath:n/a 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 278032248985 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.811 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 180,14 replyHeader:: 180,130,0 request:: org.apache.zookeeper.MultiOperationRecord@6d6ddaca response:: org.apache.zookeeper.MultiResponse@87b058d4 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 278032248985 17:18:39.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb5 zxid:0x83 txntype:14 reqpath:n/a 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 
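The ZooKeeper DEBUG lines above (and continuing below) are the server side of the embedded Kafka broker writing its metadata under /brokers: each multi-op is ACL-checked before it is committed. "Permission requested: 1" and "Permission requested: 4" are the READ and CREATE permission bits, and "31,s{'world,'anyone}" is the all-permissions (31) world:anyone ACL, so every request from the SASL-authenticated zooclient session at 127.0.0.1 passes. A minimal, hypothetical ZooKeeper client sketch, not part of this build (path, address and timeout are assumptions), showing the kind of client calls that trigger exactly these server-side checks:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class ZkAclSketch {
    public static void main(String[] args) throws Exception {
        // Assumed connect string; the embedded server in this build listens on 127.0.0.1:46233.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
        // OPEN_ACL_UNSAFE leaves the "31,s{'world,'anyone}" ACL that the PrepRequestProcessor
        // evaluates above; creating a child is checked against the parent's CREATE bit (4).
        zk.create("/zk-acl-demo", "payload".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // Reading the node back triggers a "Permission requested: 1" (READ) check.
        byte[] data = zk.getData("/zk-acl-demo", false, null);
        System.out.println(new String(data));
        zk.close();
    }
}
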
17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.811 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 83, Digest in log and actual tree: 264979593094 17:18:39.811 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb5 zxid:0x83 txntype:14 reqpath:n/a 17:18:39.811 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 278032248985 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 277474588018 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 280524548471 17:18:39.812 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 181,14 replyHeader:: 181,131,0 request:: org.apache.zookeeper.MultiOperationRecord@43c4132a response:: org.apache.zookeeper.MultiResponse@5e069134 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 280524548471 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 280524548471 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 279932710590 17:18:39.812 
[ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283150099410 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283150099410 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.812 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283150099410 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 280744211562 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283050458660 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283050458660 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283050458660 17:18:39.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb6 zxid:0x84 txntype:14 reqpath:n/a 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 284958289564 17:18:39.813 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 84, Digest in log and actual tree: 265005885787 17:18:39.813 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 289147096850 17:18:39.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb6 zxid:0x84 txntype:14 reqpath:n/a 17:18:39.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb7 zxid:0x85 txntype:14 reqpath:n/a 17:18:39.814 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 182,14 replyHeader:: 182,132,0 request:: org.apache.zookeeper.MultiOperationRecord@9c639d0 response:: org.apache.zookeeper.MultiResponse@2408b7da 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 85, Digest in log and actual tree: 268484331001 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb7 zxid:0x85 txntype:14 reqpath:n/a 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb8 zxid:0x86 txntype:14 reqpath:n/a 17:18:39.814 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 183,14 replyHeader:: 183,133,0 request:: org.apache.zookeeper.MultiOperationRecord@dd5f26d4 response:: org.apache.zookeeper.MultiResponse@f7a1a4de 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 86, Digest in log and actual tree: 272643057438 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb8 zxid:0x86 txntype:14 reqpath:n/a 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xb9 zxid:0x87 txntype:14 reqpath:n/a 17:18:39.814 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 184,14 replyHeader:: 184,134,0 request:: org.apache.zookeeper.MultiOperationRecord@a8e7f4ad response:: org.apache.zookeeper.MultiResponse@c32a72b7 17:18:39.814 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.814 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 87, Digest in log and actual tree: 278032248985 17:18:39.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xb9 zxid:0x87 txntype:14 reqpath:n/a 17:18:39.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xba zxid:0x88 txntype:14 reqpath:n/a 17:18:39.815 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 185,14 replyHeader:: 185,135,0 request:: org.apache.zookeeper.MultiOperationRecord@479aa670 response:: org.apache.zookeeper.MultiResponse@61dd247a 17:18:39.815 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 88, Digest in log and actual tree: 280524548471 17:18:39.815 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xba zxid:0x88 txntype:14 reqpath:n/a 17:18:39.815 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 186,14 replyHeader:: 186,136,0 request:: org.apache.zookeeper.MultiOperationRecord@a6fcab0a response:: org.apache.zookeeper.MultiResponse@c13f2914 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xbb zxid:0x89 txntype:14 reqpath:n/a 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 89, Digest in log and actual tree: 283150099410 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xbb zxid:0x89 txntype:14 reqpath:n/a 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xbc zxid:0x8a txntype:14 reqpath:n/a 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.816 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 187,14 replyHeader:: 187,137,0 request:: org.apache.zookeeper.MultiOperationRecord@3a16448 response:: org.apache.zookeeper.MultiResponse@1de3e252 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8a, Digest in log and actual tree: 283050458660 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xbc zxid:0x8a txntype:14 reqpath:n/a 17:18:39.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:multi cxid:0xbd zxid:0x8b txntype:14 reqpath:n/a 17:18:39.816 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 188,14 replyHeader:: 188,138,0 request:: 
org.apache.zookeeper.MultiOperationRecord@3d303488 response:: org.apache.zookeeper.MultiResponse@5772b292 17:18:39.817 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:18:39.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8b, Digest in log and actual tree: 289147096850 17:18:39.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:multi cxid:0xbd zxid:0x8b txntype:14 reqpath:n/a 17:18:39.817 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 189,14 replyHeader:: 189,139,0 request:: org.apache.zookeeper.MultiOperationRecord@3b44eae5 response:: org.apache.zookeeper.MultiResponse@558768ef 17:18:39.827 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.827 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.827 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.827 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.827 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.828 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
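The state.change.logger INFO lines above and below show the controller moving all 50 partitions of the internal __consumer_offsets topic (replication factor 1 on this single embedded broker) from NewPartition to OnlinePartition, i.e. each partition now has a live leader. A minimal, hypothetical Admin-client sketch for verifying that the offsets topic has leaders assigned; the bootstrap address is assumed, and the SASL_PLAINTEXT settings this embedded broker requires are omitted:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class OffsetsTopicCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed address; the embedded broker in this build listens on localhost:38099.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            Map<String, TopicDescription> topics = admin
                    .describeTopics(Collections.singletonList("__consumer_offsets"))
                    .allTopicNames()
                    .get();
            // Once a partition is OnlinePartition, describeTopics reports its leader
            // (broker 1 in this build) and its in-sync replica set.
            topics.get("__consumer_offsets").partitions().forEach(p ->
                    System.out.printf("partition %d leader %s isr %s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}
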
17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.829 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.830 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 17:18:39.831 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions 17:18:39.831 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions 17:18:39.833 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='__consumer_offsets', topicId=9LbWFgaAR5SI84qHL0_V1g, partitionStates=[LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=38099)]) 17:18:39.834 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:18:39.836 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions 17:18:39.852 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request 
MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: 1 rack: null) 17:18:39.852 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=6) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:39.854 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=6): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:39.854 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:18:39.855 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 4 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:39.855 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":6,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.787,"requestQueueTimeMs":0.163,"localTimeMs":1.317,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.085,"sendTimeMs":0.22,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.855 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: 1 rack: null) 17:18:39.855 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=7) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:39.857 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.857 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xbe zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:39.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xbe zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:39.858 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 190,3 replyHeader:: 190,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:18:39.858 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xbf zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xbf zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.859 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply 
session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 191,3 replyHeader:: 191,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1762449519670,1762449519670,0,1,0,0,548,1,39} 17:18:39.859 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:18:39.859 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:18:39.860 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=7): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:18:39.860 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449519860, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=7), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:18:39.860 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator lookup failed: 17:18:39.860 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
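[annotation] The entries above show the test consumer (clientId mso-123456-consumer-..., groupId mso-group) authenticating over SASL_PLAINTEXT to broker 1 on localhost:38099 and asking it for the group coordinator, which is not yet available. Purely as a point of reference, a minimal consumer configured the way this one appears to be would look roughly like the sketch below; the broker address, group id, client id prefix and topic name are taken from the log, while the SASL mechanism, credentials and deserializers are assumptions, not the project's actual test code.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PairwiseConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker, group id and client id as they appear in the log above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // The broker listener is SASL_PLAINTEXT; the PLAIN mechanism and credentials are assumptions.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // poll() drives the METADATA and FIND_COORDINATOR exchanges seen in the log.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
            }
        }
    }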
17:18:39.860 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":7,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":4.61,"requestQueueTimeMs":0.09,"localTimeMs":4.313,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.055,"sendTimeMs":0.15,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.864 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) 17:18:39.864 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions 17:18:39.865 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.865 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.865 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, 
packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 192,4 replyHeader:: 192,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.868 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-3/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.868 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-3/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.868 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-3/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.868 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-3/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.868 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.868 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.869 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:39.869 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-3 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.870 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 17:18:39.870 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 17:18:39.870 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-3 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.870 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-3] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:39.874 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.875 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 193,4 replyHeader:: 193,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.877 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-18/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.877 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-18/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.877 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-18/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.877 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-18/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.878 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.878 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.879 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:39.879 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-18 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.879 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 17:18:39.879 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 17:18:39.879 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-18 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.879 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-18] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:39.884 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.885 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 194,4 replyHeader:: 194,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.886 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-41/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.887 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-41/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.887 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-41/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.887 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-41/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.887 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.887 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.888 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:39.888 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-41 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.888 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 17:18:39.888 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 17:18:39.888 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-41 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.888 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-41] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:39.893 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.893 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 195,4 replyHeader:: 195,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.895 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-10/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.895 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-10/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.895 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-10/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.895 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-10/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.895 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.896 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.896 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:39.896 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-10 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.896 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 17:18:39.897 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 17:18:39.897 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-10 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.897 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-10] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:39.900 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.900 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 196,4 replyHeader:: 196,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.902 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-33/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.902 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-33/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.902 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-33/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.902 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-33/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.903 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.903 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.903 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:39.904 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-33 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.904 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 17:18:39.904 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 17:18:39.904 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-33 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.904 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-33] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:39.908 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.909 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 197,4 replyHeader:: 197,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.911 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-48/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.911 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-48/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.911 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-48/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.911 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-48/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.912 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.912 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.912 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:39.913 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-48 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.913 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 17:18:39.913 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 17:18:39.913 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-48 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.913 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-48] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:39.918 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.918 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.918 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.919 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.919 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.919 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.919 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 198,4 replyHeader:: 198,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.921 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-19/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.921 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-19/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.922 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-19/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.922 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-19/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.922 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.922 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.923 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:39.923 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-19 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.923 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 17:18:39.923 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 17:18:39.923 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-19 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.924 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-19] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:39.928 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.929 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 199,4 replyHeader:: 199,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.930 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-34/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.930 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-34/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.931 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-34/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.931 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-34/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.931 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.931 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.932 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:39.932 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-34 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.932 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 17:18:39.932 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 17:18:39.932 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-34 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.932 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-34] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:39.937 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.937 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 200,4 replyHeader:: 200,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.939 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-4/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.939 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-4/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.939 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-4/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.939 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-4/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.940 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.940 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.940 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:39.940 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-4 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.941 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 17:18:39.941 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 17:18:39.941 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-4 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.941 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-4] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
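[annotation] Each "Created log for partition __consumer_offsets-N" block above corresponds to an on-disk directory holding an empty segment plus the offset and time indexes the LogLoader entries mention. As an illustration only (the /tmp/kafka-unit... path is this run's embedded-broker temp dir and changes every run), a small Java listing of one such directory would show those files:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ListPartitionDir {
        public static void main(String[] args) throws IOException {
            // Path taken from the log above; it is recreated with a new suffix on every test run.
            Path dir = Path.of("/tmp/kafka-unit7122242531084360278/__consumer_offsets-4");
            try (var files = Files.list(dir)) {
                // Expect 00000000000000000000.log, .index and .timeindex for a freshly created partition.
                files.sorted().forEach(p -> System.out.println(p.getFileName()));
            }
        }
    }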
17:18:39.955 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: 1 rack: null) 17:18:39.955 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=8) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:39.957 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=8): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:39.957 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:18:39.957 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 5 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:39.957 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: 1 rack: null) 17:18:39.957 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=9) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:39.958 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":8,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.384,"requestQueueTimeMs":0.169,"localTimeMs":0.747,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.11,"sendTimeMs":0.356,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.959 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.959 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:39.959 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:39.960 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 201,3 replyHeader:: 201,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:18:39.960 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.961 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.961 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:39.961 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 202,3 replyHeader:: 202,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1762449519670,1762449519670,0,1,0,0,548,1,39} 17:18:39.961 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
17:18:39.961 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:18:39.962 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=9): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:18:39.962 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449519962, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=9), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:18:39.962 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator lookup failed: 17:18:39.962 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
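[annotation] errorCode 15 in the FIND_COORDINATOR responses above maps to COORDINATOR_NOT_AVAILABLE, which the client treats as retriable: it refreshes metadata and repeats the lookup, which appears to be why the same exchange recurs with increasing correlationIds while the broker is still creating the __consumer_offsets partitions. A short, illustrative check of that mapping against the kafka-clients protocol tables:

    import org.apache.kafka.common.errors.RetriableException;
    import org.apache.kafka.common.protocol.Errors;

    public class CoordinatorErrorCheck {
        public static void main(String[] args) {
            // Error code taken from the FIND_COORDINATOR responses in the log above.
            Errors e = Errors.forCode((short) 15);
            boolean retriable = e.exception() instanceof RetriableException;
            // Expected output: COORDINATOR_NOT_AVAILABLE retriable=true
            System.out.println(e.name() + " retriable=" + retriable);
        }
    }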
17:18:39.963 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":9,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":4.001,"requestQueueTimeMs":0.145,"localTimeMs":3.474,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.091,"sendTimeMs":0.289,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:39.978 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.979 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 203,4 replyHeader:: 203,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.982 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-11/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.983 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-11/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.983 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-11/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.983 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-11/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.984 
[data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.984 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.985 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:39.986 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-11 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.986 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 17:18:39.986 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 17:18:39.986 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-11 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:39.986 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-11] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
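Each __consumer_offsets partition in this stretch is created with the same per-topic overrides (cleanup.policy=compact, compression.type=producer, segment.bytes=104857600). The broker auto-creates that topic itself; purely as a hedged illustration, the same configuration could be expressed through the public admin API roughly as below (broker address from the log, SASL settings omitted, everything else an assumption):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class OffsetsTopicConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // from the log above

        try (Admin admin = Admin.create(props)) {
            // Same settings the embedded broker applies to __consumer_offsets here:
            // 50 partitions, replication factor 1, compacted, producer-side compression, 100 MiB segments.
            NewTopic offsets = new NewTopic("__consumer_offsets", 50, (short) 1)
                .configs(Map.of(
                    TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                    TopicConfig.COMPRESSION_TYPE_CONFIG, "producer",
                    TopicConfig.SEGMENT_BYTES_CONFIG, String.valueOf(104857600)));
            // Against this broker the call would fail with TopicExistsException, exactly as the
            // ZkAdminManager entry later in the log reports for the broker's own duplicate attempt.
            admin.createTopics(Collections.singleton(offsets)).all().get();
        }
    }
}
```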
17:18:39.995 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:39.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:39.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:39.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:39.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:39.996 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 204,4 replyHeader:: 204,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:39.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-26/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:39.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-26/00000000000000000000.index was not resized because it already has size 10485760 17:18:39.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-26/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:39.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-26/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:39.998 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:39.998 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:39.999 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:39.999 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-26 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:39.999 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 17:18:39.999 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 17:18:39.999 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-26 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.000 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-26] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.005 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.005 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 205,4 replyHeader:: 205,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.008 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-49/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.008 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-49/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.008 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-49/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.008 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-49/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.008 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.009 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.009 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.010 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-49 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.010 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 17:18:40.010 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 17:18:40.010 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-49 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.010 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-49] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.013 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.014 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 206,4 replyHeader:: 206,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-39/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-39/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-39/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-39/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.016 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.016 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.017 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.017 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-39 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.017 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 17:18:40.017 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 17:18:40.017 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-39 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.018 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-39] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.022 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.022 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 207,4 replyHeader:: 207,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.024 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-9/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.024 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-9/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.024 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-9/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.024 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-9/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.024 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.025 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.025 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.025 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-9 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.026 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 17:18:40.026 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 17:18:40.026 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-9 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.026 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-9] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.031 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.031 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 208,4 replyHeader:: 208,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.033 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-24/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.033 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-24/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.033 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-24/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.033 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-24/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.034 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.034 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.034 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.035 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-24 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.035 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 17:18:40.035 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 17:18:40.035 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-24 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.035 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-24] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.039 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.040 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 209,4 replyHeader:: 209,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.042 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-31/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.042 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-31/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.042 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-31/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.042 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-31/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.042 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.042 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.043 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.043 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-31 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.043 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 17:18:40.043 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 17:18:40.043 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-31 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.043 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-31] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.048 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.049 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.049 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.049 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.049 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.049 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.049 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 210,4 replyHeader:: 210,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-46/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-46/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-46/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-46/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.051 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.052 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.052 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.052 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-46 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.052 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 17:18:40.053 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 17:18:40.053 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-46 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.053 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-46] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.057 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: 1 rack: null) 17:18:40.057 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=10) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:40.059 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.059 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=10): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, 
name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:40.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.059 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.059 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:18:40.060 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":10,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.289,"requestQueueTimeMs":0.179,"localTimeMs":0.82,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.059,"sendTimeMs":0.23,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.060 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 6 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:40.060 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 211,4 replyHeader:: 211,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.060 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: 1 rack: null) 17:18:40.060 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=11) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:40.061 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.061 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:40.061 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:40.062 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 212,3 replyHeader:: 212,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:18:40.062 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-1/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.062 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.062 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-1/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:40.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:40.062 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-1/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.062 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-1/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.063 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 213,3 replyHeader:: 213,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1762449519670,1762449519670,0,1,0,0,548,1,39} 17:18:40.063 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-1, 
dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.063 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:18:40.063 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.063 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:18:40.064 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=11): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:18:40.064 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449520064, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=11), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:18:40.064 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":11,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":3.461,"requestQueueTimeMs":0.101,"localTimeMs":3.132,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.057,"sendTimeMs":0.17,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.064 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator lookup failed: 17:18:40.064 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 17:18:40.064 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.065 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-1 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.065 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 17:18:40.065 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 17:18:40.065 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-1 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.065 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-1] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.070 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.070 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.070 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.070 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.070 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.070 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.070 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 214,4 replyHeader:: 214,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.072 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-16/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.073 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-16/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.073 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-16/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.073 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-16/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.073 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.073 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.074 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.074 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-16 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.074 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 17:18:40.074 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 17:18:40.074 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-16 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.074 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-16] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
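A few entries back, the METADATA response confirmed that my-test-topic exists with a single partition led by broker 1, and the consumer updated its cached cluster metadata to match. As a sketch only (topic name taken from the log; the helper, timeout, and polling interval are assumptions), a test could wait for that metadata to become visible before asserting anything on the consumer:

```java
import java.time.Duration;
import java.util.List;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

public class WaitForTopicMetadata {
    /**
     * Polls partitionsFor() until the topic shows up in the consumer's metadata or the
     * deadline passes. partitionsFor() drives the same METADATA requests seen in the log.
     */
    static List<PartitionInfo> waitForTopic(KafkaConsumer<?, ?> consumer,
                                            String topic,
                                            Duration timeout) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeout.toMillis();
        while (System.currentTimeMillis() < deadline) {
            List<PartitionInfo> partitions = consumer.partitionsFor(topic, Duration.ofSeconds(1));
            if (partitions != null && !partitions.isEmpty()) {
                return partitions; // e.g. the single partition of "my-test-topic" led by broker 1
            }
            Thread.sleep(200); // topic metadata not visible yet; retry
        }
        throw new IllegalStateException("Topic " + topic + " not visible within " + timeout);
    }
}
```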
17:18:40.103 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.103 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 215,4 replyHeader:: 215,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-2/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-2/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-2/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-2/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.107 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.107 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.108 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.108 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-2 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.108 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 17:18:40.108 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 17:18:40.108 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-2 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.108 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-2] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.127 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.127 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.127 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.127 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.127 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.127 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.127 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 216,4 replyHeader:: 216,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.130 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-25/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.130 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-25/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.130 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-25/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.130 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-25/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.130 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.131 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.131 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.132 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-25 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.132 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 17:18:40.132 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 17:18:40.132 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-25 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.132 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-25] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
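The maxEntries and size figures reported for the .index and .timeindex files follow from the fixed entry widths of Kafka's sparse segment indexes: 8 bytes per offset-index entry and 12 bytes per time-index entry against the 10485760-byte index allocation, with the time index trimmed to the largest multiple of 12 (10485756). A quick check of that arithmetic:

    // Reproduces the maxEntries / size figures logged by OffsetIndex, TimeIndex and AbstractIndex above.
    public class IndexSizingSketch {
        public static void main(String[] args) {
            int maxIndexSize = 10485760;  // index allocation seen in the log
            int offsetEntry = 8;          // 4-byte relative offset + 4-byte file position
            int timeEntry = 12;           // 8-byte timestamp + 4-byte relative offset
            System.out.println(maxIndexSize / offsetEntry);             // 1310720
            System.out.println(maxIndexSize / timeEntry);               // 873813
            System.out.println((maxIndexSize / timeEntry) * timeEntry); // 10485756
        }
    }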
17:18:40.151 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.151 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 217,4 replyHeader:: 217,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.154 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-40/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.154 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-40/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.154 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-40/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.154 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-40/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.155 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.155 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.156 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.156 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-40 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.156 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 17:18:40.156 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 17:18:40.156 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-40 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.156 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-40] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.160 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: 1 rack: null) 17:18:40.160 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=12) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:40.161 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.161 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.161 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.161 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.161 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.161 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.162 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=12): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:40.162 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":12,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.175,"requestQueueTimeMs":0.144,"localTimeMs":0.836,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.052,"sendTimeMs":0.141,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.162 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 218,4 replyHeader:: 218,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.162 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:18:40.162 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 7 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:40.162 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: 1 rack: null) 
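The METADATA round trip for my-test-topic and the FindCoordinator request that follows are what the Java consumer issues on its first poll. A rough sketch of a consumer that would produce this traffic; the SASL mechanism and JAAS credentials are placeholders inferred from the SASL_PLAINTEXT listener and the User:admin principal in the request logger, not values taken from this job, and the UUID suffix on the logged clientId is added by the calling code:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    // Sketch of a consumer matching the groupId/clientId pattern seen in the log.
    public class MetadataRoundTripSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT"); // listener shown in the request logger
            props.put("sasl.mechanism", "PLAIN");             // assumed from the User:admin principal
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                consumer.poll(Duration.ofSeconds(1)); // first poll drives METADATA + FIND_COORDINATOR
            }
        }
    }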
17:18:40.162 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=13) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:40.164 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:40.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:40.164 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 219,3 replyHeader:: 219,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:18:40.165 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-47/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.165 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-47/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.165 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.165 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-47/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.165 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-47/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:40.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:40.165 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.165 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 220,3 
replyHeader:: 220,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1762449519670,1762449519670,0,1,0,0,548,1,39} 17:18:40.166 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.166 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:18:40.166 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.166 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:18:40.166 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-47 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.166 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 17:18:40.166 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 17:18:40.166 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-47 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.167 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-47] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
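The TopicExistsException from ZkAdminManager is the benign race on __consumer_offsets auto-creation: a second FindCoordinator-triggered create request arrives after the topic has already been registered, so the broker clears its inflight creation state and moves on. Application code that creates its own topics usually treats the same error as success; a hedged AdminClient sketch (topic name reused from the log, partition and replication values illustrative):

    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    // Create-if-absent pattern that tolerates the same TopicExistsException logged above.
    public class EnsureTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:38099");
            try (AdminClient admin = AdminClient.create(props)) {
                NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1);
                try {
                    admin.createTopics(List.of(topic)).all().get();
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e;
                    }
                    // Topic is already there: same benign outcome ZkAdminManager reports above.
                }
            }
        }
    }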
17:18:40.167 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=13): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:18:40.167 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":13,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":3.812,"requestQueueTimeMs":0.071,"localTimeMs":3.571,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.054,"sendTimeMs":0.115,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.167 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449520167, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=13), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:18:40.167 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator lookup failed: 17:18:40.167 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
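errorCode=15 in that FindCoordinator response is COORDINATOR_NOT_AVAILABLE, matching the CoordinatorNotAvailableException above: the broker can only name a coordinator once the __consumer_offsets partition that owns mso-group has been created and has a leader, so the consumer refreshes metadata and retries (the second METADATA/FIND_COORDINATOR round, correlationIds 14 and 15, appears further down). Which of the 50 partitions owns a group is a stable hash of the group id; a small sketch of that mapping, kept close to the broker's abs(groupId.hashCode) % offsets.topic.num.partitions computation:

    // Maps a consumer group to the __consumer_offsets partition whose leader acts as its coordinator.
    public class GroupCoordinatorPartitionSketch {
        public static void main(String[] args) {
            String groupId = "mso-group";
            int offsetsPartitions = 50; // numPartitions=50 in the CreatableTopic request logged above
            int partition = Math.abs(groupId.hashCode()) % offsetsPartitions;
            System.out.println("offsets partition for " + groupId + ": __consumer_offsets-" + partition);
        }
    }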
17:18:40.171 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.171 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 221,4 replyHeader:: 221,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.173 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-17/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.173 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-17/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.173 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-17/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.174 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-17/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.174 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.174 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.174 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.175 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-17 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.175 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 17:18:40.175 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 17:18:40.175 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-17 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.175 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-17] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.178 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.179 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.179 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.179 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.179 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 222,4 replyHeader:: 222,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.180 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-32/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.180 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-32/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.181 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-32/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.181 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-32/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.181 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.181 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.181 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.182 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-32 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.182 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 17:18:40.182 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 17:18:40.182 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-32 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.182 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-32] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
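Each "Created log for partition …" line corresponds to a fresh directory under the kafka-unit temp log dir. A small sketch that lists one of them (path reused from the log); the expected entries are the initial 00000000000000000000.log/.index/.timeindex segment files named in the index messages above plus small bookkeeping files such as leader-epoch-checkpoint and the partition metadata file written by the flush-metadata-file task:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;

    // Dumps the on-disk layout the LogManager just created for one partition.
    public class PartitionDirListingSketch {
        public static void main(String[] args) throws IOException {
            Path dir = Path.of("/tmp/kafka-unit7122242531084360278/__consumer_offsets-32");
            try (Stream<Path> files = Files.list(dir)) {
                files.sorted().forEach(System.out::println);
            }
        }
    }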
17:18:40.186 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.186 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 223,4 replyHeader:: 223,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.188 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-37/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.188 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-37/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.188 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-37/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.188 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-37/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.188 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.188 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.189 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.189 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-37 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.189 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 17:18:40.189 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 17:18:40.189 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-37 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.189 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-37] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.193 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.194 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 224,4 replyHeader:: 224,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.195 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-7/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.195 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-7/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.196 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-7/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.196 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-7/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.196 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.196 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.196 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.197 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-7 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.197 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 17:18:40.197 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 17:18:40.197 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-7 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.197 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-7] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.202 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.202 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.202 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.202 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.202 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.202 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.203 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 225,4 replyHeader:: 225,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.205 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-22/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.205 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-22/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.205 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-22/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.205 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-22/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.205 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.205 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.206 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.206 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-22 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.206 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 17:18:40.206 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 17:18:40.206 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-22 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.206 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-22] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.210 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.211 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.211 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 226,4 replyHeader:: 226,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.213 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-29/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.213 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-29/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.213 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-29/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.213 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-29/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.213 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.213 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.214 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.214 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-29 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.214 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 17:18:40.214 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 17:18:40.214 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-29 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.214 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-29] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
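"No checkpointed highwatermark is found" is expected for brand-new partitions: the broker keeps high watermarks in an offset checkpoint file at the root of the log dir, and these partitions have no entry in it yet. A sketch that dumps that file once the broker has flushed it (path reused from the log; the file name and its version/count header followed by "<topic> <partition> <offset>" lines follow Kafka's offset checkpoint format):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Prints the high-watermark checkpoint for the embedded broker's log dir.
    public class HighWatermarkCheckpointSketch {
        public static void main(String[] args) throws IOException {
            Path checkpoint = Path.of("/tmp/kafka-unit7122242531084360278/replication-offset-checkpoint");
            Files.readAllLines(checkpoint).forEach(System.out::println);
        }
    }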
17:18:40.234 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.234 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.235 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.235 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.235 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.235 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.235 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 227,4 replyHeader:: 227,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.238 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-44/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.238 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-44/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.238 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-44/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.238 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-44/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.238 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.239 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.239 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.240 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-44 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.240 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 17:18:40.240 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 17:18:40.240 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-44 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.240 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-44] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.244 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.245 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 228,4 replyHeader:: 228,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.246 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-14/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.246 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-14/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.247 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-14/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.247 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-14/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.247 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.247 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.247 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.248 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-14 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.248 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 17:18:40.248 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 17:18:40.248 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-14 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.248 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-14] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.252 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.252 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 229,4 replyHeader:: 229,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.254 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-23/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.254 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-23/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.254 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-23/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.254 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-23/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.254 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.254 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.255 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.255 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-23 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.255 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 17:18:40.255 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 17:18:40.255 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-23 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.255 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-23] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.262 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: 1 rack: null) 17:18:40.262 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=14) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:40.264 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=14): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:40.264 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:18:40.264 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":14,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.454,"requestQueueTimeMs":0.209,"localTimeMs":0.938,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.073,"sendTimeMs":0.232,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.264 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 8 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:40.265 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: 1 rack: null) 17:18:40.265 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=15) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:40.266 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:40.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:40.267 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 230,3 replyHeader:: 230,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:18:40.268 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.268 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:40.268 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:40.268 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 231,3 replyHeader:: 231,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1762449519670,1762449519670,0,1,0,0,548,1,39} 17:18:40.268 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 17:18:40.268 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:18:40.269 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=15): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:18:40.269 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":15,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":3.741,"requestQueueTimeMs":0.078,"localTimeMs":3.492,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.05,"sendTimeMs":0.119,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.269 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449520269, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=15), 
responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:18:40.269 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator lookup failed: 17:18:40.269 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 17:18:40.270 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.270 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.270 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.270 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.270 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.270 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.271 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 232,4 replyHeader:: 232,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.273 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-38/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.273 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-38/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.273 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-38/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.273 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-38/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.274 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-38, 
dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.274 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.274 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.275 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-38 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.275 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 17:18:40.275 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 17:18:40.275 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-38 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.275 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-38] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.279 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.280 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 233,4 replyHeader:: 233,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.282 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-8/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 
17:18:40.282 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-8/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.282 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-8/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.282 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-8/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.282 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.282 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.283 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.283 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-8 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.283 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 17:18:40.283 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 17:18:40.283 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-8 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.284 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-8] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
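The consumer-side lines above (clientId=mso-123456-consumer-..., groupId=mso-group, METADATA correlationId=14, FIND_COORDINATOR correlationId=15) are reported by the broker as apache-kafka-java 3.3.1 talking to localhost:38099 over SASL_PLAINTEXT. The actual test code is not shown in this log; a minimal consumer configuration consistent with those lines might look like the sketch below, where the UUID suffix on the client id, the deserializers, and the JAAS credentials are assumptions.

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MsoGroupConsumerSketch {
    public static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");  // from this log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");                 // from this log
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");      // the log shows a UUID suffix as well
        props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false");     // matches allowAutoTopicCreation=false
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // SASL settings are placeholders; only the SASL_PLAINTEXT listener is visible in the log.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");
        return new KafkaConsumer<>(props);
    }
}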
17:18:40.287 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.287 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.287 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.287 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.287 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.287 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.287 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 234,4 replyHeader:: 234,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.289 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-45/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.289 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-45/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.289 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-45/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.290 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-45/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.290 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.290 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.291 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.291 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-45 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.291 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 17:18:40.291 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 17:18:40.291 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-45 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.291 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-45] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.295 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.296 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 235,4 replyHeader:: 235,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.298 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-15/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.298 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-15/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.298 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-15/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.298 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-15/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.298 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.298 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.299 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.299 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-15 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.299 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 17:18:40.299 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 17:18:40.299 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-15 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.299 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-15] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.304 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.304 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.304 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.304 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.304 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.304 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.304 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 236,4 replyHeader:: 236,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.307 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-30/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.308 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-30/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.308 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-30/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.308 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-30/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.308 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.309 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.309 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.310 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-30 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.310 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 17:18:40.310 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 17:18:40.310 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-30 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.310 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-30] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.314 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.314 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 237,4 replyHeader:: 237,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.315 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.315 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-0/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.315 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.315 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-0/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.316 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.316 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.316 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.316 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-0 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.316 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 17:18:40.316 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 17:18:40.316 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-0 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.316 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.321 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.321 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 238,4 replyHeader:: 238,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.323 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-35/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.323 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-35/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.323 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-35/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.323 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-35/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.323 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.323 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.324 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.324 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-35 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.324 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 17:18:40.324 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 17:18:40.324 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-35 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.324 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-35] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.346 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.347 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 239,4 replyHeader:: 239,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.349 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-5/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.349 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-5/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.349 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-5/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.350 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-5/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.350 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.350 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.351 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.351 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-5 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.351 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 17:18:40.351 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 17:18:40.351 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-5 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.351 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-5] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
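The FIND_COORDINATOR attempts interleaved with the partition creation in this log are driven by the consumer's poll loop: each poll() retries coordinator discovery until the offsets topic is fully available and partitions are assigned. A hedged sketch of the kind of wait loop a test might use; the topic name my-test-topic is taken from the log, the helper itself is illustrative.

import java.time.Duration;
import java.util.Collections;

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class WaitForAssignment {
    // Polls until the group coordinator is found and partitions are assigned,
    // or the deadline passes. The consumer is assumed configured as in the earlier sketch.
    public static void await(KafkaConsumer<String, String> consumer, Duration timeout) {
        consumer.subscribe(Collections.singletonList("my-test-topic"));
        long deadline = System.currentTimeMillis() + timeout.toMillis();
        while (consumer.assignment().isEmpty() && System.currentTimeMillis() < deadline) {
            // Each poll() re-sends FIND_COORDINATOR / JOIN_GROUP as needed, which is
            // what produces the repeated coordinator lookups seen in this log.
            consumer.poll(Duration.ofMillis(100));
        }
        if (consumer.assignment().isEmpty()) {
            throw new IllegalStateException("No partitions assigned within " + timeout);
        }
    }
}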
17:18:40.363 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.363 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 240,4 replyHeader:: 240,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.365 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: 1 rack: null) 17:18:40.365 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=16) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:40.366 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-20/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.366 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-20/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.366 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-20/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.367 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index 
/tmp/kafka-unit7122242531084360278/__consumer_offsets-20/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.367 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.367 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.368 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=16): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:40.368 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":16,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.52,"requestQueueTimeMs":0.197,"localTimeMs":1.059,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.075,"sendTimeMs":0.187,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.368 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:18:40.368 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 9 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:40.368 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator 
- [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: 1 rack: null) 17:18:40.369 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=17) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:40.370 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.370 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-20 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.370 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 17:18:40.370 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 17:18:40.370 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-20 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.370 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-20] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.371 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.371 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:40.371 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 17:18:40.371 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 241,3 replyHeader:: 241,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 17:18:40.372 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:exists cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:40.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:exists cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 17:18:40.372 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 242,3 replyHeader:: 242,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1762449519670,1762449519670,0,1,0,0,548,1,39} 17:18:40.373 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
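The ZkAdminManager lines above show the broker's own auto-creation attempt hitting TopicExistsException because __consumer_offsets was already registered in ZooKeeper by an earlier FindCoordinator-triggered attempt; the broker logs this at DEBUG and moves on. Client code that creates topics explicitly usually tolerates the same race; an illustrative helper (not from this build) is shown below.

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public class IdempotentTopicCreation {
    public static void createIfMissing(Properties adminProps, String topic)
            throws InterruptedException, ExecutionException {
        try (Admin admin = Admin.create(adminProps)) {
            try {
                admin.createTopics(Collections.singleton(new NewTopic(topic, 1, (short) 1))).all().get();
            } catch (ExecutionException e) {
                if (!(e.getCause() instanceof TopicExistsException)) {
                    throw e; // anything other than "already exists" is a real failure
                }
                // Topic already present (e.g. created by broker-side auto-creation, as above) -- ignore.
            }
        }
    }
}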
17:18:40.373 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 17:18:40.373 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=17): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 17:18:40.373 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449520373, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=17), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 17:18:40.374 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator lookup failed: 17:18:40.374 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":17,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":4.269,"requestQueueTimeMs":0.162,"localTimeMs":3.941,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.049,"sendTimeMs":0.115,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.374 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
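The FindCoordinator response above carries errorCode=15 (COORDINATOR_NOT_AVAILABLE) because the group coordinator for mso-group has not finished loading yet; the client refreshes metadata and retries on its own, so no application action is needed. For context, a minimal sketch of a consumer configured like the one driving these requests (group mso-group over SASL_PLAINTEXT against the embedded broker) follows; the topic name, SASL mechanism, JAAS credentials and deserializers are assumptions, only the bootstrap address, group id and listener type come from this log, and this is not the project's actual consumer code.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // embedded broker from this run
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");                // group id seen in the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // SASL_PLAINTEXT with PLAIN and admin/admin-secret is an assumption for the sketch;
        // the log only shows the listener type and the authenticated principal User:admin.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic")); // hypothetical topic name
            // poll() drives group membership; coordinator lookup failures such as
            // COORDINATOR_NOT_AVAILABLE are retried internally by the client.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```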
17:18:40.374 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.374 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.374 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.374 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.374 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.374 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.374 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 243,4 replyHeader:: 243,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.375 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-27/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.376 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-27/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.376 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-27/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.376 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-27/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.376 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.376 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.376 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
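The ZooKeeper getData reply above returns the topic configuration as a hex-encoded byte array (the #7b22... blob). Decoded as ASCII it is the JSON the broker stored for the topic, {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}, which matches the per-partition properties logged by LogManager and the 109-byte data length in the znode stat. A small sketch of that decoding step, with the payload copied from this log (leading '#' stripped), is below; the class name is arbitrary.

```java
public class ZkTopicConfigDecode {
    // Payload from the ClientCnxn reply for /config/topics/__consumer_offsets.
    private static final String HEX =
            "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065"
          + "223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c"
          + "227365676d656e742e6279746573223a22313034383537363030227d7d";

    public static void main(String[] args) {
        StringBuilder json = new StringBuilder();
        for (int i = 0; i < HEX.length(); i += 2) {
            json.append((char) Integer.parseInt(HEX.substring(i, i + 2), 16));
        }
        // Prints: {"version":1,"config":{"compression.type":"producer",
        //          "cleanup.policy":"compact","segment.bytes":"104857600"}}
        System.out.println(json);
    }
}
```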
17:18:40.376 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-27 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.376 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 17:18:40.376 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 17:18:40.376 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-27 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.377 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-27] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.381 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.381 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.381 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.382 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.382 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.382 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.382 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 244,4 replyHeader:: 244,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.383 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-42/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.383 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-42/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.384 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-42/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.384 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-42/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.384 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.384 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.384 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.384 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-42 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.385 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 17:18:40.385 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 17:18:40.385 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-42 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.385 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-42] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
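The maxEntries values reported by OffsetIndex and TimeIndex above follow from the 10 MiB index preallocation: assuming the standard entry sizes (8 bytes per offset-index entry, 12 bytes per time-index entry), 10485760 / 8 = 1310720 and 10485760 / 12 = 873813, matching the log; 873813 x 12 = 10485756 is also exactly the size the time-index "was not resized" messages report. A one-line check:

```java
public class IndexSizingCheck {
    public static void main(String[] args) {
        final int maxIndexSize = 10 * 1024 * 1024;   // 10485760 bytes, as logged
        final int offsetEntryBytes = 8;              // assumed: 4-byte relative offset + 4-byte position
        final int timeEntryBytes = 12;               // assumed: 8-byte timestamp + 4-byte relative offset

        System.out.println("offset index maxEntries = " + maxIndexSize / offsetEntryBytes); // 1310720
        System.out.println("time index maxEntries   = " + maxIndexSize / timeEntryBytes);   // 873813
        System.out.println("time index trimmed size = " + (maxIndexSize / timeEntryBytes) * timeEntryBytes); // 10485756
    }
}
```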
17:18:40.390 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.390 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 245,4 replyHeader:: 245,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.392 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-12/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.392 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-12/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.392 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-12/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.392 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-12/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.392 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.392 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.393 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.393 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-12 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.393 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 17:18:40.393 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 17:18:40.393 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-12 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.393 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-12] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.397 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.397 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 246,4 replyHeader:: 246,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.399 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-21/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.399 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-21/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.399 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-21/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.399 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-21/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.399 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.399 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.400 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.400 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-21 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.400 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 17:18:40.400 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 17:18:40.400 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-21 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.400 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-21] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.404 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.405 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 247,4 replyHeader:: 247,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.406 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-36/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.406 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-36/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.406 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-36/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.406 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-36/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.407 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.407 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.407 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.407 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-36 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.407 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 17:18:40.408 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 17:18:40.408 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-36 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.408 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-36] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.411 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.412 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.412 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.412 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 248,4 replyHeader:: 248,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.413 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-6/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.413 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-6/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.414 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-6/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.414 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-6/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.414 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.414 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.414 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.414 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-6 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.414 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 17:18:40.415 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 17:18:40.415 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-6 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.415 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-6] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.419 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.419 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 249,4 replyHeader:: 249,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.421 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-43/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.421 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-43/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.421 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-43/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.421 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-43/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.421 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.421 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.422 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.422 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-43 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.422 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 17:18:40.422 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 17:18:40.422 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-43 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.422 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-43] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.426 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.426 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.426 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.426 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.426 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.426 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.426 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 250,4 replyHeader:: 250,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.428 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-13/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.428 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-13/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.428 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-13/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.428 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-13/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.428 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.428 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.429 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 17:18:40.429 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-13 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.429 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 17:18:40.429 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 17:18:40.429 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-13 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.429 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-13] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
17:18:40.432 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:40.432 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.432 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 17:18:40.432 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:40.432 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:40.432 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:40.433 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 251,4 replyHeader:: 251,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1762449519662,1762449519662,0,0,0,0,109,0,37} 17:18:40.434 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-28/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 17:18:40.434 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-28/00000000000000000000.index was not resized because it already has size 10485760 17:18:40.434 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit7122242531084360278/__consumer_offsets-28/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 17:18:40.434 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit7122242531084360278/__consumer_offsets-28/00000000000000000000.timeindex was not resized because it already has size 10485756 17:18:40.435 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-unit7122242531084360278] Loading producer state till offset 0 with message format version 2 17:18:40.435 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 17:18:40.435 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
17:18:40.435 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-28 in /tmp/kafka-unit7122242531084360278/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 17:18:40.435 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 17:18:40.435 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 17:18:40.435 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-28 with topic id Some(9LbWFgaAR5SI84qHL0_V1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 17:18:40.436 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-28] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 17:18:40.442 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 17:18:40.443 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 17:18:40.444 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms. 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms. 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms. 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms. 
17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms. 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms. 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms. 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms. 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms. 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 17:18:40.445 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms. 
17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms. 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms. 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms. 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms. 17:18:40.446 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-3 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms. 
17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms. 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms. 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms. 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 17:18:40.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms. 
17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. 
17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 17:18:40.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. 17:18:40.447 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. 
17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 17:18:40.448 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. 
17:18:40.448 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. 
17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 17:18:40.449 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. 17:18:40.449 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 17:18:40.450 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 17:18:40.450 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 17:18:40.450 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. 
17:18:40.450 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-18 for epoch 0 17:18:40.450 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 17:18:40.450 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-41 for epoch 0 17:18:40.450 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 17:18:40.450 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 614ms correlationId 3 from controller 1 for 50 partitions 17:18:40.450 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-10 for epoch 0 17:18:40.450 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 17:18:40.450 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-33 for epoch 0 17:18:40.450 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 17:18:40.450 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-48 for epoch 0 17:18:40.451 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 6 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 17:18:40.451 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-19 for epoch 0 17:18:40.451 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
17:18:40.451 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-34 for epoch 0 17:18:40.451 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.451 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-4 for epoch 0 17:18:40.451 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.451 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-11 for epoch 0 17:18:40.451 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.451 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-26 for epoch 0 17:18:40.451 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=9LbWFgaAR5SI84qHL0_V1g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) 17:18:40.451 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 17:18:40.452 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-49 for epoch 0 17:18:40.452 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.452 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-39 for epoch 0 17:18:40.452 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
17:18:40.452 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-9 for epoch 0 17:18:40.452 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.452 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-24 for epoch 0 17:18:40.452 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.452 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-31 for epoch 0 17:18:40.452 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.452 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='__consumer_offsets', topicId=9LbWFgaAR5SI84qHL0_V1g, partitionStates=[UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=38099, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 17:18:40.452 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-46 for epoch 0 17:18:40.453 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":3,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"__consumer_offsets","topicId":"9LbWFgaAR5SI84qHL0_V1g","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingRepli
cas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"lea
derEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isN
ew":true,"leaderRecoveryState":0},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":38099}]},"response":{"errorCode":0,"topics":[{"topicId":"9LbWFgaAR5SI84qHL0_V1g","partitionErrors":[{"partitionIndex":13,"errorCode":0},{"partitionIndex":46,"errorCode":0},{"partitionIndex":9,"errorCode":0},{"partitionIndex":42,"errorCode":0},{"partitionIndex":21,"errorCode":0},{"partitionIndex":17,"errorCode":0},{"partitionIndex":30,"errorCode":0},{"partitionIndex":26,"errorCode":0},{"partitionIndex":5,"errorCode":0},{"partitionIndex":38,"errorCode":0},{"partitionIndex":1,"errorCode":0},{"partitionIndex":34,"errorCode":0},{"partitionIndex":16,"errorCode":0},{"partitionIndex":45,"errorCode":0},{"partitionIndex":12,"errorCode":0},{"partitionIndex":41,"errorCode":0},{"partitionIndex":24,"errorCode":0},{"partitionIndex":20,"errorCode":0},{"partitionIndex":49,"errorCode":0},{"partitionIndex":0,"errorCode":0},{"partitionIndex":29,"errorCode":0},{"partitionIndex":25,"errorCode":0},{"partitionIndex":8,"errorCode":0},{"partitionIndex":37,"errorCode":0},{"partitionIndex":4,"errorCode":0},{"partitionIndex":33,"errorCode":0},{"partitionIndex":15,"errorCode":0},{"partitionIndex":48,"errorCode":0},{"partitionIndex":11,"errorCode":0},{"partitionIndex":44,"errorCode":0},{"partitionIndex":23,"errorCode":0},{"partitionIndex":19,"errorCode":0},{"partitionIndex":32,"errorCode":0},{"partitionIndex":28,"errorCode":0},{"partitionIndex":7,"errorCode":0},{"partitionIndex":40,"errorCode":0},{"partitionIndex":3,"errorCode":0},{"partitionIndex":36,"errorCode":0},{"partitionIndex":47,"errorCode":0},{"partitionIndex":14,"errorCode":0},{"partitionIndex":43,"errorCode":0},{"partitionIndex":10,"errorCode":0},{"partitionIndex":22,"errorCode":0},{"partitionIndex":18,"errorCode":0},{"partitionIndex":31,"errorCode":0},{"partitionIndex":27,"errorCode":0},{"partitionIndex":39,"errorCode":0},{"partitionIndex":6,"errorCode":0},{"partitionIndex":35,"errorCode":0},{"partitionIndex":2,"errorCode":0}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37636-0","totalTimeMs":615.844,"requestQueueTimeMs":0.851,"localTimeMs":614.7,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.1,"sendTimeMs":0.192,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:40.453 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.453 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-1 for epoch 0 17:18:40.453 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
17:18:40.453 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-16 for epoch 0 17:18:40.453 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.453 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-2 for epoch 0 17:18:40.453 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.453 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-25 for epoch 0 17:18:40.453 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.453 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-40 for epoch 0 17:18:40.454 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 17:18:40.454 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-47 for epoch 0 17:18:40.454 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.454 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-17 for epoch 0 17:18:40.454 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.454 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-32 for epoch 0 17:18:40.454 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
17:18:40.454 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-37 for epoch 0 17:18:40.454 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.454 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-7 for epoch 0 17:18:40.454 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.454 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-22 for epoch 0 17:18:40.454 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.454 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-29 for epoch 0 17:18:40.455 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.455 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-44 for epoch 0 17:18:40.455 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 17:18:40.455 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.455 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-14 for epoch 0 17:18:40.455 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
17:18:40.455 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-23 for epoch 0 17:18:40.455 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.455 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-38 for epoch 0 17:18:40.455 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4): UpdateMetadataResponseData(errorCode=0) 17:18:40.455 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.455 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-8 for epoch 0 17:18:40.455 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.455 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-45 for epoch 0 17:18:40.456 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.456 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-15 for epoch 0 17:18:40.456 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.456 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-30 for epoch 0 17:18:40.456 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
17:18:40.456 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":4,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"__consumer_offsets","topicId":"9LbWFgaAR5SI84qHL0_V1g","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"pa
rtitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":38099,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:38099-127.0.0.1:37636
-0","totalTimeMs":1.732,"requestQueueTimeMs":0.478,"localTimeMs":1.091,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.047,"sendTimeMs":0.115,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:40.456 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-0 for epoch 0 17:18:40.456 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.456 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-35 for epoch 0 17:18:40.456 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.456 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-5 for epoch 0 17:18:40.456 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.456 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-20 for epoch 0 17:18:40.457 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 17:18:40.457 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-27 for epoch 0 17:18:40.457 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.457 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-42 for epoch 0 17:18:40.457 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
17:18:40.457 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-12 for epoch 0 17:18:40.457 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.457 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-21 for epoch 0 17:18:40.457 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.457 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-36 for epoch 0 17:18:40.457 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.457 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-6 for epoch 0 17:18:40.457 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.457 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-43 for epoch 0 17:18:40.458 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 17:18:40.458 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-13 for epoch 0 17:18:40.458 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 17:18:40.458 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-28 for epoch 0 17:18:40.458 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
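The entries above are the broker side of startup: the GroupMetadataManager finishes loading offsets and group metadata for the 50 __consumer_offsets partitions, and the UpdateMetadata request from controller 1 adds all 50 partitions to the metadata cache. Purely as a hedged illustration (the bootstrap address and the admin principal appear in this log; the credentials and class name are placeholders, not part of the test), the resulting single-replica layout could be inspected with the Kafka AdminClient:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeOffsetsTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Broker address and security settings as reported in this log.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        try (Admin admin = Admin.create(props)) {
            TopicDescription offsets = admin.describeTopics(List.of("__consumer_offsets"))
                .allTopicNames().get().get("__consumer_offsets");
            // With this single-broker test setup, every partition reports leader=1, isr=[1].
            System.out.println("__consumer_offsets partition count: " + offsets.partitions().size());
        }
    }
}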
17:18:40.468 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: 1 rack: null) 17:18:40.468 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=18) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:40.471 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=18): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:40.471 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":18,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":1.472,"requestQueueTimeMs":0.134,"localTimeMs":1.163,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.058,"sendTimeMs":0.115,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.471 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 17:18:40.471 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Updated cluster metadata updateVersion 10 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:40.471 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FindCoordinator request to broker localhost:38099 (id: 1 rack: null) 17:18:40.471 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=19) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 17:18:40.474 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=19): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=38099, errorCode=0, errorMessage='')]) 17:18:40.474 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":19,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":1,"host":"localhost","port":38099,"errorCode":0,"errorMessage":""}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":2.392,"requestQueueTimeMs":0.08,"localTimeMs":2.162,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.046,"sendTimeMs":0.103,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.474 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1762449520474, latencyMs=3, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=19), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=38099, errorCode=0, errorMessage='')])) 17:18:40.475 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Discovered group 
coordinator localhost:38099 (id: 2147483646 rack: null) 17:18:40.475 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:40.475 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 2147483646 rack: null) using address localhost/127.0.0.1 17:18:40.475 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:40.475 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:40.475 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37650 on /127.0.0.1:38099 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:40.475 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:37650 17:18:40.479 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Executing onJoinPrepare with generation -1 and memberId 17:18:40.479 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Marking assigned partitions pending for revocation: [] 17:18:40.479 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Heartbeat thread started 17:18:40.481 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending asynchronous auto-commit of offsets {} 17:18:40.484 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483646 17:18:40.484 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:40.484 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Completed connection to node 2147483646. Fetching API versions. 
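At this point the consumer has discovered the group coordinator (node id 2147483646, i.e. Integer.MAX_VALUE minus the broker id) and is opening a second connection to it over SASL_PLAINTEXT with the PLAIN mechanism. A minimal sketch of the client configuration that would produce this sequence, assuming only what the log itself reports (bootstrap address localhost:38099, group mso-group, client id prefix mso-123456-consumer, session timeout 50000 ms, rebalance timeout 600000 ms, principal admin); the password and deserializers are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslPlainConsumerSketch {
    static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");   // the log shows a UUID suffix appended
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 50000);          // sessionTimeoutMs in the JoinGroup request
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000);       // rebalanceTimeoutMs in the JoinGroup request
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // SASL_PLAINTEXT with the PLAIN mechanism, as negotiated in the handshake that follows.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        return new KafkaConsumer<>(props);
    }
}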
17:18:40.484 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:40.484 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:40.484 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] (Re-)joining group 17:18:40.485 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Joining group with current subscription: [my-test-topic] 17:18:40.485 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:40.490 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='')) to coordinator localhost:38099 (id: 2147483646 rack: null) 17:18:40.491 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:40.491 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:40.491 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:40.491 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:40.492 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:40.494 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to INITIAL 17:18:40.494 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to INTERMEDIATE 17:18:40.494 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Completed asynchronous auto-commit of offsets {} 17:18:40.494 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:40.494 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:40.494 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:40.494 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to COMPLETE 17:18:40.494 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 17:18:40.494 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 17:18:40.494 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating API versions fetch from node 2147483646. 
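Authentication completes on both sides here: the broker's SaslServerAuthenticator reaches COMPLETE and the client finishes with no session expiration. The broker side of this listener is not shown in the log; as an assumption-labelled sketch only, a PLAIN listener like the one used here is typically enabled with broker properties along these lines (port and mechanism taken from the log, everything else illustrative):

import java.util.Properties;

public class SaslBrokerPropsSketch {
    // Illustrative only: the actual embedded-broker setup used by this test is not visible in the log.
    static Properties brokerSaslProps() {
        Properties props = new Properties();
        props.put("listeners", "SASL_PLAINTEXT://localhost:38099");
        props.put("sasl.enabled.mechanisms", "PLAIN");
        props.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
        // Per-listener JAAS configuration for PLAIN; credentials are placeholders.
        props.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\" "
            + "user_admin=\"admin-secret\";");
        return props;
    }
}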
17:18:40.494 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=21) and timeout 30000 to node 2147483646: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:40.497 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":21,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalize
dFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":1.016,"requestQueueTimeMs":0.225,"localTimeMs":0.551,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.064,"sendTimeMs":0.174,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:40.497 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received API_VERSIONS response from node 2147483646 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=21): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:40.497 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 2147483646 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
17:18:40.497 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=20) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='') 17:18:40.509 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Dynamic member with unknown member id joins group mso-group in Empty state. Created a new member id mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 and request the member to rejoin with this id. 17:18:40.514 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=20): JoinGroupResponseData(throttleTimeMs=0, errorCode=79, generationId=-1, protocolType=null, protocolName=null, leader='', skipAssignment=false, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', members=[]) 17:18:40.514 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] JoinGroup failed due to non-fatal error: MEMBER_ID_REQUIRED. Will set the member id as mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 and then rejoin. Sent generation was Generation{generationId=-1, memberId='', protocol='null'} 17:18:40.514 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Request joining group due to: need to re-join with the given member-id: mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 17:18:40.515 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:18:40.515 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] (Re-)joining group 17:18:40.515 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Joining group with current subscription: [my-test-topic] 17:18:40.515 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException')) to coordinator localhost:38099 (id: 2147483646 rack: null) 17:18:40.515 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=22) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException') 17:18:40.516 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":20,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":""},"response":{"throttleTimeMs":0,"errorCode":79,"generationId":-1,"protocolType":null,"protocolName":null,"leader":"","skipAssignment":false,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","members":[]},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":15.735,"requestQueueTimeMs":3.068,"localTimeMs":12.108,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.217,"sendTimeMs":0.34,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:40.520 [data-plane-kafka-request-handler-1] DEBUG kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Pending dynamic member with id mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 joins group mso-group in Empty state. Adding to the group now. 17:18:40.523 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:40.525 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Preparing to rebalance group mso-group in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) 17:18:43.533 [executor-Rebalance] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Stabilized group mso-group generation 1 (__consumer_offsets-37) with 1 members 17:18:43.536 [executor-Rebalance] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:43.537 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=22): JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', skipAssignment=false, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 
101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 17:18:43.537 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received successful JoinGroup response: JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', skipAssignment=false, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 17:18:43.537 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Enabling heartbeat thread 17:18:43.537 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Successfully joined group with generation Generation{generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', protocol='range'} 17:18:43.538 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":22,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":"rebalance failed due to MemberIdRequiredException"},"response":{"throttleTimeMs":0,"errorCode":0,"generationId":1,"protocolType":"consumer","protocolName":"range","leader":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","skipAssignment":false,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","members":[{"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null,"metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="}]},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":3019.907,"requestQueueTimeMs":0.265,"localTimeMs":8.368,"remoteTimeMs":3010.033,"throttleTimeMs":0,"responseQueueTimeMs":0.119,"sendTimeMs":1.121,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:43.538 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Performing assignment using strategy range with subscriptions 
{mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183=Subscription(topics=[my-test-topic], ownedPartitions=[], groupInstanceId=null)} 17:18:43.541 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Finished assignment for group at generation 1: {mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183=Assignment(partitions=[my-test-topic-0])} 17:18:43.545 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending leader SyncGroup to coordinator localhost:38099 (id: 2147483646 rack: null): SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 17:18:43.546 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending SYNC_GROUP request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=23) and timeout 30000 to node 2147483646: SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 17:18:43.555 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key GroupSyncKey(mso-group) unblocked 1 Rebalance operations 17:18:43.555 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Assignment received from leader mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 for group mso-group for generation 1. The group has 1 members, 0 of which are static. 
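The exchange above is the standard consumer-group handshake: the first JoinGroup is rejected with MEMBER_ID_REQUIRED (errorCode 79), the consumer rejoins with the broker-assigned member id, is made group leader, performs the range assignment, and the coordinator stabilizes generation 1 with one member. A hedged sketch of the client-side calls that drive this sequence, using the topic and group already shown in the log (the rebalance listener is only there to surface the assignment and is an assumption, not part of the test):

import java.time.Duration;
import java.util.Collection;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class JoinGroupSketch {
    static void run(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("my-test-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // First generation: nothing owned yet, matching "Marking assigned partitions pending for revocation: []".
            }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Expected to receive [my-test-topic-0] once SyncGroup completes, as in the log.
                System.out.println("Assigned: " + partitions);
            }
        });
        // poll() drives FindCoordinator, JoinGroup (including the MEMBER_ID_REQUIRED retry),
        // SyncGroup and the committed-offset fetch before any records are returned.
        consumer.poll(Duration.ofSeconds(5));
    }
}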
17:18:43.604 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 1 (exclusive)with recovery point 1, last flushed: 1762449520188, current time: 1762449523604,unflushed: 1 17:18:43.674 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=1 segment=[0:458]) 17:18:43.677 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 94 ms 17:18:43.686 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:43.687 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received SYNC_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=23): SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 17:18:43.687 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received successful SyncGroup response: SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 17:18:43.687 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Successfully synced group in generation Generation{generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', protocol='range'} 17:18:43.687 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Executing onJoinComplete with generation 1 and memberId mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 17:18:43.687 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Notifying assignor about the new Assignment(partitions=[my-test-topic-0]) 17:18:43.687 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":14,"requestApiVersion":5,"correlationId":23,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"SYNC_GROUP"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null,"protocolType":"consumer","protocolName":"range","assignments":[{"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"}]},"response":{"throttleTimeMs":0,"errorCode":0,"protocolType":"consumer","protocolName":"range","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":138.353,"requestQueueTimeMs":3.598,"localTimeMs":133.807,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.367,"sendTimeMs":0.579,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:43.691 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Adding newly assigned partitions: my-test-topic-0 17:18:43.695 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Fetching committed offsets for partitions: [my-test-topic-0] 17:18:43.698 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending OFFSET_FETCH request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=24) and timeout 30000 to node 2147483646: OffsetFetchRequestData(groupId='', topics=[], groups=[OffsetFetchRequestGroup(groupId='mso-group', topics=[OffsetFetchRequestTopics(name='my-test-topic', partitionIndexes=[0])])], requireStable=true) 17:18:43.712 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received OFFSET_FETCH response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=24): OffsetFetchResponseData(throttleTimeMs=0, topics=[], errorCode=0, groups=[OffsetFetchResponseGroup(groupId='mso-group', topics=[OffsetFetchResponseTopics(name='my-test-topic', partitions=[OffsetFetchResponsePartitions(partitionIndex=0, committedOffset=-1, committedLeaderEpoch=-1, metadata='', errorCode=0)])], errorCode=0)]) 17:18:43.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":9,"requestApiVersion":8,"correlationId":24,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"OFFSET_FETCH"},"request":{"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitionIndexes":[0]}]}],"requireStable":true},"response":{"throttleTimeMs":0,"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":-1,"committedLeaderEpoch":-1,"metadata":"","errorCode":0}]}],"errorCode":0}]},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":13.767,"requestQueueTimeMs":3.489,"localTimeMs":9.991,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.196,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:43.712 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Found no committed offset for partition my-test-topic-0 17:18:43.715 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending ListOffsetRequest ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) to broker localhost:38099 (id: 1 rack: null) 17:18:43.716 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending LIST_OFFSETS request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=25) and timeout 30000 to node 1: ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) 17:18:43.746 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received LIST_OFFSETS response from node 1 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=25): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='my-test-topic', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=0, leaderEpoch=0)])]) 17:18:43.746 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Handling ListOffsetResponse response for my-test-topic-0. 
Fetched offset 0, timestamp -1 17:18:43.746 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":2,"requestApiVersion":7,"correlationId":25,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"LIST_OFFSETS"},"request":{"replicaId":-1,"isolationLevel":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"currentLeaderEpoch":0,"timestamp":-1}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0,"timestamp":-1,"offset":0,"leaderEpoch":0}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":16.194,"requestQueueTimeMs":2.321,"localTimeMs":13.459,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.103,"sendTimeMs":0.309,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:43.748 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Not replacing existing epoch 0 with new epoch 0 for partition my-test-topic-0 17:18:43.748 [main] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Resetting offset for partition my-test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}}. 17:18:43.752 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:43.753 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 1 with 1 partition(s). 
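
The OFFSET_FETCH above returns committedOffset=-1 (no committed offset for my-test-topic-0), and the consumer then issues LIST_OFFSETS with timestamp=-1, which appears to correspond to the default auto.offset.reset=latest falling back to the log-end offset (0 on the still-empty partition) before the first full fetch is built. A small sketch of reading the same information through the public consumer API, reusing the hypothetical consumer from the earlier sketch:

    // Editor's sketch: the wire-level values above (committedOffset=-1, ListOffsets
    // timestamp=-1, reset to offset 0) are also visible through the consumer API.
    import java.util.Collections;
    import java.util.Map;
    import java.util.Set;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    final class OffsetResetSketch {
        // Assumes my-test-topic-0 has already been assigned by a prior poll().
        static void inspect(KafkaConsumer<String, String> consumer) {
            TopicPartition tp = new TopicPartition("my-test-topic", 0);
            Set<TopicPartition> partitions = Collections.singleton(tp);

            // Mirrors the OFFSET_FETCH in the log: a null entry means "no committed offset",
            // which committedOffset=-1 encodes on the wire.
            Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(partitions);

            // Mirrors the LIST_OFFSETS with timestamp=-1: the log-end offset, 0 here because
            // nothing has been produced to the partition yet.
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);

            System.out.printf("committed=%s endOffset=%d position=%d%n",
                    committed.get(tp), endOffsets.get(tp), consumer.position(tp));
        }
    }
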
17:18:43.754 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED FullFetchRequest(toSend=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:43.756 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=26) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=0, sessionEpoch=0, topics=[FetchTopic(topic='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=0, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 17:18:43.765 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new full FetchContext with 1 partition(s). 17:18:43.843 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Processing automatic preferred replica leader election 17:18:43.852 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Topics not in preferred replica for broker 1 HashMap() 17:18:43.853 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 300000 ms and period -1000 ms. 17:18:44.303 [executor-Fetch] DEBUG kafka.server.FetchSessionCache - Created fetch session FetchSession(id=816286608, privileged=false, partitionMap.size=1, usesTopicIds=true, creationMs=1762449524299, lastUsedMs=1762449524299, epoch=1) 17:18:44.307 [executor-Fetch] DEBUG kafka.server.FullFetchContext - Full fetch context with session id 816286608 returning 1 partition(s) 17:18:44.315 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":26,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":0,"sessionEpoch":0,"topics":[{"topicId":"7trtN51tR76cWI19Q3IoTQ","partitions":[{"partition":0,"currentLeaderEpoch":0,"fetchOffset":0,"lastFetchedEpoch":-1,"logStartOffset":-1,"partitionMaxBytes":1048576}]}],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[{"topicId":"7trtN51tR76cWI19Q3IoTQ","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":0,"lastStableOffset":0,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":0}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":557.777,"requestQueueTimeMs":4.753,"localTimeMs":29.762,"remoteTimeMs":522.921,"throttleTimeMs":0,"responseQueueTimeMs":0.096,"sendTimeMs":0.243,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:44.315 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=26): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[FetchableTopicResponse(topic='', topicId=7trtN51tR76cWI19Q3IoTQ, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3]))])]) 17:18:44.318 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent a full fetch response that created a new incremental fetch session 816286608 with 1 response partition(s) 17:18:44.320 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3])) 17:18:44.324 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:44.324 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=1) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:44.324 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:44.324 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=27) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=1, topics=[], forgottenTopicsData=[], rackId='') 17:18:44.329 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 2: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:44.837 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:44.838 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=27): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:44.839 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:44.839 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":27,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":1,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":513.444,"requestQueueTimeMs":0.206,"localTimeMs":8.27,"remoteTimeMs":504.36,"throttleTimeMs":0,"responseQueueTimeMs":0.228,"sendTimeMs":0.379,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:44.840 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 
(id: 1 rack: null) 17:18:44.840 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:44.840 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:44.840 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=28) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=2, topics=[], forgottenTopicsData=[], rackId='') 17:18:44.841 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 3: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:45.344 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:45.345 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=28): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:45.345 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:45.346 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:45.346 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":28,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":2,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.231,"requestQueueTimeMs":0.313,"localTimeMs":1.496,"remoteTimeMs":501.828,"throttleTimeMs":0,"responseQueueTimeMs":0.188,"sendTimeMs":0.404,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:45.346 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=3) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:45.346 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:45.346 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=29) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=3, topics=[], forgottenTopicsData=[], rackId='') 17:18:45.347 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 4: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:45.850 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:45.851 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=29): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:45.852 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:45.852 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":29,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":3,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.304,"requestQueueTimeMs":0.232,"localTimeMs":1.456,"remoteTimeMs":501.91,"throttleTimeMs":0,"responseQueueTimeMs":0.196,"sendTimeMs":0.508,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:45.852 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:45.852 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:45.852 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:45.852 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=30) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=4, topics=[], forgottenTopicsData=[], rackId='') 17:18:45.853 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 5: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:46.355 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:46.356 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=30): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:46.357 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an 
incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:46.357 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":30,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":4,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.194,"requestQueueTimeMs":0.18,"localTimeMs":1.243,"remoteTimeMs":501.184,"throttleTimeMs":0,"responseQueueTimeMs":0.183,"sendTimeMs":0.402,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:46.357 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:46.357 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=5) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:46.357 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:46.357 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=31) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=5, topics=[], forgottenTopicsData=[], rackId='') 17:18:46.358 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 6: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:46.538 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 to coordinator localhost:38099 (id: 2147483646 rack: null) 17:18:46.540 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=32) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null) 17:18:46.545 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:46.549 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=32): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:18:46.549 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received successful Heartbeat response 17:18:46.549 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":32,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":7.749,"requestQueueTimeMs":1.751,"localTimeMs":5.743,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.087,"sendTimeMs":0.167,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:46.860 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:46.862 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=31): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:46.862 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:46.862 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":31,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":5,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.304,"requestQueueTimeMs":0.181,"localTimeMs":1.169,"remoteTimeMs":501.398,"throttleTimeMs":0,"responseQueueTimeMs":0.198,"sendTimeMs":0.356,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:46.863 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:46.863 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=6) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:46.864 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:46.864 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=33) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=6, topics=[], forgottenTopicsData=[], rackId='') 17:18:46.865 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 7: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:47.368 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:47.369 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=33): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:47.369 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:47.370 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":33,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":6,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.312,"requestQueueTimeMs":0.208,"localTimeMs":1.626,"remoteTimeMs":501.973,"throttleTimeMs":0,"responseQueueTimeMs":0.157,"sendTimeMs":0.345,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:47.370 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 
(id: 1 rack: null) 17:18:47.370 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=7) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:47.370 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:47.370 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=34) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=7, topics=[], forgottenTopicsData=[], rackId='') 17:18:47.372 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 8: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:47.874 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:47.875 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=34): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:47.875 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:47.875 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":34,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":7,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.85,"requestQueueTimeMs":0.209,"localTimeMs":1.575,"remoteTimeMs":501.625,"throttleTimeMs":0,"responseQueueTimeMs":0.132,"sendTimeMs":0.307,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:47.876 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, 
groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:47.876 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=8) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:47.876 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:47.876 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=35) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=8, topics=[], forgottenTopicsData=[], rackId='') 17:18:47.877 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 9: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:48.378 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:48.380 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=35): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:48.380 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":35,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":8,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":502.761,"requestQueueTimeMs":0.176,"localTimeMs":0.853,"remoteTimeMs":501.27,"throttleTimeMs":0,"responseQueueTimeMs":0.143,"sendTimeMs":0.318,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:48.380 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response 
with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:48.381 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:48.381 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=9) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:48.381 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:48.381 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=36) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=9, topics=[], forgottenTopicsData=[], rackId='') 17:18:48.382 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 10: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:48.689 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:18:48.691 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=37) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 17:18:48.702 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:48.710 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit7122242531084360278] Flushing 
log up to offset 2 (exclusive)with recovery point 2, last flushed: 1762449523674, current time: 1762449528710,unflushed: 1 17:18:48.716 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=1 segment=[0:458]) to (offset=2 segment=[0:582]) 17:18:48.716 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 7 ms 17:18:48.724 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=37): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 17:18:48.724 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":37,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":31.604,"requestQueueTimeMs":4.456,"localTimeMs":26.66,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.142,"sendTimeMs":0.345,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:48.724 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 17:18:48.724 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:18:48.884 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:48.884 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=36): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:48.885 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs 
= 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:48.885 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":36,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":9,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":502.623,"requestQueueTimeMs":0.166,"localTimeMs":0.7,"remoteTimeMs":501.295,"throttleTimeMs":0,"responseQueueTimeMs":0.152,"sendTimeMs":0.307,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:48.885 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:48.886 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=10) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:48.886 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:48.886 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=38) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=10, topics=[], forgottenTopicsData=[], rackId='') 17:18:48.887 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 11: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:49.390 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:49.391 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=38): FetchResponseData(throttleTimeMs=0, errorCode=0, 
sessionId=816286608, responses=[]) 17:18:49.391 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":38,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":10,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.311,"requestQueueTimeMs":0.176,"localTimeMs":1.135,"remoteTimeMs":502.465,"throttleTimeMs":0,"responseQueueTimeMs":0.213,"sendTimeMs":0.319,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:49.392 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:49.392 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:49.392 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=11) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:49.393 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:49.393 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=39) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=11, topics=[], forgottenTopicsData=[], rackId='') 17:18:49.395 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 12: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:49.539 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 to coordinator localhost:38099 (id: 2147483646 rack: null) 17:18:49.540 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=40) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null) 17:18:49.541 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:49.542 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=40): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:18:49.542 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received successful Heartbeat response 17:18:49.543 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":40,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":1.606,"requestQueueTimeMs":0.258,"localTimeMs":0.986,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.102,"sendTimeMs":0.257,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:49.896 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:49.897 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=39): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:49.898 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:49.898 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":39,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":11,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.087,"requestQueueTimeMs":0.258,"localTimeMs":1.436,"remoteTimeMs":500.943,"throttleTimeMs":0,"responseQueueTimeMs":0.128,"sendTimeMs":0.321,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:49.899 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:49.899 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=12) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:49.899 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:49.899 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=41) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=12, topics=[], forgottenTopicsData=[], rackId='') 17:18:49.900 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 13: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:50.402 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:50.403 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=41): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:50.404 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:50.404 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:50.404 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=13) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:50.404 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":41,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":12,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.908,"requestQueueTimeMs":0.2,"localTimeMs":1.595,"remoteTimeMs":501.584,"throttleTimeMs":0,"responseQueueTimeMs":0.206,"sendTimeMs":0.321,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:50.404 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:50.404 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=42) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=13, topics=[], forgottenTopicsData=[], rackId='') 17:18:50.405 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 14: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:50.444 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:50.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 17:18:50.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 17:18:50.445 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000002daa60000 after 1ms. 
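From 17:18:48 onward the trace is the consumer's steady state against the embedded test broker on localhost:38099: roughly every 500 ms it sends an incremental FETCH for my-test-topic-0 that comes back empty, the kafka-coordinator-heartbeat-thread pings the group coordinator about every 3 s, and an asynchronous auto-commit of offset 0 for group mso-group goes out about every 5 s. The sketch below is a hypothetical, minimal Java consumer (not taken from this repository or its tests) whose configuration would be consistent with the logged requests; the class name, the property values, the PLAIN mechanism, and the JAAS credentials are assumptions inferred from the log, not confirmed test code.

// Minimal sketch, assuming a plain KafkaConsumer; values are inferred from the
// DEBUG trace above (maxWaitMs=500 in FETCH requests, ~5 s between auto-commits,
// ~3 s between heartbeats), not taken from the project's actual configuration.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PairwiseConsumerSketch {                                          // hypothetical class name
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // test broker port seen in the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");     // the logged client id carries a UUID suffix, presumably added by the caller
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");           // matches the asynchronous auto-commit entries
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");      // assumed: commits appear ~5 s apart (the default interval)
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");             // matches maxWaitMs=500 in the FETCH requests (the default)
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // SASL_PLAINTEXT with principal User:admin appears in kafka.request.logger;
        // the mechanism and password below are placeholders, not the test's real values.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // Each poll() drives one incremental FETCH (empty here, so the broker parks it
            // for up to fetch.max.wait.ms), while heartbeats and auto-commits run in the background.
            for (int i = 0; i < 20; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
            }
        }
    }
}

With nothing to return for my-test-topic-0, each FETCH sits on the broker for close to fetch.max.wait.ms (the ~501 ms remoteTimeMs in the kafka.request.logger entries) before the empty incremental response is sent, which is why the fetch session epoch advances by one roughly every half second in the lines that follow.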
17:18:50.908 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:50.909 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=42): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:50.910 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:50.910 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":42,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":13,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.601,"requestQueueTimeMs":0.215,"localTimeMs":1.415,"remoteTimeMs":502.066,"throttleTimeMs":0,"responseQueueTimeMs":0.334,"sendTimeMs":0.569,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:50.910 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:50.911 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=14) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:50.911 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:50.911 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=43) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=14, topics=[], forgottenTopicsData=[], rackId='') 17:18:50.913 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 15: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:51.415 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:51.416 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=43): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:51.417 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:51.417 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:51.417 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":43,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":14,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.891,"requestQueueTimeMs":0.325,"localTimeMs":1.883,"remoteTimeMs":501.777,"throttleTimeMs":0,"responseQueueTimeMs":0.279,"sendTimeMs":0.626,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:51.418 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=15) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:51.418 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:51.418 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=44) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=15, topics=[], forgottenTopicsData=[], rackId='') 17:18:51.420 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 16: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:51.921 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:51.922 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=44): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:51.923 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":44,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":15,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.279,"requestQueueTimeMs":0.32,"localTimeMs":1.35,"remoteTimeMs":501.253,"throttleTimeMs":0,"responseQueueTimeMs":0.137,"sendTimeMs":0.218,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:51.923 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:51.924 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:51.924 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=16) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:51.924 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:51.924 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=45) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=16, topics=[], forgottenTopicsData=[], rackId='') 17:18:51.925 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 17: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:52.428 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:52.429 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=45): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:52.429 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":45,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":16,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.736,"requestQueueTimeMs":0.205,"localTimeMs":1.644,"remoteTimeMs":501.44,"throttleTimeMs":0,"responseQueueTimeMs":0.139,"sendTimeMs":0.305,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:52.429 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:52.430 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: 
null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:52.430 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=17) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:52.430 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:52.431 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=46) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=17, topics=[], forgottenTopicsData=[], rackId='') 17:18:52.432 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 18: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:52.540 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 to coordinator localhost:38099 (id: 2147483646 rack: null) 17:18:52.540 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=47) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null) 17:18:52.541 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:52.542 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=47): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:18:52.542 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":47,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":1.252,"requestQueueTimeMs":0.248,"localTimeMs":0.777,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.085,"sendTimeMs":0.14,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:52.543 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received successful Heartbeat response 17:18:52.703 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.708 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.708 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38. 
Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.709 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.710 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8. 
Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.711 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40. 
Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.712 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.713 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35. 
Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.714 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2. Last clean offset=None now=1762449532697 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 17:18:52.935 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:52.937 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=46): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:52.937 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:52.937 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":46,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":17,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.915,"requestQueueTimeMs":0.299,"localTimeMs":1.477,"remoteTimeMs":502.589,"throttleTimeMs":0,"responseQueueTimeMs":0.203,"sendTimeMs":0.344,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:52.937 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:52.938 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=18) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:52.938 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:52.938 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=48) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=18, topics=[], forgottenTopicsData=[], rackId='') 17:18:52.939 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 19: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:53.441 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:53.442 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=48): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:53.442 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:53.443 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:53.443 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=19) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:53.443 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":48,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":18,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.033,"requestQueueTimeMs":0.245,"localTimeMs":1.45,"remoteTimeMs":500.74,"throttleTimeMs":0,"responseQueueTimeMs":0.204,"sendTimeMs":0.392,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:53.443 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:53.443 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=49) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=19, topics=[], forgottenTopicsData=[], rackId='') 17:18:53.444 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 20: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:53.689 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:18:53.689 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=50) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 17:18:53.691 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key 
MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:53.693 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1762449528716, current time: 1762449533693,unflushed: 1 17:18:53.697 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=2 segment=[0:582]) to (offset=3 segment=[0:706]) 17:18:53.698 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 6 ms 17:18:53.699 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=50): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 17:18:53.699 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":50,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":9.179,"requestQueueTimeMs":0.204,"localTimeMs":8.616,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.108,"sendTimeMs":0.249,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:53.699 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 17:18:53.700 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:18:53.947 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:53.948 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, 
correlationId=49): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:53.949 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:53.949 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":49,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":19,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.406,"requestQueueTimeMs":0.238,"localTimeMs":1.549,"remoteTimeMs":502.057,"throttleTimeMs":0,"responseQueueTimeMs":0.208,"sendTimeMs":0.351,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:53.949 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:53.949 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=20) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:53.950 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:53.950 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=51) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=20, topics=[], forgottenTopicsData=[], rackId='') 17:18:53.951 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 21: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:54.452 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:54.454 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=51): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:54.454 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:54.454 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":51,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":20,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":502.837,"requestQueueTimeMs":0.184,"localTimeMs":1.122,"remoteTimeMs":500.84,"throttleTimeMs":0,"responseQueueTimeMs":0.224,"sendTimeMs":0.465,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:54.454 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 
(id: 1 rack: null) 17:18:54.457 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=21) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:54.457 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:54.457 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=52) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=21, topics=[], forgottenTopicsData=[], rackId='') 17:18:54.459 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 22: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:54.961 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:54.962 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=52): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:54.962 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:54.963 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:54.963 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=22) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:54.963 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:54.963 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":52,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":21,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.051,"requestQueueTimeMs":0.251,"localTimeMs":1.015,"remoteTimeMs":501.267,"throttleTimeMs":0,"responseQueueTimeMs":0.207,"sendTimeMs":1.309,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:54.963 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=53) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=22, topics=[], forgottenTopicsData=[], rackId='') 17:18:54.964 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 23: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:55.467 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:55.468 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=53): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:55.468 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":53,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":22,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.249,"requestQueueTimeMs":0.193,"localTimeMs":0.984,"remoteTimeMs":502.457,"throttleTimeMs":0,"responseQueueTimeMs":0.341,"sendTimeMs":0.272,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:55.469 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:55.469 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:55.469 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=23) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:55.470 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:55.470 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=54) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=23, topics=[], forgottenTopicsData=[], rackId='') 17:18:55.471 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 24: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:55.541 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 to coordinator localhost:38099 (id: 2147483646 rack: null) 17:18:55.542 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=55) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null) 17:18:55.543 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:55.544 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=55): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:18:55.544 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received successful Heartbeat response 17:18:55.544 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":55,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":1.628,"requestQueueTimeMs":0.312,"localTimeMs":0.881,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.126,"sendTimeMs":0.308,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:55.973 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:55.974 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=54): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:55.974 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":54,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":23,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.214,"requestQueueTimeMs":0.199,"localTimeMs":1.898,"remoteTimeMs":500.685,"throttleTimeMs":0,"responseQueueTimeMs":0.153,"sendTimeMs":0.276,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:55.974 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:55.975 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:55.975 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=24) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:55.976 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:55.976 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=56) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=24, topics=[], forgottenTopicsData=[], rackId='') 17:18:55.977 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 25: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:56.479 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:56.481 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=56): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:56.481 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":56,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":24,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.752,"requestQueueTimeMs":0.238,"localTimeMs":1.046,"remoteTimeMs":501.745,"throttleTimeMs":0,"responseQueueTimeMs":0.266,"sendTimeMs":0.455,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:56.481 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:56.482 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node 
localhost:38099 (id: 1 rack: null) 17:18:56.482 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=25) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:56.482 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:56.482 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=57) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') 17:18:56.483 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 26: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:56.986 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:56.987 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=57): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:56.987 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":57,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":25,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.894,"requestQueueTimeMs":0.248,"localTimeMs":1.625,"remoteTimeMs":501.54,"throttleTimeMs":0,"responseQueueTimeMs":0.208,"sendTimeMs":0.27,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:56.987 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:56.988 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:56.988 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=26) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:56.988 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:56.988 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=58) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=26, topics=[], forgottenTopicsData=[], rackId='') 17:18:56.990 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 27: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:57.492 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:57.493 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=58): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:57.493 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:57.494 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:57.494 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=27) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:57.494 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:57.495 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=59) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=27, topics=[], forgottenTopicsData=[], rackId='') 17:18:57.495 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":58,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":26,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.822,"requestQueueTimeMs":0.244,"localTimeMs":1.418,"remoteTimeMs":501.174,"throttleTimeMs":0,"responseQueueTimeMs":0.181,"sendTimeMs":0.802,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:57.500 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 28: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:58.003 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:58.004 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=59): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:58.004 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:58.004 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":59,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":27,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":505.993,"requestQueueTimeMs":2.04,"localTimeMs":1.426,"remoteTimeMs":502.121,"throttleTimeMs":0,"responseQueueTimeMs":0.131,"sendTimeMs":0.272,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:58.005 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:58.005 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=28) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:58.005 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:58.005 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=60) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=28, topics=[], forgottenTopicsData=[], rackId='') 17:18:58.006 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 29: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:58.509 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:58.510 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":60,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":28,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":504.138,"requestQueueTimeMs":0.18,"localTimeMs":1.322,"remoteTimeMs":502.198,"throttleTimeMs":0,"responseQueueTimeMs":0.174,"sendTimeMs":0.262,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:58.511 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=60): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:58.511 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:58.511 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:58.511 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=29) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:58.512 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:58.512 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=61) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=29, topics=[], forgottenTopicsData=[], rackId='') 17:18:58.513 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 30: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:58.542 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183 to coordinator localhost:38099 (id: 2147483646 rack: null) 17:18:58.542 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=62) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null) 17:18:58.543 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:58.544 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=62): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 17:18:58.544 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received successful Heartbeat response 17:18:58.544 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":62,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":1.467,"requestQueueTimeMs":0.209,"localTimeMs":0.901,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.046,"sendTimeMs":0.309,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:58.689 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:18:58.689 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=63) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 17:18:58.692 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183) unblocked 1 Heartbeat operations 17:18:58.693 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 4 (exclusive)with recovery point 4, last flushed: 1762449533697, current time: 1762449538693,unflushed: 1 17:18:58.699 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=3 segment=[0:706]) to (offset=4 segment=[0:830]) 17:18:58.699 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 6 ms 17:18:58.700 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=63): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 17:18:58.700 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":63,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853-afe88ee5-0cc1-4178-b5f2-14bcfb895183","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37650-3","totalTimeMs":10.267,"requestQueueTimeMs":0.242,"localTimeMs":9.808,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.074,"sendTimeMs":0.141,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:58.700 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 17:18:58.701 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 17:18:59.015 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:59.016 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=61): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:59.017 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:59.017 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":61,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":29,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.583,"requestQueueTimeMs":0.363,"localTimeMs":2.305,"remoteTimeMs":500.355,"throttleTimeMs":0,"responseQueueTimeMs":0.161,"sendTimeMs":0.397,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:59.017 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:59.017 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=30) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:59.017 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:59.017 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=64) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=30, topics=[], forgottenTopicsData=[], rackId='') 17:18:59.019 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 31: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:59.520 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 0 partition(s) 17:18:59.521 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=64): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[]) 17:18:59.522 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 0 response partition(s), 1 implied partition(s) 17:18:59.522 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":64,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":30,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":503.239,"requestQueueTimeMs":0.382,"localTimeMs":1.104,"remoteTimeMs":501.374,"throttleTimeMs":0,"responseQueueTimeMs":0.109,"sendTimeMs":0.268,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:59.522 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:59.522 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=31) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:59.522 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:59.522 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=65) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=31, topics=[], forgottenTopicsData=[], rackId='') 17:18:59.524 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 32: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 17:18:59.631 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [SASL_PLAINTEXT://localhost:38099] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO 
metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:18:59.641 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Instantiated an idempotent producer. 17:18:59.656 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:18:59.656 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:18:59.656 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449539656 17:18:59.656 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Starting Kafka producer I/O thread. 
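For reference, the ProducerConfig dump above corresponds to an idempotent, String-serialized producer speaking SASL PLAIN over SASL_PLAINTEXT to localhost:38099 (acks = -1, enable.idempotence = true). The following is a minimal Java sketch, not the test's actual code, that yields an equivalent client configuration; the sasl.jaas.config value is logged as [hidden], so the credentials below are placeholders, and the class name plus the record key/value are hypothetical (only the topic name my-test-topic appears in the log above).

    // Illustrative sketch only: reproduces the key ProducerConfig values visible in the log.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PairwiseProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ProducerConfig dump in the log above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:38099");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // log: enable.idempotence = true
            props.put(ProducerConfig.ACKS_CONFIG, "all");                // log: acks = -1 (i.e. "all")
            props.put("security.protocol", "SASL_PLAINTEXT");            // log: security.protocol = SASL_PLAINTEXT
            props.put("sasl.mechanism", "PLAIN");                        // log: sasl.mechanism = PLAIN
            // The real JAAS line is redacted ("[hidden]") in the log; placeholder credentials only.
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                  + "username=\"<user>\" password=\"<password>\";");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Topic name comes from the consumer log above; key/value are placeholders.
                producer.send(new ProducerRecord<>("my-test-topic", "key", "value"));
                producer.flush();
            }
        }
    }

With a configuration like this, the client would log the same "Instantiated an idempotent producer" line and perform the SASL PLAIN handshake (SEND_APIVERSIONS_REQUEST through COMPLETE) seen in the broker and producer DEBUG entries that follow.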
17:18:59.656 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Kafka producer started 17:18:59.657 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Transition from state UNINITIALIZED to INITIALIZING 17:18:59.660 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:18:59.660 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: -1 rack: null) for sending metadata request 17:18:59.660 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:59.660 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: -1 rack: null) using address localhost/127.0.0.1 17:18:59.661 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:59.661 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:59.661 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:46798 on /127.0.0.1:38099 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:59.661 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:46798 17:18:59.664 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 17:18:59.665 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:59.665 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:59.665 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:59.665 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Completed connection to node -1. Fetching API versions. 17:18:59.665 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:59.666 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:59.666 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:59.666 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:59.666 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:59.666 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to INITIAL 17:18:59.666 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to INTERMEDIATE 17:18:59.667 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:59.667 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:59.667 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:59.667 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:59.667 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to COMPLETE 17:18:59.667 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Finished authentication with no session expiration and no session re-authentication 17:18:59.667 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Successfully authenticated with localhost/127.0.0.1 17:18:59.667 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating API versions fetch from node -1. 17:18:59.667 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=0) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:59.669 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), 
ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:59.670 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:46798-4","totalTimeMs":1.501,"requestQueueTimeMs":0.213,"localTimeMs":1.098,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.065,"sendTimeMs":0.125,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:59.670 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
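
The [usable: N] annotations in the API versions summary above are simply the highest version supported by both client and broker for each API key. A rough illustration of that intersection follows; the names are ad hoc and the real negotiation happens inside the Kafka client's NetworkClient/ApiVersions handling.

final class ApiVersionNegotiationSketch {
    // Returns the highest version both sides support, or throws if the ranges do not overlap.
    static short usableVersion(short clientMin, short clientMax, short brokerMin, short brokerMax) {
        short candidate = (short) Math.min(clientMax, brokerMax);
        if (candidate < Math.max(clientMin, brokerMin)) {
            throw new IllegalStateException("No overlapping API version range");
        }
        return candidate;
    }

    public static void main(String[] args) {
        // FETCH (apiKey=1): broker advertises 0..13 and the 3.3.1 client also supports up to 13,
        // which matches the requestApiVersion=13 seen for FETCH in the broker's request log earlier.
        System.out.println(usableVersion((short) 0, (short) 13, (short) 0, (short) 13)); // prints 13
    }
}
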
17:18:59.670 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38099 (id: -1 rack: null) 17:18:59.670 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=1) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 17:18:59.671 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Sending transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) to node localhost:38099 (id: -1 rack: null) with correlation ID 2 17:18:59.671 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Sending INIT_PRODUCER_ID request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=2) and timeout 30000 to node -1: InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:18:59.672 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38099, rack=null)], clusterId='Nd7IkpbZQo6_44gRDKYSkA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 17:18:59.672 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38099,"rack":null}],"clusterId":"Nd7IkpbZQo6_44gRDKYSkA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"7trtN51tR76cWI19Q3IoTQ","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38099-127.0.0.1:46798-4","totalTimeMs":1.332,"requestQueueTimeMs":0.144,"localTimeMs":1.019,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.059,"sendTimeMs":0.109,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:59.673 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to 7trtN51tR76cWI19Q3IoTQ 17:18:59.673 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Cluster ID: Nd7IkpbZQo6_44gRDKYSkA 17:18:59.673 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='Nd7IkpbZQo6_44gRDKYSkA', nodes={1=localhost:38099 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38099 (id: 1 rack: null)} 17:18:59.677 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Requesting next Producer ID block 17:18:59.680 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:59.680 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:59.680 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:59.681 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:59.681 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:46800 on /127.0.0.1:38099 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:59.681 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:46800 17:18:59.682 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 17:18:59.682 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:59.682 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Completed connection to node 1. Fetching API versions. 17:18:59.682 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:59.682 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:59.682 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:59.682 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:59.682 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:59.682 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:59.683 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:59.683 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INITIAL 17:18:59.683 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - 
[BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INTERMEDIATE 17:18:59.683 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:59.683 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:59.683 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:59.683 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:59.683 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to COMPLETE 17:18:59.683 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Finished authentication with no session expiration and no session re-authentication 17:18:59.683 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Successfully authenticated with localhost/127.0.0.1 17:18:59.683 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating API versions fetch from node 1. 
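
The INITIAL -> INTERMEDIATE -> COMPLETE client states above are the SASL PLAIN exchange: the client sends a single NUL-delimited token and the broker either accepts or rejects it. A sketch of that token layout (RFC 4616), using placeholder credentials rather than whatever the test actually configures:

import java.nio.charset.StandardCharsets;

final class SaslPlainTokenSketch {
    // The one client response PLAIN sends: [authzid] NUL authcid NUL password.
    static byte[] initialResponse(String authzid, String username, String password) {
        String token = (authzid == null ? "" : authzid) + '\0' + username + '\0' + password;
        return token.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Placeholder credentials; the real ones are hidden in the JAAS config.
        System.out.println(initialResponse(null, "admin", "admin-secret").length);
    }
}
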
17:18:59.683 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:59.684 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, 
maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:59.685 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"1","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:46800-4","totalTimeMs":0.707,"requestQueueTimeMs":0.155,"localTimeMs":0.341,"remoteTimeMs":0.0,"throttleTime
Ms":0,"responseQueueTimeMs":0.051,"sendTimeMs":0.157,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:59.685 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
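
The ALLOCATE_PRODUCER_IDS exchange that follows is the broker asking the controller for a contiguous block of producer ids, which it then hands out locally to idempotent producers via InitProducerId. A simplified sketch of that block-serving pattern is below; the 0..999 block and producerIdLen=1000 appear in the lines that follow, while the class and method names here are invented for illustration.

final class ProducerIdBlockSketch {
    private final long start;
    private final int size;
    private long next;

    ProducerIdBlockSketch(long start, int size) {
        this.start = start;
        this.size = size;
        this.next = start;
    }

    // Returns the next producer id, or -1 once the block is exhausted and a new
    // block must be requested from the controller.
    synchronized long nextProducerId() {
        if (next >= start + size) {
            return -1L;
        }
        return next++;
    }

    public static void main(String[] args) {
        ProducerIdBlockSketch block = new ProducerIdBlockSketch(0L, 1000); // mirrors producerIdStart=0, producerIdLen=1000
        System.out.println(block.nextProducerId()); // 0, matching the producerId=0 returned to the client below
    }
}
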
17:18:59.685 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending ALLOCATE_PRODUCER_IDS request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0) and timeout 30000 to node 1: AllocateProducerIdsRequestData(brokerId=1, brokerEpoch=25) 17:18:59.691 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:59.691 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 17:18:59.691 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 17:18:59.691 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 17:18:59.691 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:59.691 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:59.692 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:59.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 17:18:59.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 17:18:59.692 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 252,4 replyHeader:: 252,139,0 request:: '/latest_producer_id_block,F response:: ,s{15,15,1762449517050,1762449517050,0,0,0,0,0,0,15} 17:18:59.692 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000002daa60000 after 1ms. 
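
The ZooKeeper traffic around this point (getData on /latest_producer_id_block, then a conditional setData that moves the znode from version 0 to version 1 in the lines below) is a read-then-compare-and-set on the znode version. A minimal sketch of that pattern with the plain ZooKeeper client API follows; Kafka itself goes through KafkaZkClient, so this is illustrative only and error handling is omitted.

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

final class ZkConditionalUpdateSketch {
    // Reads the current znode version, then writes only if nobody else updated it in between.
    static int writeBlock(ZooKeeper zk, String json) throws KeeperException, InterruptedException {
        Stat stat = new Stat();
        zk.getData("/latest_producer_id_block", false, stat);                    // observe current version (0 in the log)
        byte[] payload = json.getBytes(StandardCharsets.UTF_8);
        Stat updated = zk.setData("/latest_producer_id_block", payload, stat.getVersion()); // CAS on the version
        return updated.getVersion();                                             // 1 in the log's case
    }
}
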
17:18:59.693 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 17:18:59.694 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002daa60000 17:18:59.694 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 17:18:59.694 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 17:18:59.694 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 17:18:59.694 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 289147096850 17:18:59.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:setData cxid:0xfd zxid:0x8c txntype:5 reqpath:n/a 17:18:59.696 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 17:18:59.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8c, Digest in log and actual tree: 291254641044 17:18:59.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:setData cxid:0xfd zxid:0x8c txntype:5 reqpath:n/a 17:18:59.696 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 253,5 replyHeader:: 253,140,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,0 response:: s{15,140,1762449517050,1762449539694,1,0,0,0,60,0,15} 17:18:59.697 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":1,"block_start":"0","block_end":"999"} and expected version 0 succeeded, returning the new version: 1 17:18:59.698 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 17:18:59.700 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received ALLOCATE_PRODUCER_IDS response from node 1 for request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0): AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 17:18:59.700 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Got next producer ID block from controller AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 17:18:59.700 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":67,"requestApiVersion":0,"correlationId":0,"clientId":"1","requestApiKeyName":"ALLOCATE_PRODUCER_IDS"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"throttleTimeMs":0,"errorCode":0,"producerIdStart":0,"producerIdLen":1000},"connection":"127.0.0.1:38099-127.0.0.1:46800-4","totalTimeMs":14.077,"requestQueueTimeMs":0.8,"localTimeMs":0.915,"remoteTimeMs":12.071,"throttleTimeMs":0,"responseQueueTimeMs":0.087,"sendTimeMs":0.203,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:59.703 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Received INIT_PRODUCER_ID response from node -1 for request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=2): InitProducerIdResponseData(throttleTimeMs=0, errorCode=0, producerId=0, producerEpoch=0) 17:18:59.703 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":22,"requestApiVersion":4,"correlationId":2,"clientId":"mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe","requestApiKeyName":"INIT_PRODUCER_ID"},"request":{"transactionalId":null,"transactionTimeoutMs":2147483647,"producerId":-1,"producerEpoch":-1},"response":{"throttleTimeMs":0,"errorCode":0,"producerId":0,"producerEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:46798-4","totalTimeMs":30.136,"requestQueueTimeMs":1.236,"localTimeMs":28.708,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.063,"sendTimeMs":0.128,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:59.704 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] ProducerId set to 0 with epoch 0 17:18:59.704 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Transition from state INITIALIZING to READY 17:18:59.705 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:59.705 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:59.706 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:59.706 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:59.706 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:46802 on /127.0.0.1:38099 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:59.706 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:46802 17:18:59.707 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 17:18:59.708 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:59.708 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Completed connection to node 1. Fetching API versions. 17:18:59.708 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:59.708 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:59.708 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:59.709 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:59.709 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:59.709 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:59.709 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:59.709 
[kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to INITIAL 17:18:59.709 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to INTERMEDIATE 17:18:59.709 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:59.710 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:59.710 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:59.710 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:59.710 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to COMPLETE 17:18:59.710 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Finished authentication with no session expiration and no session re-authentication 17:18:59.710 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Successfully authenticated with localhost/127.0.0.1 17:18:59.710 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating API versions fetch from node 1. 
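
The setData request a few lines above carries its payload as hex ('#7b2276657273696f6e...'). Decoding it recovers exactly the JSON that KafkaZkClient reports writing to /latest_producer_id_block; a small self-contained check:

final class ZnodePayloadDecodeSketch {
    static String decodeHex(String hex) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < hex.length(); i += 2) {
            out.append((char) Integer.parseInt(hex.substring(i, i + 2), 16)); // ASCII payload, so char cast is fine
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Hex copied from the ZooKeeper setData request above.
        String hex = "7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374"
                   + "617274223a2230222c22626c6f636b5f656e64223a22393939227d";
        // Prints: {"version":1,"broker":1,"block_start":"0","block_end":"999"}
        System.out.println(decodeHex(hex));
    }
}
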
17:18:59.710 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 17:18:59.712 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 17:18:59.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion"
:0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38099-127.0.0.1:46802-5","totalTimeMs":1.228,"requestQueueTimeMs":0.235,"localTimeMs":0.739,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.091,"sendTimeMs":0.162,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:59.713 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 17:18:59.717 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] ProducerId of partition my-test-topic-0 set to 0 with epoch 0. Reinitialize sequence at beginning. 
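The ProducerId/epoch assignment above, together with the acks=-1 PRODUCE request that follows, indicates the client is sending with idempotence enabled. A short sketch of a send that exercises the same path; the topic name and broker address come from the log, the security settings from the earlier sketch are omitted for brevity, and the key/value are placeholders:

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentSendExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // from the log
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // appears as acks=-1 on the wire
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // producerId/epoch + sequence numbers
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name matches the integration test; key and value are placeholders.
            RecordMetadata md = producer.send(
                new ProducerRecord<>("my-test-topic", "key", "value")).get();
            System.out.printf("appended to %s-%d at offset %d%n",
                md.topic(), md.partition(), md.offset());
        }
    }
}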
17:18:59.717 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.producer.internals.RecordAccumulator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Assigned producerId 0 and producerEpoch 0 to batch with base sequence 0 being sent to partition my-test-topic-0 17:18:59.720 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Sending PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=4) and timeout 30000 to node 1: {acks=-1,timeout=30000,partitionSizes=[my-test-topic-0=106]} 17:18:59.743 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1762449519335, current time: 1762449539743,unflushed: 3 17:18:59.745 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=3 segment=[0:106]) 17:18:59.746 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 20 ms 17:18:59.753 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":0,"requestApiVersion":9,"correlationId":4,"clientId":"mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe","requestApiKeyName":"PRODUCE"},"request":{"transactionalId":null,"acks":-1,"timeoutMs":30000,"topicData":[{"name":"my-test-topic","partitionData":[{"index":0,"recordsSizeInBytes":106}]}]},"response":{"responses":[{"name":"my-test-topic","partitionResponses":[{"index":0,"errorCode":0,"baseOffset":0,"logAppendTimeMs":-1,"logStartOffset":0,"recordErrors":[],"errorMessage":null}]}],"throttleTimeMs":0},"connection":"127.0.0.1:38099-127.0.0.1:46802-5","totalTimeMs":32.101,"requestQueueTimeMs":2.936,"localTimeMs":28.801,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.098,"sendTimeMs":0.264,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:59.753 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Received PRODUCE response from node 1 for request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe, correlationId=4): ProduceResponseData(responses=[TopicProduceResponse(name='my-test-topic', partitionResponses=[PartitionProduceResponse(index=0, errorCode=0, baseOffset=0, logAppendTimeMs=-1, logStartOffset=0, recordErrors=[], errorMessage=null)])], throttleTimeMs=0) 17:18:59.754 [data-plane-kafka-request-handler-0] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 816286608 returning 1 partition(s) 17:18:59.756 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key 
TopicPartitionOperationKey(my-test-topic,0) unblocked 1 Fetch operations 17:18:59.757 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] ProducerId: 0; Set last ack'd sequence number for topic-partition my-test-topic-0 to 2 17:18:59.758 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":65,"clientId":"mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":816286608,"sessionEpoch":31,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":816286608,"responses":[{"topicId":"7trtN51tR76cWI19Q3IoTQ","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":3,"lastStableOffset":3,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":106}]}]},"connection":"127.0.0.1:38099-127.0.0.1:37648-3","totalTimeMs":235.114,"requestQueueTimeMs":0.216,"localTimeMs":1.495,"remoteTimeMs":231.031,"throttleTimeMs":0,"responseQueueTimeMs":0.075,"sendTimeMs":2.295,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 17:18:59.759 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=65): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=816286608, responses=[FetchableTopicResponse(topic='', topicId=7trtN51tR76cWI19Q3IoTQ, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109]))])]) 17:18:59.760 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 816286608 with 1 response partition(s) 17:18:59.760 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109])) 17:18:59.761 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=3, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[localhost:38099 (id: 1 rack: null)], epoch=0}} to node localhost:38099 (id: 1 rack: null) 17:18:59.761 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Built incremental fetch (sessionId=816286608, epoch=32) for node 1. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 17:18:59.761 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(my-test-topic-0), toForget=(), toReplace=(), implied=(), canUseTopicIds=True) to broker localhost:38099 (id: 1 rack: null) 17:18:59.761 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=66) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=32, topics=[FetchTopic(topic='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 17:18:59.762 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 816286608, epoch 33: added 0 partition(s), updated 1 partition(s), removed 0 partition(s) 17:18:59.774 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 17:18:59.774 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Starting controlled shutdown 17:18:59.776 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:59.776 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:59.776 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:59.776 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:59.776 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:46804 on /127.0.0.1:38099 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 17:18:59.776 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:46804 17:18:59.777 [main] DEBUG org.apache.kafka.common.network.Selector - 
[KafkaServer id=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 17:18:59.777 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 17:18:59.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 17:18:59.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 17:18:59.777 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Completed connection to node 1. Ready. 17:18:59.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 17:18:59.777 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 17:18:59.777 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 17:18:59.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 17:18:59.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 17:18:59.778 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INITIAL 17:18:59.778 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INTERMEDIATE 17:18:59.778 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 17:18:59.778 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 17:18:59.778 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 17:18:59.778 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 17:18:59.778 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to COMPLETE 17:18:59.778 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Finished authentication with no session 
expiration and no session re-authentication 17:18:59.778 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with localhost/127.0.0.1 17:18:59.778 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Sending CONTROLLED_SHUTDOWN request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0) and timeout 30000 to node 1: ControlledShutdownRequestData(brokerId=1, brokerEpoch=25) 17:18:59.782 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Shutting down broker 1 17:18:59.783 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] All shutting down brokers: 1 17:18:59.783 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Live brokers: 17:18:59.789 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 17:18:59.792 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Received CONTROLLED_SHUTDOWN response from node 1 for request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0): ControlledShutdownResponseData(errorCode=0, remainingPartitions=[]) 17:18:59.793 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Controlled shutdown request returned successfully after 14ms 17:18:59.793 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":7,"requestApiVersion":3,"correlationId":0,"clientId":"1","requestApiKeyName":"CONTROLLED_SHUTDOWN"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"errorCode":0,"remainingPartitions":[]},"connection":"127.0.0.1:38099-127.0.0.1:46804-5","totalTimeMs":13.441,"requestQueueTimeMs":0.729,"localTimeMs":12.525,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.058,"sendTimeMs":0.128,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 17:18:59.793 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38099-127.0.0.1:46804-5) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 17:18:59.794 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down 17:18:59.795 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown completed 17:18:59.795 [/config/changes-event-process-thread] INFO 
kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Stopped 17:18:59.795 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopping socket server request processors 17:18:59.796 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38099] DEBUG kafka.network.DataPlaneAcceptor - Closing server socket, selector, and any throttled sockets. 17:18:59.796 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector - processor 1 17:18:59.796 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector - processor 0 17:18:59.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38099-127.0.0.1:46802-5 17:18:59.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38099-127.0.0.1:37646-2 17:18:59.798 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:18:59.798 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 
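The fetch traffic above is the consumer in group mso-group reusing incremental fetch session 816286608, and the EOFException stack traces that follow are the expected side effect of the embedded broker beginning its controlled shutdown, not a client fault. A minimal sketch of the consumer side of that loop, assuming the group id and topic from the log and otherwise illustrative settings (SASL configuration as in the earlier sketch omitted):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FetchLoopExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");                // group id from the log
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // Each poll() reuses the broker-side fetch session; the broker only returns
            // partitions whose state changed, which is the "incremental fetch" seen above.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
            }
        }
    }
}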
17:18:59.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38099-127.0.0.1:37636-0 17:18:59.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38099-127.0.0.1:46798-4 17:18:59.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38099-127.0.0.1:37648-3 17:18:59.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38099-127.0.0.1:46800-4 17:18:59.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38099-127.0.0.1:37650-3 17:18:59.799 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74) at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) 17:18:59.799 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 disconnected. 
17:18:59.799 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:18:59.799 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:18:59.799 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node -1 disconnected. 17:18:59.802 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopped socket server request processors 17:18:59.803 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shutting down 17:18:59.807 [data-plane-kafka-request-handler-1] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 received shut down command 17:18:59.807 [data-plane-kafka-request-handler-0] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 received shut down command 17:18:59.808 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shut down completely 17:18:59.809 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 17:18:59.813 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutting down 17:18:59.815 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Stopped 17:18:59.815 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutdown completed 17:18:59.816 [main] INFO kafka.server.KafkaApis - [KafkaApi-1] Shutdown complete. 17:18:59.817 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutting down 17:18:59.818 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Stopped 17:18:59.818 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutdown completed 17:18:59.820 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutting down. 
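At this point the broker's socket server is stopped, so the producer's sender thread can only "give up sending metadata request since no node is available" until the client itself is closed. In a test harness, closing the producer with a bounded timeout around broker teardown keeps this phase quiet; a sketch (the helper name is hypothetical):

import java.time.Duration;
import org.apache.kafka.clients.producer.KafkaProducer;

public class ProducerTeardown {
    // Hypothetical helper: close the producer with a bounded wait so its sender
    // thread stops retrying a broker that has already gone away.
    static void closeQuietly(KafkaProducer<?, ?> producer) {
        producer.flush();                       // push out anything still buffered
        producer.close(Duration.ofSeconds(5));  // bounded wait instead of the default unbounded close()
    }
}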
17:18:59.820 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 17:18:59.821 [main] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Shutdown complete 17:18:59.821 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutting down 17:18:59.822 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Stopped 17:18:59.822 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutdown completed 17:18:59.822 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutdown complete. 17:18:59.822 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:18:59.823 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=2147483646) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:18:59.823 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at 
org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:18:59.823 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutting down. 17:18:59.823 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 17:18:59.823 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:18:59.823 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Cancelled in-flight FETCH request with correlation id 66 due to node 1 being disconnected (elapsed time since creation: 62ms, elapsed time since send: 62ms, request timeout: 30000ms): FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=816286608, sessionEpoch=32, topics=[FetchTopic(topic='my-test-topic', topicId=7trtN51tR76cWI19Q3IoTQ, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 17:18:59.823 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down 17:18:59.824 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node -1 disconnected. 17:18:59.824 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 2147483646 disconnected. 
17:18:59.824 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, correlationId=66) due to node 1 being disconnected 17:18:59.824 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Stopped 17:18:59.824 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutdown completed 17:18:59.824 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Error sending fetch request (sessionId=816286608, epoch=32) to node 1: org.apache.kafka.common.errors.DisconnectException: null 17:18:59.825 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutting down 17:18:59.825 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Group coordinator localhost:38099 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 17:18:59.825 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutdown completed 17:18:59.825 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:18:59.825 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Stopped 17:18:59.825 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutdown complete. 
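The heartbeat thread reporting the group coordinator as unavailable and cancelling the in-flight FETCH (correlationId=66) is likewise a consequence of the broker going away; the consumer will keep trying to rediscover a coordinator until it is closed. The usual way to stop a consumer that may be blocked in poll() is wakeup() from another thread followed by close(); a sketch of that pattern:

import java.time.Duration;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class ConsumerTeardown {
    // Illustrative shutdown pattern: a second thread calls consumer.wakeup(),
    // and the polling thread catches WakeupException and closes the consumer.
    static void pollUntilWoken(KafkaConsumer<String, String> consumer) {
        try {
            while (true) {
                consumer.poll(Duration.ofMillis(500)); // throws WakeupException after wakeup()
            }
        } catch (WakeupException expected) {
            // shutdown signal, nothing to do
        } finally {
            consumer.close(Duration.ofSeconds(5));
        }
    }
}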
17:18:59.826 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shutting down 17:18:59.826 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down 17:18:59.826 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed 17:18:59.826 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped 17:18:59.826 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutting down 17:18:59.827 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutdown completed 17:18:59.827 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutting down 17:18:59.827 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutdown completed 17:18:59.827 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutting down 17:18:59.828 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutdown completed 17:18:59.828 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutting down 17:18:59.828 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Stopped 17:18:59.828 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Stopped 17:18:59.828 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutdown completed 17:18:59.829 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutting down 17:18:59.829 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Stopped 17:18:59.829 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutdown completed 17:18:59.830 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutting down 17:18:59.830 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Stopped 17:18:59.830 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutdown completed 17:18:59.869 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shut down completely 17:18:59.870 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutting down 17:18:59.870 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Stopped 17:18:59.870 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutdown completed 17:18:59.873 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for alterPartition shutdown 17:18:59.873 [main] INFO kafka.server.BrokerToControllerRequestThread - 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutting down 17:18:59.873 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Stopped 17:18:59.873 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutdown completed 17:18:59.874 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for forwarding shutdown 17:18:59.875 [main] INFO kafka.log.LogManager - Shutting down. 17:18:59.876 [main] INFO kafka.log.LogCleaner - Shutting down the log cleaner. 17:18:59.876 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down 17:18:59.877 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped 17:18:59.877 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed 17:18:59.878 [main] DEBUG kafka.log.LogManager - Flushing and closing logs at /tmp/kafka-unit7122242531084360278 17:18:59.880 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520213, current time: 1762449539880,unflushed: 0 17:18:59.881 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.883 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-29/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.886 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-29/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.888 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520421, current time: 1762449539888,unflushed: 0 17:18:59.889 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.889 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-43/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.889 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-43/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.890 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520316, current time: 1762449539890,unflushed: 0 17:18:59.891 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.891 
[log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-0/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.891 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-0/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.892 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520414, current time: 1762449539892,unflushed: 0 17:18:59.893 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.893 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-6/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.893 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-6/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.893 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520323, current time: 1762449539893,unflushed: 0 17:18:59.895 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.895 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-35/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.895 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-35/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.895 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520308, current time: 1762449539895,unflushed: 0 17:18:59.896 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.896 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-30/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.896 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-30/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.897 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520428, current time: 
1762449539897,unflushed: 0 17:18:59.898 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.898 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-13/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.898 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-13/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.898 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519998, current time: 1762449539898,unflushed: 0 17:18:59.899 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.900 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-26/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.900 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-26/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.900 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520399, current time: 1762449539900,unflushed: 0 17:18:59.900 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:18:59.900 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:59.900 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:59.901 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:59.901 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:59.901 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, 
dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.901 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-21/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.901 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-21/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.902 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519922, current time: 1762449539902,unflushed: 0 17:18:59.902 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:18:59.902 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:18:59.902 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
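The ConnectException and the "Broker may not be available" warning show the producer still probing localhost:38099 after the listener has closed; it will continue on its reconnect backoff until the client is closed or its delivery timeout expires. The relevant knobs are standard producer configs; the values below are illustrative only, not what this test uses:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class BoundedRetryConfig {
    static Properties boundedRetrySettings() {
        Properties props = new Properties();
        // Upper bound on the total time a send() may spend retrying, including
        // time spent waiting for a broker to come back.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "15000");
        // Per-request timeout and reconnect backoff; illustrative values only.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        props.put(ProducerConfig.RECONNECT_BACKOFF_MS_CONFIG, "500");
        props.put(ProducerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG, "5000");
        return props;
    }
}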
17:18:59.903 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.903 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-19/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.903 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-19/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.904 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520130, current time: 1762449539904,unflushed: 0 17:18:59.905 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.905 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-25/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.905 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-25/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.906 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519903, current time: 1762449539906,unflushed: 0 17:18:59.907 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.907 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-33/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.907 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-33/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.907 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519887, current time: 1762449539907,unflushed: 0 17:18:59.908 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.909 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-41/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.909 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-41/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.909 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - 
[UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 4 (inclusive)with recovery point 4, last flushed: 1762449538699, current time: 1762449539909,unflushed: 0 17:18:59.909 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.913 [log-closing-/tmp/kafka-unit7122242531084360278] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=__consumer_offsets-37] Wrote producer snapshot at offset 4 with 0 producer ids in 3 ms. 17:18:59.914 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-37/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.914 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-37/00000000000000000000.timeindex to 12, position is 12 and limit is 12 17:18:59.915 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520282, current time: 1762449539915,unflushed: 0 17:18:59.916 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.916 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-8/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.916 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-8/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.916 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520034, current time: 1762449539916,unflushed: 0 17:18:59.918 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.918 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-24/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.918 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-24/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.918 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520008, current time: 1762449539918,unflushed: 0 17:18:59.919 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.919 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit7122242531084360278/__consumer_offsets-49/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.920 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-49/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.920 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 3 (inclusive)with recovery point 3, last flushed: 1762449539745, current time: 1762449539920,unflushed: 0 17:18:59.920 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.923 [log-closing-/tmp/kafka-unit7122242531084360278] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=my-test-topic-0] Wrote producer snapshot at offset 3 with 1 producer ids in 2 ms. 17:18:59.923 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/my-test-topic-0/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.923 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/my-test-topic-0/00000000000000000000.timeindex to 12, position is 12 and limit is 12 17:18:59.923 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519868, current time: 1762449539923,unflushed: 0 17:18:59.924 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.925 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-3/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.925 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-3/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.925 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520155, current time: 1762449539925,unflushed: 0 17:18:59.925 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:18:59.926 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:18:59.926 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:18:59.926 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:18:59.926 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:18:59.927 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.927 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-40/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.927 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-40/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.927 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520376, current time: 1762449539927,unflushed: 0 17:18:59.927 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:18:59.928 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:18:59.928 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
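On the consumer side, the heartbeat thread for groupId=mso-group runs through the same connect/refuse cycle and then reports that there is no broker available for a FindCoordinator request. A minimal sketch of a matching consumer follows; the bootstrap address, group id, client-id prefix and SASL/PLAIN mechanism come from the log, while the credentials and subscription are illustrative assumptions.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerCoordinatorSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // broker is already down
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Placeholder SASL/PLAIN settings mirroring the mechs=[PLAIN] entries above.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            // With no reachable broker the group coordinator cannot be located, so poll()
            // returns empty batches while the client logs
            // "No broker available to send FindCoordinator request".
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}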
17:18:59.928 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:18:59.949 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.949 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-27/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.949 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-27/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.950 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520174, current time: 1762449539950,unflushed: 0 17:18:59.955 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.956 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-17/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.956 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-17/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.956 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520181, current time: 1762449539956,unflushed: 0 17:18:59.957 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.958 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-32/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.958 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-32/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.958 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520016, current time: 1762449539958,unflushed: 0 17:18:59.961 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.961 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-39/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.961 
[log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-39/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.961 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520107, current time: 1762449539961,unflushed: 0 17:18:59.964 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.964 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-2/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.964 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-2/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.965 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520238, current time: 1762449539965,unflushed: 0 17:18:59.967 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.967 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-44/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.967 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-44/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.969 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520392, current time: 1762449539969,unflushed: 0 17:18:59.970 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.970 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-12/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.970 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-12/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.971 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520407, current time: 1762449539971,unflushed: 0 17:18:59.973 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.973 
[log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-36/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.973 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-36/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.974 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520290, current time: 1762449539974,unflushed: 0 17:18:59.976 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.976 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-45/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.976 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-45/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.976 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520073, current time: 1762449539976,unflushed: 0 17:18:59.978 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.978 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-16/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.978 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-16/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.978 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519896, current time: 1762449539978,unflushed: 0 17:18:59.979 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.979 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-10/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.979 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-10/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.979 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519984, current time: 
1762449539979,unflushed: 0 17:18:59.980 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.981 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-11/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.981 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-11/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.981 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520367, current time: 1762449539981,unflushed: 0 17:18:59.982 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.982 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-20/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.982 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-20/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.982 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520165, current time: 1762449539982,unflushed: 0 17:18:59.983 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.983 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-47/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.984 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-47/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.984 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519878, current time: 1762449539984,unflushed: 0 17:18:59.985 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.985 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-18/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.985 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-18/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.985 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520196, current time: 1762449539985,unflushed: 0 17:18:59.986 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.987 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-7/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.987 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-7/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.987 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519912, current time: 1762449539987,unflushed: 0 17:18:59.988 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.988 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-48/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.988 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-48/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.988 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520205, current time: 1762449539988,unflushed: 0 17:18:59.989 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.990 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-22/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.990 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-22/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.990 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520052, current time: 1762449539990,unflushed: 0 17:18:59.991 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.991 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-46/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.991 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG 
kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-46/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.992 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520254, current time: 1762449539992,unflushed: 0 17:18:59.993 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.993 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-23/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.993 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-23/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.993 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520384, current time: 1762449539993,unflushed: 0 17:18:59.994 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.994 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-42/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.995 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-42/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.995 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520435, current time: 1762449539995,unflushed: 0 17:18:59.996 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.996 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-28/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.996 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-28/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.996 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519940, current time: 1762449539996,unflushed: 0 17:18:59.997 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.997 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG 
kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-4/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.997 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-4/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.997 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520042, current time: 1762449539997,unflushed: 0 17:18:59.999 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:18:59.999 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-31/00000000000000000000.index to 0, position is 0 and limit is 0 17:18:59.999 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-31/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:18:59.999 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520350, current time: 1762449539999,unflushed: 0 17:19:00.000 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:19:00.000 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-5/00000000000000000000.index to 0, position is 0 and limit is 0 17:19:00.000 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-5/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:19:00.000 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520063, current time: 1762449540000,unflushed: 0 17:19:00.002 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:19:00.002 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-1/00000000000000000000.index to 0, position is 0 and limit is 0 17:19:00.002 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-1/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:19:00.002 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520298, current time: 1762449540002,unflushed: 0 17:19:00.003 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.004 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:19:00.004 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-15/00000000000000000000.index to 0, position is 0 and limit is 0 17:19:00.004 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-15/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:19:00.004 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520274, current time: 1762449540004,unflushed: 0 17:19:00.005 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:19:00.006 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-38/00000000000000000000.index to 0, position is 0 and limit is 0 17:19:00.006 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-38/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:19:00.006 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449519931, current time: 1762449540006,unflushed: 0 17:19:00.007 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:19:00.007 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-34/00000000000000000000.index to 0, position is 0 and limit is 0 17:19:00.007 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-34/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:19:00.007 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520025, current time: 1762449540007,unflushed: 0 17:19:00.008 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:19:00.008 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-9/00000000000000000000.index to 0, position is 0 and limit is 0 17:19:00.008 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG 
kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-9/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:19:00.008 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit7122242531084360278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1762449520247, current time: 1762449540008,unflushed: 0 17:19:00.009 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit7122242531084360278] Closing log 17:19:00.009 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-14/00000000000000000000.index to 0, position is 0 and limit is 0 17:19:00.010 [log-closing-/tmp/kafka-unit7122242531084360278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit7122242531084360278/__consumer_offsets-14/00000000000000000000.timeindex to 0, position is 0 and limit is 0 17:19:00.010 [main] DEBUG kafka.log.LogManager - Updating recovery points at /tmp/kafka-unit7122242531084360278 17:19:00.014 [main] DEBUG kafka.log.LogManager - Updating log start offsets at /tmp/kafka-unit7122242531084360278 17:19:00.019 [main] DEBUG kafka.log.LogManager - Writing clean shutdown marker at /tmp/kafka-unit7122242531084360278 17:19:00.020 [main] INFO kafka.log.LogManager - Shutdown complete. 17:19:00.021 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutting down 17:19:00.021 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutdown completed 17:19:00.021 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Stopped 17:19:00.021 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Resigning 17:19:00.022 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Unregister BrokerModifications handler for Set(1) 17:19:00.022 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
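At this point the broker's LogManager has flushed and closed every partition under /tmp/kafka-unit7122242531084360278, updated its recovery points and log start offsets, written the clean shutdown marker and reported "Shutdown complete". If one wanted to inspect what the test left on disk, a small sketch like the following would list the per-partition directories (__consumer_offsets-0..49, my-test-topic-0) and the broker's checkpoint files; the path comes from the log, the code itself is illustrative and not part of the build.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class LogDirInspector {
    public static void main(String[] args) throws IOException {
        // Temporary log dir name taken from the entries above; after a clean shutdown it
        // normally holds one directory per partition plus checkpoint/marker files.
        Path logDir = Paths.get("/tmp/kafka-unit7122242531084360278");
        try (Stream<Path> entries = Files.list(logDir)) {
            entries.sorted().forEach(p -> System.out.println(p.getFileName()));
        }
    }
}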
17:19:00.023 [main] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Stopped partition state machine 17:19:00.024 [main] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Stopped replica state machine 17:19:00.024 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutting down 17:19:00.024 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutdown completed 17:19:00.024 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Stopped 17:19:00.026 [main] INFO kafka.controller.KafkaController - [Controller id=1] Resigned 17:19:00.026 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutting down 17:19:00.026 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Stopped 17:19:00.026 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutdown completed 17:19:00.027 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing. 17:19:00.027 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 17:19:00.027 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x1000002daa60000 17:19:00.027 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x1000002daa60000 17:19:00.028 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 291254641044 17:19:00.028 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292392059612 17:19:00.028 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 289571598059 17:19:00.028 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 286729851331 17:19:00.028 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:00.028 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:00.028 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:00.029 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:00.029 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:00.029 [ProcessThread(sid:0 cport:46233):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 285938049344 17:19:00.030 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:00.030 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:00.030 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
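These repeated producer/consumer warnings are simply the clients retrying against a broker that no longer exists. If a test wanted to assert explicitly that the broker is unreachable rather than relying on retry logs, a hypothetical probe with the Kafka AdminClient could look like the sketch below, bounded by request.timeout.ms so it fails fast; this is an illustrative assumption, not something this job does.

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerDownProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // address from the log
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "3000");
        props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, "3000");
        try (AdminClient admin = AdminClient.create(props)) {
            try {
                // Any metadata call will do; with the broker stopped this fails instead of returning.
                admin.listTopics().names().get();
                System.out.println("broker is reachable");
            } catch (ExecutionException | InterruptedException e) {
                // With the embedded broker stopped this ends in a timeout wrapped in
                // ExecutionException, the programmatic counterpart of the
                // "Broker may not be available" warnings above.
                System.out.println("broker unreachable: " + e.getCause());
            }
        }
    }
}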
17:19:00.030 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002daa60000 type:closeSession cxid:0xfe zxid:0x8d txntype:-11 reqpath:n/a 17:19:00.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Removing session 0x1000002daa60000 17:19:00.031 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 17:19:00.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /controller for session 0x1000002daa60000 17:19:00.031 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 17:19:00.031 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002daa60000 17:19:00.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /brokers/ids/1 for session 0x1000002daa60000 17:19:00.031 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/controller for session id 0x1000002daa60000 17:19:00.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8d, Digest in log and actual tree: 285938049344 17:19:00.031 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/controller 17:19:00.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002daa60000 type:closeSession cxid:0xfe zxid:0x8d txntype:-11 reqpath:n/a 17:19:00.031 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002daa60000 17:19:00.031 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 for session id 0x1000002daa60000 17:19:00.032 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002daa60000 17:19:00.032 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for session id 0x1000002daa60000 17:19:00.032 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 17:19:00.032 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids 17:19:00.032 [main-SendThread(127.0.0.1:46233)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002daa60000, packet:: clientPath:null serverPath:null finished:false header:: 254,-11 replyHeader:: 254,141,0 request:: null response:: null 17:19:00.033 [NIOWorkerThread-8] DEBUG org.apache.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:37820 which had sessionid 0x1000002daa60000 17:19:00.032 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x1000002daa60000 17:19:00.033 [main-SendThread(127.0.0.1:46233)] WARN 
org.apache.zookeeper.ClientCnxn - An exception was thrown while closing send thread for session 0x1000002daa60000. org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x1000002daa60000, likely server has closed socket at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290) 17:19:00.053 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:00.053 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:00.053 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:00.053 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:00.053 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:00.054 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:00.054 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 
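The ZooKeeper client shutdown above (the closeSession request, deletion of the ephemeral /controller and /brokers/ids/1 nodes, then the EndOfStreamException WARN from the send thread) is the normal tail end of closing a session while the embedded test server is itself going down. For reference, a bare-bones sketch of opening and closing a session against that server is shown below, assuming only the connect string from the log; the WARN can appear when the server drops the socket before the client finishes tearing down its send thread.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkSessionCloseSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connect string taken from the log (embedded ZooKeeper test server on port 46233).
        ZooKeeper zk = new ZooKeeper("127.0.0.1:46233", 30_000, (WatchedEvent event) -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // close() sends a closeSession request and stops the send/event threads, which is
        // what produces the "Closing session" / "EventThread shut down" entries seen here.
        zk.close();
    }
}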
17:19:00.054 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:00.130 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:00.130 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.133 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:Closed type:None path:null 17:19:00.135 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000002daa60000 17:19:00.135 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000002daa60000 closed 17:19:00.137 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed. 17:19:00.137 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutting down 17:19:00.140 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Stopped 17:19:00.140 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutdown completed 17:19:00.141 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutting down 17:19:00.141 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Stopped 17:19:00.141 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutdown completed 17:19:00.141 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutting down 17:19:00.142 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutdown completed 17:19:00.142 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Stopped 17:19:00.142 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutting down 17:19:00.142 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Stopped 17:19:00.142 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutdown completed 17:19:00.143 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutting down socket server 17:19:00.155 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.167 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutdown completed 17:19:00.167 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 17:19:00.167 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 17:19:00.168 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 17:19:00.169 [main] INFO kafka.server.BrokerTopicStats - Broker and topic stats closed 17:19:00.169 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.server for 1 unregistered 17:19:00.169 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shut down completed 17:19:00.169 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server 17:19:00.170 [NIOServerCxnFactory.SelectorThread-0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method 17:19:00.170 [ConnnectionExpirer] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - ConnnectionExpirerThread interrupted 17:19:00.171 [NIOServerCxnFactory.SelectorThread-1] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method 17:19:00.171 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:46233] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - accept thread exitted run method 17:19:00.172 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - shutting down 17:19:00.172 [main] INFO org.apache.zookeeper.server.RequestThrottler - Shutting down 17:19:00.172 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - Draining request throttler queue 17:19:00.172 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - RequestThrottler shutdown. Dropped 0 requests 17:19:00.172 [main] INFO org.apache.zookeeper.server.SessionTrackerImpl - Shutting down 17:19:00.172 [main] INFO org.apache.zookeeper.server.PrepRequestProcessor - Shutting down 17:19:00.172 [main] INFO org.apache.zookeeper.server.SyncRequestProcessor - Shutting down 17:19:00.173 [ProcessThread(sid:0 cport:46233):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor exited loop! 17:19:00.173 [SyncThread:0] INFO org.apache.zookeeper.server.SyncRequestProcessor - SyncRequestProcessor exited! 
17:19:00.174 [main] INFO org.apache.zookeeper.server.FinalRequestProcessor - shutdown of request processor complete 17:19:00.174 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input stream: /tmp/kafka-unit4057318268231248889/version-2/log.1 17:19:00.174 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input archive: /tmp/kafka-unit4057318268231248889/version-2/log.1 17:19:00.178 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - EOF exception java.io.EOFException: Failed to read /tmp/kafka-unit4057318268231248889/version-2/log.1 at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:771) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.(FileTxnLog.java:650) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:462) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:449) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.fastForwardFromEdits(FileTxnSnapLog.java:321) at org.apache.zookeeper.server.ZKDatabase.fastForwardDataBase(ZKDatabase.java:300) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:848) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:796) at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:922) at org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:219) at org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:144) at org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110) at org.apache.curator.test.TestingServer.stop(TestingServer.java:161) at com.salesforce.kafka.test.ZookeeperTestServer.stop(ZookeeperTestServer.java:129) at com.salesforce.kafka.test.KafkaTestCluster.stop(KafkaTestCluster.java:303) at com.salesforce.kafka.test.KafkaTestCluster.close(KafkaTestCluster.java:312) at org.onap.sdc.utils.SdcKafkaTest.after(SdcKafkaTest.java:65) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$11(ClassBasedTestDescriptor.java:412) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$12(ClassBasedTestDescriptor.java:410) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:410) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:212) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:78) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:149) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:149) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:19:00.178 [Thread-2] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ZooKeeper server is not running, so not proceeding to shutdown! 17:19:00.183 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 17:19:00.183 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.751 s - in org.onap.sdc.utils.SdcKafkaTest [INFO] Running org.onap.sdc.utils.NotificationSenderTest 17:19:00.289 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:00.290 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:00.291 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:00.291 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:00.291 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:00.292 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:00.292 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:00.292 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:00.293 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:00.293 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:00.295 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:00.295 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:00.295 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:00.295 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:00.296 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:00.296 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:00.296 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.408 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:00.409 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.409 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.445 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:00.445 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 17:19:00.446 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 17:19:00.460 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.509 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:00.510 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.510 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up 
sending metadata request since no node is available 17:19:00.561 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.610 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:00.610 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.611 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.662 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.711 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:00.711 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:00.711 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:00.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:00.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:00.712 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.713 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:00.714 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:00.714 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:00.715 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.763 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:00.763 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:00.763 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:00.764 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:00.764 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:00.764 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:00.765 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:00.765 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:00.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:00.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.865 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.871 [SessionTracker] INFO org.apache.zookeeper.server.SessionTrackerImpl - SessionTrackerImpl exited loop! 
17:19:00.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:00.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:00.916 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:00.966 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.016 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.016 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.017 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.068 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.118 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.169 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.217 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, 
groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.219 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.269 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.318 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.318 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.320 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.370 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.418 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.419 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.421 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.457 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:01.457 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 17:19:01.457 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 17:19:01.471 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:01.471 
[kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:01.471 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:01.472 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:01.472 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:01.472 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:01.473 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:01.473 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
17:19:01.519 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:01.519 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:01.519 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:01.519 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:01.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:01.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:01.520 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:01.520 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
17:19:01.521 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.573 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.621 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.621 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.624 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.674 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.722 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.722 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.725 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.775 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.822 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.822 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.826 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.876 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.923 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:01.923 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:01.927 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:01.977 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.023 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:02.024 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:02.028 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.078 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.124 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:02.124 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:02.129 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.179 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.224 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:02.224 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:02.229 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.280 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.325 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:02.325 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:02.330 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.380 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.425 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:02.425 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:02.431 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.458 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendDownloadStatus. Failed to send messages and close publisher. 
org.apache.kafka.common.KafkaException: null 17:19:02.475 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:02.475 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 17:19:02.475 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 17:19:02.476 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus. Failed to send status org.apache.kafka.common.KafkaException: null at org.onap.sdc.utils.kafka.SdcKafkaProducer.send(SdcKafkaProducer.java:65) at org.onap.sdc.utils.NotificationSender.send(NotificationSender.java:47) at org.onap.sdc.utils.NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus(NotificationSenderTest.java:83) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.293 s - in org.onap.sdc.utils.NotificationSenderTest 17:19:02.481 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request [INFO] Running org.onap.sdc.utils.KafkaCommonConfigTest 17:19:02.481 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:02.481 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:02.482 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:02.482 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:02.483 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:02.483 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:02.483 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 s - in org.onap.sdc.utils.KafkaCommonConfigTest [INFO] Running org.onap.sdc.utils.GeneralUtilsTest [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.utils.GeneralUtilsTest [INFO] Running org.onap.sdc.impl.NotificationConsumerTest 17:19:02.601 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:02.607 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.609 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:02.711 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:02.711 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:02.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:02.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:02.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:02.714 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:02.714 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:02.714 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:02.714 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:02.762 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.945 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:02.946 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:02.947 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:02.965 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:19:02.965 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:02.972 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:02.997 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.047 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.047 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.047 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.071 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.098 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.148 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.148 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.148 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.171 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.199 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.248 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.249 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.249 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.270 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 
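The ERROR and KafkaException stack trace a few entries above come from NotificationSenderTest (whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus), which deliberately drives NotificationSender.send() into its failure path, so "Tests run: 3, Failures: 0, Errors: 0" is the expected outcome despite the noisy trace. A minimal sketch of that test pattern, assuming JUnit 5 and Mockito and using hypothetical stand-in types and signatures rather than the real org.onap.sdc.utils classes:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.ArgumentMatchers.any;
    import static org.mockito.Mockito.doThrow;
    import static org.mockito.Mockito.mock;

    import org.apache.kafka.common.KafkaException;
    import org.junit.jupiter.api.Test;

    class NotificationSenderErrorPathSketch {

        // Hypothetical stand-in for org.onap.sdc.utils.kafka.SdcKafkaProducer;
        // the real class and its signatures may differ.
        interface StatusProducer {
            void send(String topic, String key, String message) throws Exception;
        }

        enum Status { SUCCESS, GENERAL_ERROR }

        // Hypothetical stand-in for the send logic in NotificationSender.
        static Status sendStatus(StatusProducer producer, String topic, String message) {
            try {
                producer.send(topic, "status-key", message);
                return Status.SUCCESS;
            } catch (Exception e) {
                // Matches the "Failed to send status" ERROR logged above.
                return Status.GENERAL_ERROR;
            }
        }

        @Test
        void whenSendingThrowsShouldReturnGeneralErrorStatus() throws Exception {
            StatusProducer failing = mock(StatusProducer.class);
            // Simulate the unreachable broker seen throughout this log.
            doThrow(new KafkaException("broker unavailable"))
                    .when(failing).send(any(), any(), any());

            assertEquals(Status.GENERAL_ERROR, sendStatus(failing, "some-topic", "{}"));
        }
    }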
17:19:03.299 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.349 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.349 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.350 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.371 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.400 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.450 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.450 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.450 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.470 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.501 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:03.501 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:03.501 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:03.502 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer 
clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:03.502 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:03.504 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:03.504 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:03.505 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
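The repeating "Initialize connection ... localhost:38099", "Connection refused", "Node 1 disconnected" and "Give up sending metadata request" entries are the producer's network thread retrying metadata fetches against a bootstrap address where no broker is listening during these unit tests; the "Creating SaslClient ... mechs=[PLAIN]" lines together with PlaintextTransportLayer in the stack trace point at SASL_PLAINTEXT with the PLAIN mechanism. A minimal sketch of a producer configured that way (bootstrap address and mechanism taken from the log; client id, credentials and topic are illustrative assumptions, not the project's actual configuration):

    import java.util.Properties;

    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerRetrySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // No broker listens on this port in CI, so the network thread keeps
            // logging the metadata retries seen above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "sketch-producer");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"user\" password=\"secret\";"); // placeholder credentials

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send() only enqueues the record; delivery errors surface later via the
                // returned Future or callback, not from send() itself.
                producer.send(new ProducerRecord<>("some-topic", "{}"));
                producer.flush();
            }
        }
    }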
17:19:03.550 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:03.551 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:03.551 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:03.551 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:03.551 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:03.552 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:03.552 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:03.552 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
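The consumer side shows the same pattern: before it can join the group "mso-group", the consumer must locate its group coordinator, and with no broker reachable the heartbeat thread keeps logging "No broker available to send FindCoordinator request". A minimal sketch of a consumer in that situation (group id and bootstrap address from the log; topic name, credentials and deserializers are illustrative assumptions):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ConsumerCoordinatorSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // from the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");                // from the log
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"user\" password=\"secret\";"); // placeholder credentials

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("notification-topic")); // hypothetical topic
                // poll() drives the FindCoordinator/metadata requests that keep failing in
                // the log; with no broker it simply returns an empty batch.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                System.out.println("records fetched: " + records.count());
            }
        }
    }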
17:19:03.553 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.571 [pool-8-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.604 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.653 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.653 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.655 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.671 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.705 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.753 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.753 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.755 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.771 [pool-8-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.806 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.854 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.854 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.856 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.871 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.907 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.955 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:03.955 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:03.957 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:03.971 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:03.978 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:19:03.978 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:03.980 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.007 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.055 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.055 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.058 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.079 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.108 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.156 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.156 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.159 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.180 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.180 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:19:04.180 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "bugabuga" : "xyz", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactBuga" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "buga.bug", "artifactType" : "BUGA_BUGA", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 17:19:04.196 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": 
"ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 17:19:04.209 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.256 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.256 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.259 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.279 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.309 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:04.310 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:04.310 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:04.310 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:04.310 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:04.311 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at 
org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:04.311 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:04.311 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:04.357 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.357 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.379 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.412 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.457 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.457 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.462 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.479 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.513 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.558 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:04.558 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:04.558 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:04.558 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:04.558 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:04.559 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:04.559 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:04.559 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
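The "received notification from broker" / "sending notification to client" pairs (one above, another further below) show the raw topic payload being deserialized and then re-serialized for the client callback: unknown input fields such as "bugabuga" and "artifactBuga" are dropped on the way through, and the \u003d sequences are just the serializer's HTML-safe escaping of '=' in the base64 checksums. A minimal sketch of that round trip, assuming Gson (whose defaults both ignore unknown fields and escape '=' this way) and hypothetical, trimmed-down model classes rather than the project's actual ones:

    import java.util.List;

    import com.google.gson.Gson;

    public class NotificationParseSketch {

        // Hypothetical, trimmed-down model of the notification payload; the real
        // classes in sdc-distribution-client carry more fields.
        static class Artifact {
            String artifactName;
            String artifactType;
            String artifactURL;
            String artifactChecksum;
            Integer artifactTimeout;
            String artifactUUID;
            String artifactVersion;
        }

        static class Resource {
            String resourceInstanceName;
            String resourceName;
            String resourceVersion;
            String resourceUUID;
            List<Artifact> artifacts;
        }

        static class Notification {
            String distributionID;
            String serviceName;
            String serviceVersion;
            String serviceUUID;
            List<Resource> resources;
        }

        public static void main(String[] args) {
            // Same shape as the broker message in the log; "bugabuga" is an unknown
            // field and is silently ignored during deserialization.
            String fromBroker = "{\"distributionID\":\"bcc7a72e\",\"serviceName\":\"Testnotificationser1\","
                    + "\"bugabuga\":\"xyz\",\"resources\":[{\"resourceInstanceName\":\"testnotificationvf11\","
                    + "\"artifacts\":[{\"artifactName\":\"heat.yaml\",\"artifactType\":\"HEAT\",\"artifactTimeout\":60}]}]}";

            Gson gson = new Gson();
            Notification n = gson.fromJson(fromBroker, Notification.class);

            // Re-serialized for the client callback: the unknown fields are gone, and any
            // '=' characters (e.g. base64 padding in checksums) would be emitted as \u003d.
            System.out.println(gson.toJson(n));
        }
    }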
17:19:04.560 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.563 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.579 [pool-9-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.613 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.660 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.660 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.664 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.679 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.714 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.760 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.760 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.764 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.780 [pool-9-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.815 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.861 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.861 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.865 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.879 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.916 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.961 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:04.961 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:04.966 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:04.979 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:04.986 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:19:04.986 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:04.989 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.017 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.062 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.062 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.067 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node 
is available 17:19:05.088 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.117 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.162 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.162 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.168 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.188 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.189 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:19:05.189 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1", "relatedArtifacts" : [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ] }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1", "relatedArtifacts" : [ "0005bc4a-2c19-452e-be6d-d574a56be4d0", "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ] }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : 
"8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 17:19:05.201 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "sample-xml-alldata-1-1.xml", "artifactType": "YANG_XML", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription": "MyYang", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", "relatedArtifacts": [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifacts": [ "0005bc4a-2c19-452e-be6d-d574a56be4d0", "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "sample-xml-alldata-1-1.xml", "artifactType": "YANG_XML", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription": "MyYang", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", "relatedArtifacts": [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, 
"artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 17:19:05.218 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.269 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.271 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.271 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.288 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.319 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.369 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.372 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:05.372 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:05.372 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:05.373 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:05.373 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:05.375 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:05.375 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:05.375 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
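The consumer's connection settings can be read directly from the entries above: SASL_PLAINTEXT with the PLAIN mechanism against bootstrap server localhost:38099, group mso-group. The sketch below is a minimal, self-contained reproduction of that retry behaviour when nothing is listening on the port; the credentials and topic name are placeholders, not values taken from this build.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BrokerUnavailableDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values visible in the log; the broker at this port is not running,
        // which is why the NetworkClient keeps reporting "Node 1 disconnected".
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"secret\";"); // placeholder credentials

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-notification-topic")); // placeholder topic
            // With no broker listening, poll() returns empty batches while the client
            // retries metadata and FindCoordinator requests in the background.
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}

With this configuration the client logs the same "Give up sending metadata request since no node is available" and "Connection refused" sequence seen throughout this run, because the failure is purely at the TCP level and independent of the SASL handshake.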
17:19:05.375 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.389 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.420 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.471 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:05.471 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:05.471 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:05.471 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:05.472 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:05.472 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:05.473 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 
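The notification payload logged at 17:19:05.189 is plain JSON (the \u003d sequences in the checksum fields are the serializer's HTML-safe escaping of '='). Purely as an illustration, it could be mapped onto simple data holders with Gson as sketched below; the classes mirror the field names in the payload, including its "resoucreType" spelling, but they are not the actual sdc-distribution-client model classes.

import com.google.gson.Gson;
import java.util.List;

public class NotificationParseDemo {
    // Minimal data holders mirroring the JSON keys seen in the log
    // (illustrative only; not the real org.onap.sdc model classes).
    static class Artifact {
        String artifactName;
        String artifactType;
        String artifactURL;
        String artifactChecksum;
        int artifactTimeout;
        String artifactUUID;
        String artifactVersion;
    }

    static class Resource {
        String resourceInstanceName;
        String resourceName;
        String resourceVersion;
        String resoucreType; // spelling matches the payload
        String resourceUUID;
        List<Artifact> artifacts;
    }

    static class Notification {
        String distributionID;
        String serviceName;
        String serviceVersion;
        String serviceUUID;
        String serviceDescription;
        List<Resource> resources;
        List<Artifact> serviceArtifacts;
    }

    public static void main(String[] args) {
        // Abbreviated sample payload in the same shape as the logged notification.
        String json = "{\"distributionID\":\"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416\","
                + "\"serviceName\":\"Testnotificationser1\",\"serviceVersion\":\"1.0\","
                + "\"resources\":[{\"resourceInstanceName\":\"testnotificationvf11\",\"artifacts\":[]}]}";
        Notification n = new Gson().fromJson(json, Notification.class);
        System.out.println(n.serviceName + " has " + n.resources.size() + " resource(s)");
    }
}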
17:19:05.473 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:05.476 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.477 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.488 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.574 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.577 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.577 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.589 [pool-10-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.624 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.675 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.678 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.678 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.688 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.725 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.775 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.778 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.779 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.788 [pool-10-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.826 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.876 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.879 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.879 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.889 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.927 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.977 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:05.979 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:05.980 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:05.988 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:05.999 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:19:05.999 [main] DEBUG 
org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:06.002 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.028 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.078 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.080 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.080 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.101 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.129 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.179 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.181 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.181 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.202 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.202 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:19:06.202 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 17:19:06.207 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 17:19:06.230 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.280 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.281 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, 
groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.281 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.301 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.331 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.381 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:06.381 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:06.381 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:06.381 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:06.382 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:06.382 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.382 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.382 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:06.382 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:06.382 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:06.402 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.482 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.483 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.483 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.501 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.533 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.583 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:06.583 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:06.584 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:06.584 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.584 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:06.584 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:06.585 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:06.585 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:06.585 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
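The recurring "Polling for messages from topic: null" entries come from the consumer worker threads, which keep polling even though the client reports "client was not initialized" and therefore never resolved a topic name. Below is a sketch of the kind of scheduled polling task such pool threads run; it is a generic illustration, not the actual org.onap.sdc.impl.NotificationConsumer implementation.

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch of a notification polling task; 'consumer' and 'topic' would be supplied
// by the client once it is initialized. With the broker down, poll() simply returns
// empty record batches and the task logs the same line again on its next run.
final class PollLoopSketch implements Runnable {
    private final KafkaConsumer<String, String> consumer;
    private final String topic; // shows up as "null" in the log because init never completed

    PollLoopSketch(KafkaConsumer<String, String> consumer, String topic) {
        this.consumer = consumer;
        this.topic = topic;
    }

    @Override
    public void run() {
        System.out.println("Polling for messages from topic: " + topic);
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println("received notification from broker: " + record.value());
        }
    }
}

Running a task like this on a scheduled executor is consistent with the pool-N-thread-M thread names and the java.util.concurrent frames that appear in the stack traces later in this log.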
17:19:06.586 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.601 [pool-11-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.634 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.684 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.686 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.687 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.701 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.735 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.785 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.787 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.788 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.801 [pool-11-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.835 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.886 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.888 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.888 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:06.901 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:06.936 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.987 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:06.989 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:06.989 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.001 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.007 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:19:07.007 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:07.010 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.037 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.087 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.090 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:07.091 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.109 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.138 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - 
[Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.188 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.191 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:07.192 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.210 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.210 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:19:07.210 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "5v1234d8-5b6d-42c4-7t54-47v95n58qb7", "serviceName" : "srv1", "serviceVersion": "2.0", "serviceUUID" : "4e0697d8-5b6d-42c4-8c74-46c33d46624c", "serviceArtifacts":[ { "artifactName" : "ddd.yml", "artifactType" : "DG_XML", "artifactTimeout" : "65", "artifactDescription" : "description", "artifactURL" : "/sdc/v1/catalog/services/srv1/2.0/resources/ddd/3.0/artifacts/ddd.xml" , "resourceUUID" : "4e5874d8-5b6d-42c4-8c74-46c33d90drw" , "checksum" : "15e389rnrp58hsw==" } ]} 17:19:07.214 [pool-12-thread-2] ERROR org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 17:19:07.214 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null java.lang.NullPointerException: null at org.onap.sdc.impl.NotificationCallbackBuilder.buildResourceInstancesLogic(NotificationCallbackBuilder.java:62) at org.onap.sdc.impl.NotificationCallbackBuilder.buildCallbackNotificationLogic(NotificationCallbackBuilder.java:48) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:57) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:07.238 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.289 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.292 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:07.293 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.310 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.339 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.389 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.393 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:07.393 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.410 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.440 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.490 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:07.490 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:07.490 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:07.491 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:07.491 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer 
clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:07.492 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:07.492 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:07.492 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
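The NullPointerException logged at 17:19:07.214 is triggered by the second notification shape seen above: a payload that carries only "serviceArtifacts" and no "resources" array, so buildResourceInstancesLogic ends up iterating a null list. A guard of the following shape would avoid that failure mode; it is an illustrative fix under that assumption, not the actual org.onap.sdc.impl.NotificationCallbackBuilder code.

import java.util.Collections;
import java.util.List;

final class ResourceGuardSketch {

    // Returns an empty list when the notification omits "resources",
    // so downstream iteration cannot throw a NullPointerException.
    static <T> List<T> nullSafe(List<T> list) {
        return list == null ? Collections.emptyList() : list;
    }

    // Hypothetical call site mirroring buildResourceInstancesLogic:
    // for (Resource r : nullSafe(notification.getResources())) { ... }
}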
17:19:07.494 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:07.494 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.509 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.593 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.595 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:07.595 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.610 [pool-12-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.643 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.693 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:07.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:07.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:07.697 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:07.697 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 
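The producer client in these entries (clientId mso-123456-producer-...) cycles through the same connect-and-fail sequence as the consumer. For completeness, a matching producer-side sketch follows, again with placeholder credentials, topic, and payload; with no broker reachable the record is never acknowledged and the network thread keeps logging "Give up sending metadata request".

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class StatusProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38099"); // from the log; nothing is listening here
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer-demo");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"secret\";"); // placeholder credentials

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // Placeholder topic and payload; with no broker reachable the send is never acknowledged.
        producer.send(new ProducerRecord<>("some-status-topic", "{\"status\":\"DOWNLOAD_OK\"}"));
        producer.close(Duration.ofSeconds(5)); // bounded close, since delivery cannot complete
    }
}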
17:19:07.698 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:07.698 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:07.698 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:07.698 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.710 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.743 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.794 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.799 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:07.799 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.810 [pool-12-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.844 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.894 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.899 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:07.900 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:07.910 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:07.945 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:07.995 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.000 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.000 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.009 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.022 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:19:08.022 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:08.026 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.045 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.096 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.100 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.101 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.124 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.146 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.196 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.201 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.202 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.225 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.225 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:19:08.225 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : 
"8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 17:19:08.232 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 17:19:08.247 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.297 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.302 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.302 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.324 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.347 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.398 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.402 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.402 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.425 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.448 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:08.448 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:08.448 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:08.448 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:08.449 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:08.449 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:08.449 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 
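The pair of DEBUG entries above ("received notification from broker" followed by "sending notification to client") shows that the payload is not forwarded verbatim: of the three artifacts in the incoming message (YANG_XML, HEAT, HEAT_ENV), only the HEAT artifact remains, with its HEAT_ENV companion nested as "generatedArtifact". That is consistent with the client filtering artifacts against the types this consumer registered as relevant. The sketch below illustrates that kind of filtering only; the class and field names are invented for the example and just mirror the JSON keys in the log (including the "resoucreType" spelling), Gson is assumed because the "\u003d" escapes above are characteristic of it, and the relevant-type set is an assumption rather than the client's actual configuration.

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    import com.google.gson.Gson;
    import com.google.gson.annotations.SerializedName;

    public class NotificationFilterSketch {

        // Minimal POJOs mirroring the JSON keys seen in the log; hypothetical, not the client's classes.
        static class Artifact {
            String artifactName;
            String artifactType;
            String artifactURL;
            String generatedFromUUID;
        }

        static class Resource {
            String resourceInstanceName;
            @SerializedName("resoucreType")  // spelling matches the wire format in the log
            String resourceType;
            List<Artifact> artifacts;
        }

        static class Notification {
            String distributionID;
            String serviceName;
            List<Resource> resources;
        }

        public static void main(String[] args) {
            // Trimmed-down copy of the "received notification from broker" payload above.
            String json = "{ \"distributionID\": \"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416\","
                + " \"serviceName\": \"Testnotificationser1\","
                + " \"resources\": [ { \"resourceInstanceName\": \"testnotificationvf11\", \"resoucreType\": \"VF\","
                + " \"artifacts\": ["
                + "   { \"artifactName\": \"sample-xml-alldata-1-1.xml\", \"artifactType\": \"YANG_XML\" },"
                + "   { \"artifactName\": \"heat.yaml\", \"artifactType\": \"HEAT\" },"
                + "   { \"artifactName\": \"heat.env\", \"artifactType\": \"HEAT_ENV\" } ] } ] }";

            // Assumed set of artifact types this consumer cares about.
            Set<String> relevantTypes = Set.of("HEAT", "HEAT_ENV");

            Notification notification = new Gson().fromJson(json, Notification.class);
            for (Resource resource : notification.resources) {
                List<Artifact> relevant = resource.artifacts.stream()
                    .filter(a -> relevantTypes.contains(a.artifactType))
                    .collect(Collectors.toList());
                // The YANG_XML artifact is dropped here, matching the "sending notification to client" entry.
                System.out.println(resource.resourceInstanceName + " -> "
                    + relevant.stream().map(a -> a.artifactName).collect(Collectors.toList()));
            }
        }
    }

Running this prints "testnotificationvf11 -> [heat.yaml, heat.env]", which is the same artifact subset the client forwards in the log above.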
17:19:08.450 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:08.503 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.503 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.524 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.549 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.599 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.603 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.603 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.625 [pool-13-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.650 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.700 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.703 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.703 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.724 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.750 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.800 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.803 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:08.804 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.825 [pool-13-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.851 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.902 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:08.904 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:08.904 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:08.904 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:08.905 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:08.905 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:08.905 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:08.906 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:08.906 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:08.906 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:08.924 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:08.952 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.003 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.006 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.007 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.024 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.030 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 17:19:09.030 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:09.033 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.053 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] 
Give up sending metadata request since no node is available 17:19:09.104 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.107 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.107 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.132 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.154 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.205 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.208 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.208 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.232 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.233 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 17:19:09.233 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "serviceArtifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : 
"1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ], "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ] } 17:19:09.241 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": 
"8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } } ] } 17:19:09.255 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.305 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:09.306 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:09.306 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:09.306 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:09.306 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:09.307 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:09.307 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:09.307 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:09.308 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.309 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.332 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.407 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.409 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.409 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.432 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.458 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.508 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.510 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.510 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.532 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.558 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.609 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.611 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.611 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.632 [pool-14-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.659 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.709 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.732 [pool-14-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.760 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.810 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.813 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:09.813 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:09.813 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:09.814 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:09.814 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:09.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:09.815 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 17:19:09.815 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 
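The refused connections above repeat on a fixed cadence: "Give up sending metadata request" roughly every 50 to 100 ms and a fresh connect to localhost:38099 about once per second, which matches the Kafka client's default reconnect backoff (reconnect.backoff.ms, doubling up to reconnect.backoff.max.ms). In this unit test the noise is expected, since no broker is listening on that port. If the retry chatter ever needed to be slowed down, the backoff settings could be raised on both producer and consumer; the values in the sketch below are arbitrary examples, not anything the test sets, and raising the org.apache.kafka log level above DEBUG would silence most of these messages anyway.

    import java.util.Properties;

    import org.apache.kafka.clients.CommonClientConfigs;

    public class ReconnectBackoffSketch {
        // Returns properties that slow the reconnect loop seen in the log.
        // The Kafka defaults are roughly 50 ms (reconnect.backoff.ms), capped at 1 s
        // (reconnect.backoff.max.ms); the numbers below are illustrative only.
        static Properties slowRetryProps() {
            Properties props = new Properties();
            props.put(CommonClientConfigs.RECONNECT_BACKOFF_MS_CONFIG, "1000");
            props.put(CommonClientConfigs.RECONNECT_BACKOFF_MAX_MS_CONFIG, "10000");
            props.put(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG, "1000");
            return props;
        }

        public static void main(String[] args) {
            // These properties would be merged into the producer/consumer Properties
            // before constructing the clients.
            System.out.println(slowRetryProps());
        }
    }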
17:19:09.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.832 [pool-14-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.860 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.911 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:09.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:09.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:09.932 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 17:19:09.961 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.011 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.016 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:10.016 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:10.031 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null [INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.539 s - in org.onap.sdc.impl.NotificationConsumerTest [INFO] Running org.onap.sdc.impl.HeatParserTest 17:19:10.039 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: just text 17:19:10.062 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.112 [kafka-producer-network-thread | 
mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:10.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:10.125 [main] ERROR org.onap.sdc.utils.YamlToObjectConverter - Failed to convert YAML just text to object. org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:org.onap.sdc.utils.heat.HeatConfiguration; exception=No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null in 'string', line 1, column 1: just text ^ at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:336) at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:230) at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:220) at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:174) at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:158) at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:491) at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:470) at org.onap.sdc.utils.YamlToObjectConverter.convertFromString(YamlToObjectConverter.java:113) at org.onap.sdc.utils.heat.HeatParser.getHeatParameters(HeatParser.java:60) at org.onap.sdc.impl.HeatParserTest.testParametersParsingInvalidYaml(HeatParserTest.java:122) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: org.yaml.snakeyaml.error.YAMLException: No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null at org.yaml.snakeyaml.constructor.Constructor$ConstructScalar.construct(Constructor.java:393) at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:332) ... 76 common frames omitted
17:19:10.125 [main] ERROR org.onap.sdc.utils.heat.HeatParser - Couldn't parse HEAT template.
17:19:10.125 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty.
17:19:10.145 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents:
heat_template_version: 2013-05-23
description: Simple template to deploy a stack with two virtual machine instances
parameters:
  image_name_1:
    type: string
    label: Image Name
    description: SCOIMAGE Specify an image name for instance1
    default: cirros-0.3.1-x86_64
  image_name_2:
    type: string
    label: Image Name
    description: SCOIMAGE Specify an image name for instance2
    default: cirros-0.3.1-x86_64
  network_id:
    type: string
    label: Network ID
    description: SCONETWORK Network to be used for the compute instance
    hidden: true
    constraints:
      - length: { min: 6, max: 8 }
        description: Password length must be between 6 and 8 characters.
      - range: { min: 6, max: 8 }
        description: Range description
      - allowed_values:
          - m1.small
          - m1.medium
          - m1.large
        description: Allowed values description
      - allowed_pattern: "[a-zA-Z0-9]+"
        description: Password must consist of characters and numbers only.
      - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
        description: Password must start with an uppercase character.
      - custom_constraint: nova.keypair
        description: Custom description
resources:
  my_instance1:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name_1 }
      flavor: m1.small
      networks:
        - network : { get_param : network_id }
  my_instance2:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name_2 }
      flavor: m1.tiny
      networks:
        - network : { get_param : network_id }
17:19:10.162 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request
17:19:10.163 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
17:19:10.163 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1
17:19:10.163 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Set SASL client state to SEND_APIVERSIONS_REQUEST
17:19:10.163 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
17:19:10.163 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection with localhost/127.0.0.1 (channelId=1) disconnected
java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829)
17:19:10.164 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] INFO org.apache.kafka.clients.NetworkClient - [Producer
clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Node 1 disconnected. 17:19:10.164 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:10.217 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:10.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 17:19:10.228 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 17:19:10.229 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 description: Simple template to deploy a stack with two virtual machine instances 17:19:10.230 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 
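As a reading aid for the HeatParser output above: the parameters map it reports can be produced with plain SnakeYAML along the lines of the following sketch. This is illustrative only and is not the project's org.onap.sdc.utils.heat.HeatParser code; the class name HeatParametersSketch is hypothetical.

import java.util.Collections;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

// Illustrative sketch: load a HEAT template string and return its "parameters"
// section as a nested Map, roughly the shape reported by "Found HEAT parameters" above.
public final class HeatParametersSketch {

    @SuppressWarnings("unchecked")
    public static Map<String, Object> extractParameters(String heatTemplate) {
        Map<String, Object> root = new Yaml().load(heatTemplate);            // whole template as nested maps
        Object parameters = (root == null) ? null : root.get("parameters");  // section may be missing or empty
        return (parameters instanceof Map)
                ? (Map<String, Object>) parameters                           // e.g. image_name_1, image_name_2, network_id
                : Collections.emptyMap();                                    // matches the "wasn't found or is empty" warning
    }
}

The YAMLException earlier in the trace ("No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration") is SnakeYAML's usual complaint when a scalar node is bound to a typed bean that has no matching constructor, which appears to be the negative path these HeatParserTest cases exercise.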
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 s - in org.onap.sdc.impl.HeatParserTest 17:19:10.264 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available [INFO] Running org.onap.sdc.impl.DistributionStatusMessageImplTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.008 s - in org.onap.sdc.impl.DistributionStatusMessageImplTest [INFO] Running org.onap.sdc.impl.NotificationCallbackBuilderTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 s - in org.onap.sdc.impl.NotificationCallbackBuilderTest [INFO] Running org.onap.sdc.impl.SerializationTest 17:19:10.315 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.318 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:10.318 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:10.365 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.106 s - in org.onap.sdc.impl.SerializationTest [INFO] Running org.onap.sdc.impl.DistributionClientDownloadResultTest [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 s - in org.onap.sdc.impl.DistributionClientDownloadResultTest [INFO] Running org.onap.sdc.impl.ConfigurationValidatorTest [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in org.onap.sdc.impl.ConfigurationValidatorTest [INFO] Running org.onap.sdc.impl.DistributionClientTest 17:19:10.409 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:19:10.415 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.417 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 17:19:10.417 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 17:19:10.418 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@4358fb67 17:19:10.419 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up 
sending metadata request since no node is available 17:19:10.419 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:10.422 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:19:10.425 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer 
clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Instantiated an idempotent producer. 17:19:10.428 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:19:10.428 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:19:10.428 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449550428 17:19:10.428 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Starting Kafka producer I/O thread. 17:19:10.428 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Transition from state UNINITIALIZED to INITIALIZING 17:19:10.428 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.428 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Kafka producer started 17:19:10.428 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 17:19:10.429 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.429 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:19:10.429 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.429 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.429 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:19:10.429 [main] WARN org.onap.sdc.impl.DistributionClientImpl - distribution client already initialized 17:19:10.431 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.network.Selector - [Producer 
clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.431 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Node -1 disconnected. 17:19:10.431 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:19:10.431 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.431 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829)
17:19:10.432 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient
17:19:10.434 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.434 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD]
17:19:10.434 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.435 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD]
17:19:10.435 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.435 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME]
17:19:10.435 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.436 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME]
17:19:10.436 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.436 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN]
17:19:10.436 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.436 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN]
17:19:10.437 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.437 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_INVALID_SDC_FQDN, responseMessage=configuration is invalid: CONF_INVALID_SDC_FQDN]
17:19:10.437 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.437 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID]
17:19:10.438 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.438 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID]
17:19:10.438 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.438 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid:
CONF_MISSING_ENVIRONMENT_NAME]
17:19:10.438 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.439 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME]
17:19:10.439 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient
17:19:10.439 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized
isUseHttpsWithSDC set to true
17:19:10.440 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
17:19:10.466 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available
17:19:10.481 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 8d334876-1327-4c99-ae38-0fc0e521c58a url= /sdc/v1/artifactTypes
17:19:10.481 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://badhost:8080/sdc/v1/artifactTypes
17:19:10.516 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available
17:19:10.519 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available
17:19:10.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request
17:19:10.532 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1)
17:19:10.532 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request
17:19:10.532 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
17:19:10.532 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1
17:19:10.532 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818]
Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.532 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.533 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.534 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Node -1 disconnected. 17:19:10.534 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:19:10.534 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.534 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.537 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: badhost: System error at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$cM5ffwfS.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at 
org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:189) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:19:10.538 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@5e4ebc15 17:19:10.538 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 17:19:10.538 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 17:19:10.539 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:19:10.565 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 2db18275-500b-4c00-b166-e6652ba84605 url= /sdc/v1/artifactTypes 17:19:10.565 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://localhost:8181/sdc/v1/artifactTypes 17:19:10.569 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$cM5ffwfS.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at 
org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:195) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 
98 common frames omitted 17:19:10.569 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@12335ae5 17:19:10.569 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 17:19:10.569 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 17:19:10.570 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.570 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:10.570 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.572 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:19:10.573 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 17:19:10.573 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 17:19:10.573 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@c1dedda 17:19:10.573 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope 
sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:19:10.574 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Instantiated an idempotent producer. 17:19:10.577 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:19:10.577 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:19:10.577 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449550577 17:19:10.577 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Kafka producer started DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 17:19:10.577 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Starting Kafka producer I/O thread. 
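The ProducerConfig dump above is the complete set of settings the distribution client's Kafka producer starts with in this test run: bootstrap server localhost:9092, SASL_PLAINTEXT with the PLAIN mechanism, idempotence enabled (hence acks = -1), and String serializers for key and value. The following is a minimal standalone Java sketch of an equivalent producer built directly with the public Kafka client API; it is not the sdc-distribution-client's internal wiring, and the JAAS username/password are placeholders because the log prints sasl.jaas.config as [hidden].

// Minimal sketch (not the client's internal code): a producer with the same externally
// visible settings as the ProducerConfig dump above. JAAS credentials are placeholders.
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerConfigSketch {
    public static KafkaProducer<String, String> build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");          // from the dump above
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer-example");     // illustrative client id
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);                     // implies acks = -1 (all)
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<user>\" password=\"<password>\";");                     // placeholders
        return new KafkaProducer<>(props);
    }
}

Note that constructing the producer succeeds even though nothing is listening on port 9092 (the entry above still reports "Kafka producer started"); only the background sender thread fails once it tries to reach the bootstrap broker, which is the retry loop visible in the entries that follow.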
17:19:10.577 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Transition from state UNINITIALIZED to INITIALIZING 17:19:10.577 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.578 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:19:10.578 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.578 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.578 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:19:10.578 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.578 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.580 [main] INFO org.onap.sdc.impl.DistributionClientImpl - start DistributionClient 17:19:10.580 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:10.580 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at 
org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.580 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Node -1 disconnected. 17:19:10.580 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:19:10.580 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.580 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.581 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.581 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:10.585 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.585 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:10.586 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 17:19:10.586 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 17:19:10.586 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.586 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:10.587 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:19:10.592 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= ccc3f5eb-67e0-4cba-9030-30ab59c723b7 url= /sdc/v1/artifactTypes 17:19:10.592 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://badhost:8080/sdc/v1/artifactTypes 17:19:10.620 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:10.620 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:10.621 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.634 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.634 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:19:10.634 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.634 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:19:10.635 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.635 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.636 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at 
org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.636 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Node -1 disconnected. 17:19:10.636 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:19:10.636 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.636 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.671 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.681 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.681 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:19:10.681 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.681 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:19:10.681 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.681 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.682 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.683 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Node -1 disconnected. 17:19:10.683 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:19:10.683 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.683 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.721 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:10.721 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:10.722 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.737 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.737 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.737 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Give up sending metadata request since no node is available 17:19:10.772 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.783 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.783 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:19:10.784 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG 
org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.784 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:19:10.784 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.784 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.785 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.785 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Node -1 disconnected. 17:19:10.785 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 
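The repeating cycle above (Initiating connection ... Connection refused ... Node -1 disconnected ... Broker may not be available ... Going to back off and retry) is the producer's sender thread probing the bootstrap broker at localhost:9092, where no broker is running during this unit test. A plain-socket probe, sketched below as a standalone diagnostic and not part of the test code, raises the same java.net.ConnectException and is a quick way to confirm whether a broker port is actually reachable.

// Standalone diagnostic sketch: probe the bootstrap address the producer keeps retrying.
// With no broker listening on the port this throws java.net.ConnectException
// ("Connection refused"), the same failure wrapped in the Selector/Sender entries above.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerPortProbe {
    public static void main(String[] args) {
        String host = "localhost";
        int port = 9092;                                               // bootstrap.servers from the config dump
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 1_000);  // 1 second connect timeout
            System.out.println("Broker port is reachable: " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Broker port not reachable: " + e);     // ConnectException lands here
        }
    }
}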
17:19:10.785 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.785 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.787 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.787 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Give up sending metadata request since no node is available 17:19:10.796 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: proxy: System error at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$cM5ffwfS.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:207) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:19:10.797 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@1d0fc433 17:19:10.797 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 17:19:10.797 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 17:19:10.797 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:19:10.800 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 199dca84-bb7c-4aa4-8425-daa001562013 url= /sdc/v1/artifactTypes 17:19:10.800 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8181/sdc/v1/artifactTypes 17:19:10.802 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: proxy at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$cM5ffwfS.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at 
org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:214) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 17:19:10.802 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@68496e76 17:19:10.802 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 17:19:10.803 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 17:19:10.803 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.803 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:10.805 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.805 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:10.806 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 17:19:10.807 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 17:19:10.807 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 17:19:10.807 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 17:19:10.807 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.807 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 17:19:10.809 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 17:19:10.809 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. 
value should be greater than or equals to 15 17:19:10.809 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 17:19:10.810 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 17:19:10.810 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 17:19:10.810 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@4aeced7a 17:19:10.810 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 
transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:19:10.811 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Instantiated an idempotent producer. 17:19:10.813 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:19:10.814 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:19:10.814 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449550813 17:19:10.814 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Starting Kafka producer I/O thread. 17:19:10.814 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Kafka producer started 17:19:10.814 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Transition from state UNINITIALIZED to INITIALIZING 17:19:10.814 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.814 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:19:10.814 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.815 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:19:10.815 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.815 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.817 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Connection with localhost/127.0.0.1 (channelId=-1) 
disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.817 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Node -1 disconnected. 17:19:10.817 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:19:10.817 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.817 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.821 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initialize connection to node localhost:38099 (id: 1 rack: null) for sending metadata request 17:19:10.821 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.821 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Initiating connection to node localhost:38099 (id: 1 rack: null) using address localhost/127.0.0.1 17:19:10.821 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.821 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.822 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 17:19:10.822 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Node 1 disconnected. 
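[Editor's note] The ProducerConfig dump and the repeated "Connection refused" retries above come from the unit tests constructing Kafka producers against a bootstrap server (localhost:9092) that is not running in this stage. For orientation only, a minimal sketch of a producer configured with the same security settings is shown below; this is not the client's actual SdcKafkaProducer wiring, and the JAAS credentials are placeholders because the sasl.jaas.config value is [hidden] in the log.

```java
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch only: mirrors the ProducerConfig values logged above (kafka-clients 3.3.x).
public class ProducerConfigSketch {

    public static KafkaProducer<String, String> build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // unreachable in this run
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Placeholder credentials; the real sasl.jaas.config is hidden in the log above.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"changeme\";");
        return new KafkaProducer<>(props);
    }
}
```

With no broker listening on that port, the sender thread cycles through the "Initiating connection" / "Connection refused" / "back off and retry" messages seen above, which is why the same stack trace repeats for each producer the tests create.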
17:19:10.822 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38099) could not be established. Broker may not be available. 17:19:10.822 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:10.822 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.837 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.837 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Give up sending metadata request since no node is available Configuration [sdcAddress=localhost:8443, user=mso-user, password=password, useHttpsWithSDC=true, pollingInterval=15, sdcStatusTopicName=SDC-DISTR-STATUS-TOPIC-AUTO, sdcNotificationTopicName=SDC-DISTR-NOTIF-TOPIC-AUTO, pollingTimeout=20, relevantArtifactTypes=[HEAT], consumerGroup=mso-group, environmentName=PROD, comsumerID=mso-123456, keyStorePath=src/test/resources/etc/sdc-user-keystore.jks, trustStorePath=src/test/resources/etc/sdc-user-truststore.jks, activateServerTLSAuth=true, filterInEmptyResources=false, consumeProduceStatusTopic=false, useSystemProxy=false, httpProxyHost=proxy, httpProxyPort=8080, httpsProxyHost=null, httpsProxyPort=0] 17:19:10.849 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.851 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 17:19:10.851 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 17:19:10.852 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 17:19:10.852 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.446 s - in org.onap.sdc.impl.DistributionClientTest 17:19:10.873 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.885 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer 
clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.886 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.886 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Give up sending metadata request since no node is available 17:19:10.888 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:19:10.888 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.888 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:19:10.888 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.888 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.889 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at 
org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.889 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Node -1 disconnected. 17:19:10.889 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:19:10.889 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.889 [kafka-producer-network-thread | mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-4d9d7860-5131-43aa-b300-40227d3b9818] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.918 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.918 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 17:19:10.918 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 17:19:10.918 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 17:19:10.918 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Set SASL client state to SEND_APIVERSIONS_REQUEST 17:19:10.919 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 17:19:10.919 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.920 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Node -1 disconnected. 17:19:10.920 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 17:19:10.920 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 17:19:10.920 [kafka-producer-network-thread | mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-6f9654ca-3a16-4935-9506-253f4a8a3c7b] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 17:19:10.922 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] Give up sending metadata request since no node is available 17:19:10.922 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-b612c5ce-3a75-4820-8832-3a77f0ced853, groupId=mso-group] No broker available to send FindCoordinator request 17:19:10.923 [kafka-producer-network-thread | mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d853aaee-6c93-4833-94cb-ee75441fb5fe] Give up sending metadata request since no node is available 17:19:10.936 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 17:19:10.936 [kafka-producer-network-thread | mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-7faea11d-4cbe-4a42-9c61-27fa7657f711] Give up sending metadata request since no node is available [INFO] [INFO] Results: [INFO] [INFO] Tests run: 72, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client --- [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec [INFO] Analyzed bundle 'sdc-distribution-client' with 48 classes [INFO] [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client --- [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT.jar [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client --- [INFO] No previous run data found, generating javadoc. [INFO] Loading source files for package org.onap.sdc.api.consumer... Loading source files for package org.onap.sdc.api... Loading source files for package org.onap.sdc.api.notification... Loading source files for package org.onap.sdc.api.results... Loading source files for package org.onap.sdc.http... Loading source files for package org.onap.sdc.utils... Loading source files for package org.onap.sdc.utils.kafka... Loading source files for package org.onap.sdc.utils.heat... Loading source files for package org.onap.sdc.impl... Constructing Javadoc information... Standard Doclet version 11.0.16 Building tree for all the packages and classes... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionStatusMessageJsonBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IComponentDoneStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessageBasic.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IFinalDistrStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/INotificationCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IStatusCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/INotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IStatusData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IVfModuleMetadata.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/StatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientDownloadResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpRequestFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClient.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClientException.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/IHttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcConnectorClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcUrls.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/Configuration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ConfigurationValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientDownloadResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionStatusMessageJsonBuilderFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/JsonContainerResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationCallbackBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/StatusDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/CaseInsensitiveMap.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionActionResultEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionClientConstants.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionStatusEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/GeneralUtils.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/NotificationSender.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Pair.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Wrapper.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/YamlToObjectConverter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameterConstraint.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParser.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaCommonConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaDataResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaConsumer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaProducer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-summary.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/constant-values.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/serialized-form.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessageBasic.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IStatusCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IFinalDistrStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/INotificationCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IComponentDoneStatusMessage.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionStatusMessageJsonBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IVfModuleMetadata.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IStatusData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/INotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/StatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientDownloadResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClientException.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcUrls.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpRequestFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcConnectorClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/IHttpSdcClient.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/NotificationSender.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/CaseInsensitiveMap.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Wrapper.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/YamlToObjectConverter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionActionResultEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionClientConstants.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/GeneralUtils.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionStatusEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Pair.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaConsumer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaProducer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaCommonConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaDataResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameterConstraint.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParser.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionStatusMessageJsonBuilderFactory.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ConfigurationValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientDownloadResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationCallbackBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/StatusDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/JsonContainerResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/Configuration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-use.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-use.html... Building index for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index-all.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses-index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allpackages-index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/deprecated-list.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/help-doc.html... [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client --- [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. 
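[Editor's note] The javadoc step above enumerates the client's public API (IDistributionClient, IConfiguration, INotificationCallback, DistributionClientFactory, and so on). As a rough sketch of how the DistributionClientTest output earlier in this log fits together: init() validates the supplied IConfiguration and reports problems such as CONF_MISSING_USERNAME through the returned result rather than by throwing. The method shapes below (createDistributionClient, init(conf, callback), start(), stop()) are assumptions based on those class names and the log lines, not a verified API reference.

```java
import org.onap.sdc.api.IDistributionClient;
import org.onap.sdc.api.consumer.IConfiguration;
import org.onap.sdc.api.consumer.INotificationCallback;
import org.onap.sdc.api.results.IDistributionClientResult;
import org.onap.sdc.impl.DistributionClientFactory;

// Sketch only: the lifecycle the tests appear to exercise; method names are assumed from the
// javadoc listing above and the "stop DistributionClient" / CONF_MISSING_USERNAME log lines.
public class ClientLifecycleSketch {

    public static void run(IConfiguration conf, INotificationCallback callback) {
        IDistributionClient client = DistributionClientFactory.createDistributionClient();

        // A configuration missing mandatory fields (e.g. the user name) is reported via the
        // result object, which is what produces the CONF_MISSING_USERNAME errors logged above.
        IDistributionClientResult initResult = client.init(conf, callback);
        System.out.println(initResult);

        client.start(); // begins polling for SDC distribution notifications
        client.stop();  // logged above as "stop DistributionClient"
    }
}
```

In this pairwise stage the real broker and mock SDC are only brought up later by Testcontainers (see the ClientInitializerTest section below), which is why the unit tests above run against an unreachable localhost:9092.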
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT.jar [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT.pom [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar [INFO] [INFO] ------< org.onap.sdc.sdc-distribution-client:sdc-distribution-ci >------ [INFO] Building sdc-distribution-ci 2.1.2-SNAPSHOT [3/3] [INFO] --------------------------------[ jar ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-ci --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-ci --- [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-ci --- [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-ci --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 1 resource [INFO] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-ci --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 10 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java uses or overrides a deprecated API. 
[INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-ci --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 2 resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-ci --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 2 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/test-classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java uses or overrides a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-ci --- [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.onap.test.core.service.ClientInitializerTest EnvironmentVariableExtension: This extension uses reflection to mutate JDK-internal state, which is fragile. Check the Javadoc or documentation for more details. 17:19:17.191 [main] WARN org.testcontainers.utility.TestcontainersConfiguration - Attempted to read Testcontainers configuration file at file:/home/jenkins/.testcontainers.properties but the file was not found. Exception message: FileNotFoundException: /home/jenkins/.testcontainers.properties (No such file or directory) 17:19:17.199 [main] INFO org.testcontainers.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor') 17:19:18.156 [main] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with local Unix socket (unix:///var/run/docker.sock) 17:19:18.169 [main] INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost 17:19:18.221 [main] INFO org.testcontainers.DockerClientFactory - Connected to docker: Server Version: 20.10.18 API Version: 1.41 Operating System: Ubuntu 18.04.6 LTS Total Memory: 32167 MB 17:19:18.260 [main] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling docker image: testcontainers/ryuk:0.3.3. Please be patient; this may take some time but only needs to be done once. 17:19:18.270 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: testcontainers/ryuk:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. 
Exception message: /home/jenkins/.docker/config.json (No such file or directory) 17:19:18.869 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Starting to pull image 17:19:18.904 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 17:19:19.347 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 2 pending, 1 downloaded, 0 extracted, (326 KB/? MB) 17:19:19.398 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 1 pending, 2 downloaded, 0 extracted, (326 KB/? MB) 17:19:19.399 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 0 extracted, (326 KB/5 MB) 17:19:19.708 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 1 extracted, (2 MB/5 MB) 17:19:19.992 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 2 extracted, (2 MB/5 MB) 17:19:20.325 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 3 extracted, (5 MB/5 MB) 17:19:20.521 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pull complete. 3 layers, pulled in 1s (downloaded 5 MB at 5 MB/s) 17:19:22.771 [main] INFO org.testcontainers.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit 17:19:22.771 [main] INFO org.testcontainers.DockerClientFactory - Checking the system... 17:19:22.771 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0 17:19:22.853 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker environment should have more than 2GB free disk space 17:19:22.859 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling docker image: confluentinc/cp-kafka:6.2.1. Please be patient; this may take some time but only needs to be done once. 17:19:23.311 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Starting to pull image 17:19:23.312 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 17:19:23.462 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 10 pending, 1 downloaded, 0 extracted, (1 KB/? MB) 17:19:23.782 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 9 pending, 2 downloaded, 0 extracted, (20 MB/? MB) 17:19:24.526 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 8 pending, 3 downloaded, 0 extracted, (96 MB/? MB) 17:19:25.019 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 7 pending, 4 downloaded, 0 extracted, (166 MB/? MB) 17:19:25.181 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 6 pending, 5 downloaded, 0 extracted, (190 MB/? MB) 17:19:25.247 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 5 pending, 6 downloaded, 0 extracted, (196 MB/? MB) 17:19:25.304 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 4 pending, 7 downloaded, 0 extracted, (203 MB/? 
MB) 17:19:25.378 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 3 pending, 8 downloaded, 0 extracted, (203 MB/? MB) 17:19:25.555 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 0 extracted, (217 MB/? MB) 17:19:25.723 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 1 extracted, (217 MB/? MB) 17:19:25.927 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 2 extracted, (225 MB/? MB) 17:19:26.911 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 2 extracted, (329 MB/? MB) 17:19:27.572 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 2 extracted, (364 MB/370 MB) 17:19:31.453 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 3 extracted, (364 MB/370 MB) 17:19:31.750 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 4 extracted, (366 MB/370 MB) 17:19:31.889 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 5 extracted, (366 MB/370 MB) 17:19:32.289 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 6 extracted, (368 MB/370 MB) 17:19:32.492 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 7 extracted, (368 MB/370 MB) 17:19:32.644 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 8 extracted, (368 MB/370 MB) 17:19:32.821 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 9 extracted, (368 MB/370 MB) 17:19:33.703 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 10 extracted, (370 MB/370 MB) 17:19:33.993 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 11 extracted, (370 MB/370 MB) 17:19:34.160 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pull complete. 11 layers, pulled in 10s (downloaded 370 MB at 37 MB/s) 17:19:34.169 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Creating container for image: confluentinc/cp-kafka:6.2.1 17:19:40.831 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 is starting: c8f944d44e684802064ab90fc35a9fb478de9add3e139aaf6744252bbfd49b47 17:19:45.709 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 started in PT22.853433S 17:19:47.644 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling docker image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master. Please be patient; this may take some time but only needs to be done once. 17:19:47.645 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: nexus3.onap.org:10001/onap/onap-component-mock-sdc:latest, configFile: /home/jenkins/.docker/config.json. 
Falling back to docker-java default behaviour. Exception message: /home/jenkins/.docker/config.json (No such file or directory) 17:19:48.403 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Starting to pull image 17:19:48.405 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 17:19:48.903 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 0 extracted, (62 KB/5 MB) 17:19:49.079 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 1 extracted, (5 MB/5 MB) 17:19:49.157 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Creating container for image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master 17:19:49.563 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master is starting: 544d8699d228dd9fc96f4c31965ceeee48b7b169361cbf51dbfe6c00ed9118e4 17:19:50.070 [main] INFO org.testcontainers.containers.wait.strategy.HttpWaitStrategy - /frosty_goldwasser: Waiting for 60 seconds for URL: http://localhost:49155/sdc/v1/artifactTypes (where port 49155 maps to container port 30206) 17:19:50.084 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master started in PT2.44233S 17:19:51.202 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:43219] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:19:51.295 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Instantiated an idempotent producer. 17:19:51.339 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 17:19:51.377 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:19:51.377 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:19:51.377 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449591374 17:19:51.382 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client initialized successfully 17:19:51.382 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 17:19:51.382 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 17:19:51.399 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [localhost:43219] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = noapp group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 
30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:19:51.453 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:19:51.453 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:19:51.453 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449591453 17:19:51.454 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Subscribed to topic(s): SDC-DIST-NOTIF-TOPIC 17:19:51.457 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client started successfully 17:19:51.457 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 17:19:51.457 [pool-1-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: SDC-DIST-NOTIF-TOPIC 17:19:51.912 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] INFO org.apache.kafka.clients.Metadata - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Cluster ID: tMsxq9PXRKmhmJFYunQS0Q 17:19:51.913 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Error while fetching metadata with correlation id 2 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 17:19:51.913 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - 
[Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Cluster ID: tMsxq9PXRKmhmJFYunQS0Q 17:19:51.914 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] ProducerId set to 0 with epoch 0 17:19:52.031 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Error while fetching metadata with correlation id 4 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 17:19:52.135 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Error while fetching metadata with correlation id 6 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 17:19:52.237 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Error while fetching metadata with correlation id 8 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 17:19:52.246 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Discovered group coordinator localhost:43219 (id: 2147483646 rack: null) 17:19:52.256 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] (Re-)joining group 17:19:52.292 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Request joining group due to: need to re-join with the given member-id: dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80-6cfc888d-636c-4b6a-ba9b-0beedd1d91f0 17:19:52.292 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:19:52.292 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] (Re-)joining group 17:19:52.311 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Successfully joined group with generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80-6cfc888d-636c-4b6a-ba9b-0beedd1d91f0', protocol='range'} 17:19:52.339 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Error while fetching metadata with correlation id 13 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 17:19:52.341 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Finished assignment for group at generation 1: {dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80-6cfc888d-636c-4b6a-ba9b-0beedd1d91f0=Assignment(partitions=[])} 17:19:52.391 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Successfully synced group in generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80-6cfc888d-636c-4b6a-ba9b-0beedd1d91f0', protocol='range'} 17:19:52.391 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Notifying assignor about the new Assignment(partitions=[]) 17:19:52.391 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Adding newly assigned partitions: 17:19:52.441 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Error while fetching metadata with correlation id 15 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 17:19:52.459 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [PLAINTEXT://localhost:43219] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = producer-1 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 
retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:19:52.461 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Instantiated an idempotent producer. 
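The ProducerConfig dump above belongs to the test harness producer (client.id producer-1) that injects a notification onto SDC-DIST-NOTIF-TOPIC using String serializers over SASL_PLAINTEXT/PLAIN. A hedged sketch of building an equivalent producer follows; the broker address, JAAS credentials and payload are placeholders (the real run uses the ephemeral Testcontainers port localhost:43219 and a sasl.jaas.config that the log prints as [hidden]).

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class NotificationProducerSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder broker address; the log shows an ephemeral Testcontainers port instead
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Security settings as in the dump; the credentials below are assumptions
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"user\" password=\"secret\";");

            String notificationJson = "{\"distributionID\":\"example\"}"; // placeholder payload

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Topic name taken from the log; get() blocks until the record is acknowledged
                producer.send(new ProducerRecord<>("SDC-DIST-NOTIF-TOPIC", notificationJson)).get();
            }
        }
    }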
17:19:52.468 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 17:19:52.469 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 17:19:52.469 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1762449592468 17:19:52.545 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Error while fetching metadata with correlation id 16 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 17:19:52.586 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {SDC-DIST-NOTIF-TOPIC=LEADER_NOT_AVAILABLE} 17:19:52.586 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: tMsxq9PXRKmhmJFYunQS0Q 17:19:52.587 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 17:19:52.650 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to yuzRy_YjSReqtVCnn_DZhA 17:19:52.652 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Request joining group due to: cached metadata has changed from (version6: {}) at the beginning of the rebalance to (version9: {SDC-DIST-NOTIF-TOPIC=1}) 17:19:52.653 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Revoke previously assigned partitions 17:19:52.654 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] (Re-)joining group 17:19:52.660 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Successfully joined group with generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80-6cfc888d-636c-4b6a-ba9b-0beedd1d91f0', protocol='range'} 17:19:52.661 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Finished assignment for group at generation 2: {dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80-6cfc888d-636c-4b6a-ba9b-0beedd1d91f0=Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0])} 17:19:52.666 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Successfully synced group in generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80-6cfc888d-636c-4b6a-ba9b-0beedd1d91f0', protocol='range'} 17:19:52.667 
[pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Notifying assignor about the new Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0]) 17:19:52.670 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Adding newly assigned partitions: SDC-DIST-NOTIF-TOPIC-0 17:19:52.684 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Found no committed offset for partition SDC-DIST-NOTIF-TOPIC-0 17:19:52.706 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Resetting offset for partition SDC-DIST-NOTIF-TOPIC-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43219 (id: 1 rack: null)], epoch=0}}. 17:19:52.707 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to yuzRy_YjSReqtVCnn_DZhA 17:19:52.780 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. 17:19:52.785 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 17:19:52.793 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 17:19:52.794 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 17:19:52.794 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.producer for producer-1 unregistered 17:19:52.796 [main] INFO org.onap.test.core.service.ClientInitializerTest - Waiting for artifacts 17:19:52.823 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:52.824 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/k8s-tca-clamp-policy-05082019.yaml", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:19:52.868 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Error while fetching metadata with correlation id 4 : {SDC-DIST-STATUS-TOPIC=LEADER_NOT_AVAILABLE} 17:19:52.972 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] INFO org.apache.kafka.clients.Metadata - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Resetting the last seen epoch of partition SDC-DIST-STATUS-TOPIC-0 to 0 since the associated topicId changed from null to O7uuVJzgQ_2p-qmxLcOgfg 17:19:53.976 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - 
DistributionClient - sendStatus 17:19:53.976 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vf-license-model.xml", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:19:54.979 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:54.979 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/base_template.env", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:19:55.981 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:55.982 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vlb_cds68b6da5968e40_modules.json", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:19:56.985 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:56.985 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/", "status": "NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:19:57.987 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:57.987 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vdns.env", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:19:58.988 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:58.989 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vendor-license-model.xml", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:19:59.990 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:19:59.991 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/", "status": "NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:20:00.992 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:20:00.992 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 
"distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vlb.env", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:20:01.993 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:20:01.993 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vpkg.env", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:20:02.995 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:20:02.995 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/", "status": "NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:20:03.996 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:20:03.997 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/", "status": "NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:20:04.998 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:20:04.998 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/artifacts/service-DemovlbCds-template.yml", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:20:05.999 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 17:20:05.999 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1762449591457, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/artifacts/service-DemovlbCds-csar.csar", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 17:20:07.002 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - ================================================= 17:20:07.002 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Distrubuted service information 17:20:07.002 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service UUID: d2192fd5-6ba4-40d2-9078-e3642d9175ee 17:20:07.002 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service name: demoVLB_CDS 17:20:07.002 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service resources: 17:20:07.003 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Resource: vLB_CDS 68b6da59-68e4 17:20:07.003 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Artifacts: 17:20:07.003 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vpkg.yaml 17:20:07.003 
[pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vlb.yaml 17:20:07.003 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vdns.yaml 17:20:07.004 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: base_template.yaml 17:20:07.004 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - ================================================= 17:20:07.004 [pool-1-thread-1] INFO org.onap.test.core.service.ArtifactsDownloader - Downloading artifacts... 17:20:07.011 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:61) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: 
java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 30 common frames omitted 17:20:07.013 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@7e3512bf 17:20:07.020 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 17:20:07.023 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) at 
org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:61) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 30 common frames omitted 17:20:07.025 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@62744962 17:20:07.025 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 17:20:07.027 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) 
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:61) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 
30 common frames omitted 17:20:07.028 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@3cda61ca 17:20:07.028 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 17:20:07.030 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:61) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at 
java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 30 common frames omitted 17:20:07.031 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@5db0f87d 17:20:07.031 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 17:20:07.040 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 17:20:07.040 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client stopped successfully 17:20:07.040 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 17:20:07.431 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Node 1 disconnected. 17:20:07.434 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Node -1 disconnected. 17:20:07.462 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Node 1 disconnected. 17:20:07.463 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Node -1 disconnected. 17:20:07.463 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Node 2147483646 disconnected. 17:20:07.463 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Group coordinator localhost:43219 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 17:20:07.537 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Node 1 disconnected. 17:20:07.537 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 
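The repeated HttpHostConnectException above shows the artifact download step dialing localhost:30206, the container-internal SDC port, while the HttpWaitStrategy earlier in the log reached the mock SDC through the dynamically mapped host port (49155 -> 30206). As an illustration of the mapped-port pattern only, not a claim about the project's actual wiring or the cause of these errors, here is a hedged Java 11 sketch of resolving the host-side address from a running GenericContainer and issuing the GET against it; the class and method names are assumptions.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.testcontainers.containers.GenericContainer;

    public class MappedPortDownloadSketch {

        // Fetch an artifact path through the host port Testcontainers mapped to container port 30206.
        static String fetchArtifact(GenericContainer<?> mockSdc, String artifactPath) throws Exception {
            // getMappedPort(30206) returns the ephemeral host port (49155 in this run), not 30206 itself
            String baseUrl = "http://" + mockSdc.getHost() + ":" + mockSdc.getMappedPort(30206);

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + artifactPath)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }
    }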
17:20:07.566 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Node 1 disconnected.
17:20:07.566 [kafka-coordinator-heartbeat-thread | noapp] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available.
17:20:07.668 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Node 1 disconnected.
17:20:07.669 [kafka-coordinator-heartbeat-thread | noapp] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-d97ca48c-6dc7-4c91-8b35-3fcc08091f80, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available.
17:20:07.689 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Node 1 disconnected.
17:20:07.689 [kafka-producer-network-thread | dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-e75c1254-b86f-4727-a222-aefaee0f73a7] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available.
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.911 s - in org.onap.test.core.service.ClientInitializerTest
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-ci ---
[INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec
[INFO] Analyzed bundle 'sdc-distribution-ci' with 9 classes
[INFO] 
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-ci ---
[INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization.jar
[INFO] 
[INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-ci ---
[INFO] No previous run data found, generating javadoc.
[INFO] Loading source files for package org.onap.test.core.service...
Loading source files for package org.onap.test.core.config...
Loading source files for package org.onap.test.it...
Constructing Javadoc information...
Standard Doclet version 11.0.16
Building tree for all the packages and classes...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/ArtifactTypeEnum.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/DistributionClientConfig.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsDownloader.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsValidator.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientInitializer.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientNotifyCallback.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/DistributionStatusMessage.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationMessage.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationResult.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/RegisterToSdcTopicIT.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-summary.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-tree.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-summary.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-tree.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-summary.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-tree.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/constant-values.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsDownloader.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientInitializer.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationResult.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationMessage.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsValidator.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/DistributionStatusMessage.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientNotifyCallback.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/DistributionClientConfig.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/ArtifactTypeEnum.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/class-use/RegisterToSdcTopicIT.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-use.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-use.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-use.html...
Building index for all the packages and classes...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/overview-tree.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/index-all.html...
Building index for all classes...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses-index.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allpackages-index.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/deprecated-list.html...
Building index for all classes...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/index.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/overview-summary.html...
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/help-doc.html...
[INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization-javadoc.jar
[INFO] 
[INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-ci ---
[INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
[INFO] 
[INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-ci ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-ci ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
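The "Skipping JaCoCo execution" message above means no integration test wrote coverage data to the jacoco-it.exec destfile configured in failsafeArgLine, so the post-integration-test report has nothing to analyze. A hedged way to confirm that after a local run of the same module (path taken from the failsafeArgLine printed above, relative to the repository root):

    # Hedged check: does the IT coverage file configured above exist and contain data?
    EXEC=sdc-distribution-ci/target/code-coverage/jacoco-it.exec
    if [ -s "$EXEC" ]; then
      echo "jacoco-it.exec present ($(stat -c%s "$EXEC") bytes) - report will be generated"
    else
      echo "no execution data at $EXEC - jacoco:report (post-integration-test) is skipped"
    fi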
[INFO] 
[INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-ci ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-ci ---
[INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT.jar
[INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT.pom
[INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for sdc-sdc-distribution-client 2.1.2-SNAPSHOT:
[INFO] 
[INFO] sdc-sdc-distribution-client ........................ SUCCESS [ 12.390 s]
[INFO] sdc-distribution-client ............................ SUCCESS [ 53.409 s]
[INFO] sdc-distribution-ci ................................ SUCCESS [ 55.389 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:03 min
[INFO] Finished at: 2025-11-06T17:20:10Z
[INFO] ------------------------------------------------------------------------
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2125 killed;
[ssh-agent] Stopped.
[PostBuildScript] - [INFO] Executing post build scripts.
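The three Installing lines above place the CI module's jar, pom, and javadoc jar into the builder's local Maven repository, and the reactor summary confirms all three modules built. On a machine that ran the same install step, the result can be inspected directly at the coordinate printed in the log; a hedged sketch (the path is copied verbatim, with ~ standing in for /home/jenkins on the build node):

    # Hedged sketch: list what maven-install-plugin just wrote for the CI module.
    ls -l ~/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/
    # Expected per the log: the .jar, the .pom, and the -javadoc.jar for 2.1.2-SNAPSHOT.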
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins5215117388551572917.sh
---> sysstat.sh
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins2130153477037482833.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']'
+ mkdir -p /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins7208380383431371649.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-XozJ from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
lf-activate-venv(): INFO: Attempting to install with network-safe options...
lf-activate-venv(): INFO: Base packages installed successfully
lf-activate-venv(): INFO: Installing additional packages: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-XozJ/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins2809912781380419542.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config2713355366047816643tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins13895117739869646593.sh
---> create-netrc.sh
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins4287519295899471917.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-XozJ from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
lf-activate-venv(): INFO: Attempting to install with network-safe options...
lf-activate-venv(): INFO: Base packages installed successfully
lf-activate-venv(): INFO: Installing additional packages: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-XozJ/bin to PATH
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins7488899662911037956.sh
---> sudo-logs.sh
Archiving 'sudo' log..
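The package-listing.sh trace above is the LF CI helper that diffs the node's OS package list before and after the job and copies the snapshots into the workspace archives. The traced commands already show the whole flow; the following is only a condensed, hedged restatement of the Debian/Ubuntu branch, reusing the file names from the trace (error handling and other OS families omitted):

    # Hedged sketch of the package-listing flow traced above (Debian family only).
    workspace=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise
    dpkg -l | grep '^ii' > /tmp/packages_end.txt                              # end-of-job snapshot
    if [ -f /tmp/packages_start.txt ] && [ -f /tmp/packages_end.txt ]; then
      diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true   # diff exits 1 on changes
    fi
    mkdir -p "$workspace/archives/"
    cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$workspace/archives/"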
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins6182686865344614451.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-XozJ from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
lf-activate-venv(): INFO: Attempting to install with network-safe options...
lf-activate-venv(): INFO: Base packages installed successfully
lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-XozJ/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash -l /tmp/jenkins14857238234523252022.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-XozJ from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
lf-activate-venv(): INFO: Attempting to install with network-safe options...
lf-activate-venv(): INFO: Base packages installed successfully
lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-XozJ/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/sdc-sdc-distribution-client-master-integration-pairwise/1239
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
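logs-deploy.sh pushes the console log plus any workspace files matching the pattern printed above to the Nexus logs path for this run. Whether anything beyond the console log was picked up can be checked by evaluating the same glob against the workspace; a hedged sketch, with the pattern taken verbatim from the output above:

    # Hedged sketch: list the files the archive pattern above would pick up.
    cd /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise
    find . -path '*/target/surefire-reports/*-output.txt' -print
    # No output means only the console log and default artifacts were archived for this run.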
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-10876 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used  Avail  Use%  Mounted on
udev            16G   0     16G    0%    /dev
tmpfs           3.2G  712K  3.2G   1%    /run
/dev/vda1       155G  11G   145G   8%    /
tmpfs           16G   0     16G    0%    /dev/shm
tmpfs           5.0M  0     5.0M   0%    /run/lock
tmpfs           16G   0     16G    0%    /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5%    /boot/efi
tmpfs           3.2G  0     3.2G   0%    /run/user/1001

---> free -m:
       total  used  free   shared  buff/cache  available
Mem:   32167  881   28156  0       3129        30834
Swap:  1023   0     1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:cf:8a:b0 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.81/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 86080sec preferred_lft 86080sec
    inet6 fe80::f816:3eff:fecf:8ab0/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:f8:de:b0:6b brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f8ff:fede:b06b/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-10876)  11/06/25  _x86_64_  (8 CPU)

17:15:37  LINUX RESTART  (8 CPU)

17:16:01   tps     rtps   wtps    bread/s  bwrtn/s
17:17:03   256.92  68.11  188.82  4555.11  90477.45
17:18:01   103.96  0.21   103.75  27.41    69486.36
17:19:01   149.65  21.38  128.27  764.15   32237.39
17:20:01   87.90   4.60   83.30   610.96   29795.03
Average:   149.98  23.76  126.22  1501.23  55384.71

17:16:01   kbmemfree  kbavail   kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
17:17:03   30325720   31689620  2613492    7.93      50068      1635968   1434120   4.22     853172    1494064  54016
17:18:01   29996520   31648968  2942692    8.93      73976      1885100   1483376   4.36     909440    1733536  221668
17:19:01   28187180   30014464  4752032    14.43     84588      2041436   3147372   9.26     2582092   1850592  940
17:20:01   26914900   29642132  6024312    18.29     104332     2893352   6139464   18.06    3065596   2559760  696
Average:   28856080   30748796  4083132    12.40     78241      2113964   3051083   8.98     1852575   1909488  69330
17:16:01   IFACE        rxpck/s  txpck/s  rxkB/s   txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
17:17:03   docker0      0.00     0.00     0.00     0.00    0.00     0.00     0.00      0.00
17:17:03   ens3         383.19   266.17   1206.05  64.32   0.00     0.00     0.00      0.00
17:17:03   lo           1.53     1.53     0.17     0.17    0.00     0.00     0.00      0.00
17:18:01   docker0      0.00     0.00     0.00     0.00    0.00     0.00     0.00      0.00
17:18:01   ens3         49.04    35.93    928.95   8.34    0.00     0.00     0.00      0.00
17:18:01   lo           0.69     0.69     0.07     0.07    0.00     0.00     0.00      0.00
17:19:01   docker0      0.00     0.00     0.00     0.00    0.00     0.00     0.00      0.00
17:19:01   ens3         778.62   627.21   2144.30  194.61  0.00     0.00     0.00      0.00
17:19:01   lo           17.58    17.58    2.30     2.30    0.00     0.00     0.00      0.00
17:20:01   vethb9dc5dc  0.15     0.33     0.02     0.04    0.00     0.00     0.00      0.00
17:20:01   docker0      1.90     2.70     0.39     0.50    0.00     0.00     0.00      0.00
17:20:01   ens3         468.96   370.09   6942.95  64.95   0.00     0.00     0.00      0.00
17:20:01   lo           8.68     8.68     1.31     1.31    0.00     0.00     0.00      0.00
Average:   vethb9dc5dc  0.04     0.08     0.00     0.01    0.00     0.00     0.00      0.00
Average:   docker0      0.48     0.68     0.10     0.13    0.00     0.00     0.00      0.00
Average:   ens3         422.97   327.20   2820.75  83.67   0.00     0.00     0.00      0.00
Average:   lo           7.17     7.17     0.97     0.97    0.00     0.00     0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-10876)  11/06/25  _x86_64_  (8 CPU)

17:15:37  LINUX RESTART  (8 CPU)

17:16:01   CPU  %user  %nice  %system  %iowait  %steal  %idle
17:17:03   all  6.89   0.00   1.00     11.72    0.04    80.35
17:17:03   0    4.93   0.00   1.14     24.29    0.03    69.61
17:17:03   1    9.59   0.00   1.07     17.50    0.03    71.81
17:17:03   2    5.18   0.00   0.40     1.59     0.03    92.80
17:17:03   3    2.66   0.00   0.57     3.42     0.02    93.34
17:17:03   4    9.75   0.00   2.19     1.96     0.03    86.07
17:17:03   5    1.98   0.00   0.62     33.75    0.05    63.60
17:17:03   6    9.27   0.00   0.77     7.75     0.05    82.16
17:17:03   7    11.73  0.00   1.26     3.55     0.03    83.42
17:18:01   all  5.96   0.00   0.42     8.50     0.02    85.10
17:18:01   0    0.17   0.00   0.10     6.79     0.00    92.93
17:18:01   1    4.75   0.00   0.34     0.53     0.02    94.35
17:18:01   2    0.07   0.00   0.00     0.00     0.00    99.93
17:18:01   3    0.21   0.00   0.00     0.07     0.02    99.71
17:18:01   4    27.25  0.00   1.64     6.09     0.07    64.95
17:18:01   5    11.27  0.00   0.81     54.11    0.05    33.75
17:18:01   6    3.67   0.00   0.38     0.48     0.02    95.45
17:18:01   7    0.36   0.00   0.05     0.00     0.02    99.57
17:19:01   all  18.47  0.00   1.52     3.88     0.07    76.06
17:19:01   0    16.52  0.00   1.72     0.03     0.05    81.67
17:19:01   1    11.91  0.00   1.30     17.29    0.05    69.45
17:19:01   2    22.41  0.00   1.75     2.05     0.07    73.71
17:19:01   3    14.27  0.00   0.87     0.90     0.08    83.87
17:19:01   4    19.84  0.00   0.72     0.89     0.07    78.49
17:19:01   5    23.39  0.00   1.86     5.91     0.07    68.78
17:19:01   6    17.27  0.00   1.32     0.63     0.07    80.72
17:19:01   7    22.21  0.00   2.58     3.36     0.07    71.78
17:20:01   all  14.61  0.00   2.21     2.81     0.08    80.30
17:20:01   0    15.45  0.00   2.72     0.29     0.07    81.48
17:20:01   1    15.32  0.00   2.54     3.90     0.10    78.14
17:20:01   2    15.28  0.00   2.55     10.32    0.08    71.77
17:20:01   3    14.10  0.00   2.17     1.06     0.10    82.58
17:20:01   4    14.07  0.00   1.99     1.09     0.07    82.78
17:20:01   5    16.92  0.00   2.00     0.25     0.08    80.75
17:20:01   6    11.90  0.00   1.55     1.11     0.10    85.34
17:20:01   7    13.83  0.00   2.15     4.42     0.08    79.51
Average:   all  11.52  0.00   1.29     6.72     0.05    80.42
Average:   0    9.34   0.00   1.43     7.85     0.04    81.34
Average:   1    10.43  0.00   1.32     9.89     0.05    78.31
Average:   2    10.81  0.00   1.18     3.51     0.05    84.45
Average:   3    7.87   0.00   0.91     1.37     0.05    89.80
Average:   4    17.65  0.00   1.64     2.48     0.06    78.18
Average:   5    13.41  0.00   1.33     23.27    0.06    61.93
Average:   6    10.58  0.00   1.01     2.50     0.06    85.85
Average:   7    12.11  0.00   1.52     2.85     0.05    83.46
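The tables above appear to be standard sysstat post-processing of the activity data collected while the job ran: disk transfers (-b), memory (-r), and per-interface network rates (-n DEV) first, then per-CPU utilisation (-P ALL). A hedged sketch of how the same tables can be regenerated from the collected data file on an Ubuntu 18.04 node; the data file path is the Debian/Ubuntu sysstat default and is not printed in the log, so treat it as an assumption:

    # Hedged sketch: re-run the sar queries shown above against the day's sysstat data file.
    # /var/log/sysstat/saDD is the Debian/Ubuntu default; set DD to the day of month of the run.
    SAFILE=/var/log/sysstat/sa06
    sar -b -r -n DEV -f "$SAFILE"   # disk transfers, memory usage, per-interface network rates
    sar -P ALL -f "$SAFILE"         # per-CPU %user/%nice/%system/%iowait/%steal/%idle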