22:38:40 Started by timer
22:38:40 Running as SYSTEM
22:38:40 [EnvInject] - Loading node environment variables.
22:38:40 Building remotely on prd-ubuntu1804-builder-4c-4g-82701 (ubuntu1804-builder-4c-4g) in workspace /w/workspace/sdc-sdc-distribution-client-maven-clm-master
22:38:40 [ssh-agent] Looking for ssh-agent implementation...
22:38:41 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
22:38:41 $ ssh-agent
22:38:41 SSH_AUTH_SOCK=/tmp/ssh-d0vuCPnNofgk/agent.1648
22:38:41 SSH_AGENT_PID=1650
22:38:41 [ssh-agent] Started.
22:38:41 Running ssh-add (command line suppressed)
22:38:41 Identity added: /w/workspace/sdc-sdc-distribution-client-maven-clm-master@tmp/private_key_8694860845931050795.key (/w/workspace/sdc-sdc-distribution-client-maven-clm-master@tmp/private_key_8694860845931050795.key)
22:38:41 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
22:38:41 The recommended git tool is: NONE
22:38:42 using credential onap-jenkins-ssh
22:38:42 Wiping out workspace first.
22:38:42 Cloning the remote Git repository
22:38:42 Cloning repository git://cloud.onap.org/mirror/sdc/sdc-distribution-client
22:38:42 > git init /w/workspace/sdc-sdc-distribution-client-maven-clm-master # timeout=10
22:38:42 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client
22:38:42 > git --version # timeout=10
22:38:42 > git --version # 'git version 2.17.1'
22:38:42 using GIT_SSH to set credentials Gerrit user
22:38:42 Verifying host key using manually-configured host key entries
22:38:42 > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client +refs/heads/*:refs/remotes/origin/* # timeout=10
22:38:43 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client # timeout=10
22:38:43 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
22:38:43 Avoid second fetch
22:38:43 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
22:38:43 Checking out Revision 09a9061fb4eef8a7b54fb35ae9391837939ea155 (refs/remotes/origin/master)
22:38:43 > git config core.sparsecheckout # timeout=10
22:38:43 > git checkout -f 09a9061fb4eef8a7b54fb35ae9391837939ea155 # timeout=10
22:38:44 Commit message: "Release 2.1.1"
22:38:44 > git rev-list --no-walk 09a9061fb4eef8a7b54fb35ae9391837939ea155 # timeout=10
22:38:47 provisioning config files...
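The checkout above is pinned to revision 09a9061fb4eef8a7b54fb35ae9391837939ea155 on refs/remotes/origin/master ("Release 2.1.1"). The job does this with the git CLI exactly as logged; purely as an illustration, here is a minimal JGit sketch of the same clone-then-pin sequence (library choice, class name and target directory are assumptions, not part of the job):

    import java.io.File;
    import org.eclipse.jgit.api.Git;

    public class PinnedCheckout {
        public static void main(String[] args) throws Exception {
            // Mirror URL and revision exactly as reported in the console output above.
            String uri = "git://cloud.onap.org/mirror/sdc/sdc-distribution-client";
            String revision = "09a9061fb4eef8a7b54fb35ae9391837939ea155";
            try (Git git = Git.cloneRepository()
                    .setURI(uri)
                    .setDirectory(new File("sdc-distribution-client"))
                    .setNoCheckout(true)   // fetch first, then pin the exact commit
                    .call()) {
                git.checkout().setName(revision).call();   // detached HEAD at the pinned revision
            }
        }
    }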
22:38:47 copy managed file [npmrc] to file:/home/jenkins/.npmrc
22:38:47 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
22:38:47 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins3406910114941878589.sh
22:38:47 ---> python-tools-install.sh
22:38:47 Setup pyenv:
22:38:47 * system (set by /opt/pyenv/version)
22:38:47 * 3.8.13 (set by /opt/pyenv/version)
22:38:47 * 3.9.13 (set by /opt/pyenv/version)
22:38:47 * 3.10.6 (set by /opt/pyenv/version)
22:38:51 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-TIdl
22:38:51 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
22:38:54 lf-activate-venv(): INFO: Installing: lftools
22:39:22 lf-activate-venv(): INFO: Adding /tmp/venv-TIdl/bin to PATH
22:39:22 Generating Requirements File
22:39:41 Python 3.10.6
22:39:42 pip 24.3.1 from /tmp/venv-TIdl/lib/python3.10/site-packages/pip (python 3.10)
22:39:42 appdirs==1.4.4
22:39:42 argcomplete==3.5.1
22:39:42 aspy.yaml==1.3.0
22:39:42 attrs==24.2.0
22:39:42 autopage==0.5.2
22:39:42 beautifulsoup4==4.12.3
22:39:42 boto3==1.35.57
22:39:42 botocore==1.35.57
22:39:42 bs4==0.0.2
22:39:42 cachetools==5.5.0
22:39:42 certifi==2024.8.30
22:39:42 cffi==1.17.1
22:39:42 cfgv==3.4.0
22:39:42 chardet==5.2.0
22:39:42 charset-normalizer==3.4.0
22:39:42 click==8.1.7
22:39:42 cliff==4.7.0
22:39:42 cmd2==2.5.4
22:39:42 cryptography==3.3.2
22:39:42 debtcollector==3.0.0
22:39:42 decorator==5.1.1
22:39:42 defusedxml==0.7.1
22:39:42 Deprecated==1.2.14
22:39:42 distlib==0.3.9
22:39:42 dnspython==2.7.0
22:39:42 docker==4.2.2
22:39:42 dogpile.cache==1.3.3
22:39:42 durationpy==0.9
22:39:42 email_validator==2.2.0
22:39:42 filelock==3.16.1
22:39:42 future==1.0.0
22:39:42 gitdb==4.0.11
22:39:42 GitPython==3.1.43
22:39:42 google-auth==2.36.0
22:39:42 httplib2==0.22.0
22:39:42 identify==2.6.2
22:39:42 idna==3.10
22:39:42 importlib-resources==1.5.0
22:39:42 iso8601==2.1.0
22:39:42 Jinja2==3.1.4
22:39:42 jmespath==1.0.1
22:39:42 jsonpatch==1.33
22:39:42 jsonpointer==3.0.0
22:39:42 jsonschema==4.23.0
22:39:42 jsonschema-specifications==2024.10.1
22:39:42 keystoneauth1==5.8.0
22:39:42 kubernetes==31.0.0
22:39:42 lftools==0.37.10
22:39:42 lxml==5.3.0
22:39:42 MarkupSafe==3.0.2
22:39:42 msgpack==1.1.0
22:39:42 multi_key_dict==2.0.3
22:39:42 munch==4.0.0
22:39:42 netaddr==1.3.0
22:39:42 netifaces==0.11.0
22:39:42 niet==1.4.2
22:39:42 nodeenv==1.9.1
22:39:42 oauth2client==4.1.3
22:39:42 oauthlib==3.2.2
22:39:42 openstacksdk==4.1.0
22:39:42 os-client-config==2.1.0
22:39:42 os-service-types==1.7.0
22:39:42 osc-lib==3.1.0
22:39:42 oslo.config==9.6.0
22:39:42 oslo.context==5.6.0
22:39:42 oslo.i18n==6.4.0
22:39:42 oslo.log==6.1.2
22:39:42 oslo.serialization==5.5.0
22:39:42 oslo.utils==7.4.0
22:39:42 packaging==24.2
22:39:42 pbr==6.1.0
22:39:42 platformdirs==4.3.6
22:39:42 prettytable==3.12.0
22:39:42 psutil==6.1.0
22:39:42 pyasn1==0.6.1
22:39:42 pyasn1_modules==0.4.1
22:39:42 pycparser==2.22
22:39:42 pygerrit2==2.0.15
22:39:42 PyGithub==2.5.0
22:39:42 PyJWT==2.9.0
22:39:42 PyNaCl==1.5.0
22:39:42 pyparsing==2.4.7
22:39:42 pyperclip==1.9.0
22:39:42 pyrsistent==0.20.0
22:39:42 python-cinderclient==9.6.0
22:39:42 python-dateutil==2.9.0.post0
22:39:42 python-heatclient==4.0.0
22:39:42 python-jenkins==1.8.2
22:39:42 python-keystoneclient==5.5.0
22:39:42 python-magnumclient==4.7.0
22:39:42 python-openstackclient==7.2.1
22:39:42 python-swiftclient==4.6.0
22:39:42 PyYAML==6.0.2
22:39:42 referencing==0.35.1
22:39:42 requests==2.32.3
22:39:42 requests-oauthlib==2.0.0
22:39:42 requestsexceptions==1.4.0
22:39:42 rfc3986==2.0.0
22:39:42 rpds-py==0.21.0
22:39:42 rsa==4.9
22:39:42 ruamel.yaml==0.18.6
22:39:42 ruamel.yaml.clib==0.2.12
22:39:42 s3transfer==0.10.3
22:39:42 simplejson==3.19.3
22:39:42 six==1.16.0
22:39:42 smmap==5.0.1
22:39:42 soupsieve==2.6
22:39:42 stevedore==5.3.0
22:39:42 tabulate==0.9.0
22:39:42 toml==0.10.2
22:39:42 tomlkit==0.13.2
22:39:42 tqdm==4.67.0
22:39:42 typing_extensions==4.12.2
22:39:42 tzdata==2024.2
22:39:42 urllib3==1.26.20
22:39:42 virtualenv==20.27.1
22:39:42 wcwidth==0.2.13
22:39:42 websocket-client==1.8.0
22:39:42 wrapt==1.16.0
22:39:42 xdg==6.0.0
22:39:42 xmltodict==0.14.2
22:39:42 yq==3.4.3
22:39:42 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/sh -xe /tmp/jenkins3493697765873758088.sh
22:39:42 + echo quiet=on
22:39:42 Unpacking https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.5.4/apache-maven-3.5.4-bin.zip to /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35 on prd-ubuntu1804-builder-4c-4g-82701
22:39:43 [sdc-sdc-distribution-client-maven-clm-master] $ /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35/bin/mvn -DGERRIT_BRANCH=master -Dsha1=origin/master -DMAVEN_OPTS= -DPROJECT=sdc/sdc-distribution-client -DMVN=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn35/bin/mvn -DGERRIT_REFSPEC=refs/heads/master -DM2_HOME=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn35 -DSTREAM=master "-DARCHIVE_ARTIFACTS=**/*.log **/hs_err_*.log **/target/**/feature.xml **/target/failsafe-reports/failsafe-summary.xml **/target/surefire-reports/*-output.txt
22:39:43 " -DNEXUS_IQ_STAGE=build -DMAVEN_PARAMS= -DGERRIT_PROJECT=sdc/sdc-distribution-client --version
22:39:43 Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z)
22:39:43 Maven home: /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35
22:39:43 Java version: 11.0.16, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64
22:39:43 Default locale: en, platform encoding: UTF-8
22:39:43 OS name: "linux", version: "4.15.0-194-generic", arch: "amd64", family: "unix"
22:39:43 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/sh -xe /tmp/jenkins13385004179421609586.sh
22:39:43 + rm /home/jenkins/.wgetrc
22:39:43 [EnvInject] - Injecting environment variables from a build step.
22:39:43 [EnvInject] - Injecting as environment variables the properties content
22:39:43 SET_JDK_VERSION=openjdk11
22:39:43 GIT_URL="git://cloud.onap.org/mirror"
22:39:43 
22:39:43 [EnvInject] - Variables injected successfully.
22:39:43 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/sh /tmp/jenkins1443180127448433337.sh
22:39:43 ---> update-java-alternatives.sh
22:39:43 ---> Updating Java version
22:39:43 ---> Ubuntu/Debian system detected
22:39:43 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
22:39:43 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
22:39:43 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
22:39:43 openjdk version "11.0.16" 2022-07-19
22:39:43 OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu118.04)
22:39:43 OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu118.04, mixed mode)
22:39:43 JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
22:39:43 [EnvInject] - Injecting environment variables from a build step.
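Each -D flag on the mvn command line above (GERRIT_BRANCH, STREAM, NEXUS_IQ_STAGE and so on) is passed to the Maven JVM as a system property, and update-java-alternatives is what makes the subsequent runs report OpenJDK 11. A minimal sketch of reading those values back from inside the same JVM; the class name is illustrative, while the property names are copied from the invocation above:

    public class BuildEnvProbe {
        public static void main(String[] args) {
            // -D flags from the mvn invocation surface as JVM system properties.
            System.out.println("GERRIT_BRANCH = " + System.getProperty("GERRIT_BRANCH"));
            System.out.println("STREAM        = " + System.getProperty("STREAM"));
            // After update-java-alternatives the runtime should report OpenJDK 11.
            System.out.println("java.version  = " + System.getProperty("java.version"));
            System.out.println("java.home     = " + System.getProperty("java.home"));
        }
    }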
22:39:43 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 22:39:43 [EnvInject] - Variables injected successfully. 22:39:43 provisioning config files... 22:39:43 copy managed file [global-settings] to file:/w/workspace/sdc-sdc-distribution-client-maven-clm-master@tmp/config3272048468459474082tmp 22:39:43 copy managed file [sdc-sdc-distribution-client-settings] to file:/w/workspace/sdc-sdc-distribution-client-maven-clm-master@tmp/config8797132261051030011tmp 22:39:43 [EnvInject] - Injecting environment variables from a build step. 22:39:43 [EnvInject] - Injecting as environment variables the properties content 22:39:43 MAVEN_GOALS=clean install 22:39:43 22:39:43 [EnvInject] - Variables injected successfully. 22:39:43 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash -l /tmp/jenkins5048030747083821673.sh 22:39:44 ---> common-variables.sh 22:39:44 --show-version --batch-mode -Djenkins -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -Dmaven.repo.local=/tmp/r -Dorg.ops4j.pax.url.mvn.localRepository=/tmp/r 22:39:44 ---> sonatype-clm.sh 22:39:44 Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) 22:39:44 Maven home: /w/tools/hudson.tasks.Maven_MavenInstallation/mvn35 22:39:44 Java version: 11.0.16, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64 22:39:44 Default locale: en, platform encoding: UTF-8 22:39:44 OS name: "linux", version: "4.15.0-194-generic", arch: "amd64", family: "unix" 22:39:44 [INFO] Scanning for projects... 22:39:45 [INFO] ------------------------------------------------------------------------ 22:39:45 [INFO] Reactor Build Order: 22:39:45 [INFO] 22:39:45 [INFO] sdc-sdc-distribution-client [pom] 22:39:45 [INFO] sdc-distribution-client [jar] 22:39:45 [INFO] sdc-distribution-ci [jar] 22:39:47 [INFO] 22:39:47 [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client >-- 22:39:47 [INFO] Building sdc-sdc-distribution-client 2.1.1-SNAPSHOT [1/3] 22:39:47 [INFO] --------------------------------[ pom ]--------------------------------- 22:39:47 [INFO] 22:39:47 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-main-distribution-client --- 22:39:47 [INFO] 22:39:47 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-main-distribution-client --- 22:39:50 [INFO] 22:39:50 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-main-distribution-client --- 22:39:50 [INFO] 22:39:50 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-main-distribution-client --- 22:39:51 [INFO] surefireArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:39:51 [INFO] 22:39:51 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-main-distribution-client --- 22:39:51 [INFO] argLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:39:51 [INFO] 22:39:51 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-main-distribution-client --- 22:39:54 [INFO] 22:39:54 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ 
sdc-main-distribution-client --- 22:39:54 [INFO] 22:39:54 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-main-distribution-client --- 22:39:54 [INFO] Skipping JaCoCo execution due to missing execution data file. 22:39:54 [INFO] 22:39:54 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-main-distribution-client --- 22:39:55 [INFO] Not executing Javadoc as the project is not a Java classpath-capable package 22:39:55 [INFO] 22:39:55 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-main-distribution-client --- 22:39:55 [INFO] failsafeArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:39:55 [INFO] 22:39:55 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-main-distribution-client --- 22:39:56 [INFO] No tests to run. 22:39:56 [INFO] 22:39:56 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-main-distribution-client --- 22:39:56 [INFO] Skipping JaCoCo execution due to missing execution data file. 22:39:56 [INFO] 22:39:56 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-main-distribution-client --- 22:39:56 [INFO] 22:39:56 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-main-distribution-client --- 22:39:56 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-maven-clm-master/pom.xml to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-main-distribution-client/2.1.1-SNAPSHOT/sdc-main-distribution-client-2.1.1-SNAPSHOT.pom 22:39:56 [INFO] 22:39:56 [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ sdc-main-distribution-client --- 22:39:58 [INFO] org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client:pom:2.1.1-SNAPSHOT 22:39:58 [INFO] 22:39:58 [INFO] --- clm-maven-plugin:2.48.3-01:index (default-cli) @ sdc-main-distribution-client --- 22:39:58 [INFO] Saved module information to /w/workspace/sdc-sdc-distribution-client-maven-clm-master/target/sonatype-clm/module.xml 22:39:58 [INFO] 22:39:58 [INFO] ----< org.onap.sdc.sdc-distribution-client:sdc-distribution-client >---- 22:39:58 [INFO] Building sdc-distribution-client 2.1.1-SNAPSHOT [2/3] 22:39:58 [INFO] --------------------------------[ jar ]--------------------------------- 22:40:03 [INFO] 22:40:03 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client --- 22:40:03 [INFO] 22:40:03 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client --- 22:40:03 [INFO] 22:40:03 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client --- 22:40:03 [INFO] 22:40:03 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client --- 22:40:03 [INFO] surefireArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:40:03 [INFO] 22:40:03 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client --- 22:40:03 [INFO] argLine set to 
-javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:40:03 [INFO] 22:40:03 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client --- 22:40:03 [INFO] 22:40:03 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client --- 22:40:03 [INFO] 22:40:03 [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client --- 22:40:03 [INFO] Using 'UTF-8' encoding to copy filtered resources. 22:40:03 [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/src/main/resources 22:40:03 [INFO] 22:40:03 [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client --- 22:40:04 [INFO] Changes detected - recompiling the module! 22:40:04 [INFO] Compiling 57 source files to /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/classes 22:40:06 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Some input files use or override a deprecated API. 22:40:06 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Recompile with -Xlint:deprecation for details. 22:40:06 [INFO] 22:40:06 [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client --- 22:40:06 [INFO] Using 'UTF-8' encoding to copy filtered resources. 22:40:06 [INFO] Copying 8 resources 22:40:06 [INFO] 22:40:06 [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client --- 22:40:06 [INFO] Changes detected - recompiling the module! 22:40:06 [INFO] Compiling 22 source files to /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/test-classes 22:40:07 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Some input files use or override a deprecated API. 22:40:07 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Recompile with -Xlint:deprecation for details. 22:40:07 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java uses unchecked or unsafe operations. 22:40:07 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: Recompile with -Xlint:unchecked for details. 
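The jacoco-maven-plugin prepare-agent executions above set argLine and surefireArgLine so the forked test JVM writes execution data to target/jacoco.exec and target/code-coverage/jacoco-ut.exec. A minimal sketch of inspecting such a file offline with the org.jacoco.core API; the class name and hard-coded path are illustrative assumptions:

    import java.io.File;
    import java.io.IOException;
    import org.jacoco.core.data.ExecutionData;
    import org.jacoco.core.tools.ExecFileLoader;

    public class DumpJacocoExec {
        public static void main(String[] args) throws IOException {
            // Path mirrors the destfile= value from the prepare-agent argLine above.
            ExecFileLoader loader = new ExecFileLoader();
            loader.load(new File("target/jacoco.exec"));
            for (ExecutionData data : loader.getExecutionDataStore().getContents()) {
                int hit = 0;
                for (boolean probe : data.getProbes()) {
                    if (probe) {
                        hit++;
                    }
                }
                System.out.println(data.getName() + ": " + hit + " probes hit");
            }
        }
    }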
22:40:07 [INFO]
22:40:07 [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client ---
22:40:07 [INFO]
22:40:07 [INFO] -------------------------------------------------------
22:40:07 [INFO] T E S T S
22:40:07 [INFO] -------------------------------------------------------
22:40:09 [INFO] Running org.onap.sdc.http.HttpSdcClientResponseTest
22:40:10 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.165 s - in org.onap.sdc.http.HttpSdcClientResponseTest
22:40:10 [INFO] Running org.onap.sdc.http.HttpSdcClientTest
22:40:11 22:40:11.001 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target
22:40:11 22:40:11.627 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target
22:40:11 22:40:11.629 [main] DEBUG org.onap.sdc.http.HttpSdcClient - GET Response Status 200
22:40:11 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.455 s - in org.onap.sdc.http.HttpSdcClientTest
22:40:11 [INFO] Running org.onap.sdc.http.HttpClientFactoryTest
22:40:12 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.426 s - in org.onap.sdc.http.HttpClientFactoryTest
22:40:12 [INFO] Running org.onap.sdc.http.HttpRequestFactoryTest
22:40:12 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 s - in org.onap.sdc.http.HttpRequestFactoryTest
22:40:12 [INFO] Running org.onap.sdc.http.SdcConnectorClientTest
22:40:12 22:40:12.519 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 882a7182-c27b-48ad-8b60-52b0ba2ff5c8 url= /sdc/v1/artifactTypes
22:40:12 22:40:12.521 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 448176275
22:40:12 22:40:12.526 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem]
22:40:12 22:40:12.527 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: ["Service","Resource","VF","VFC"]
22:40:12 22:40:12.528 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to close http response
22:40:12 22:40:12.542 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= bd31ffef-4f1c-46c0-9b22-99ef967406ee url= /sdc/v1/artifactTypes
22:40:12 22:40:12.545 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to parse response from SDC. error:
22:40:12 java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only.
22:40:12 at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) 22:40:12 at java.base/java.io.InputStream.read(InputStream.java:271) 22:40:12 at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) 22:40:12 at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) 22:40:12 at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) 22:40:12 at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) 22:40:12 at java.base/java.io.Reader.read(Reader.java:229) 22:40:12 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) 22:40:12 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) 22:40:12 at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) 22:40:12 at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) 22:40:12 at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) 22:40:12 at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) 22:40:12 at org.onap.sdc.http.SdcConnectorClient.parseGetValidArtifactTypesResponse(SdcConnectorClient.java:155) 22:40:12 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:79) 22:40:12 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 22:40:12 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$z7O3kCzl.invokeWithArguments(Unknown Source) 22:40:12 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 22:40:12 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 22:40:12 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 22:40:12 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 22:40:12 at org.mockito.Answers.answer(Answers.java:99) 22:40:12 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 22:40:12 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 22:40:12 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 22:40:12 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 22:40:12 at org.onap.sdc.http.SdcConnectorClientTest.getValidArtifactTypesListParsingExceptionHandlingTest(SdcConnectorClientTest.java:216) 22:40:12 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 22:40:12 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 22:40:12 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 22:40:12 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 22:40:12 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 22:40:12 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 22:40:12 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 22:40:12 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 22:40:12 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 22:40:12 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 22:40:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 22:40:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 22:40:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 22:40:12 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 22:40:12 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 22:40:12 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:12 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:12 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:12 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:12 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:12 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:12 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:12 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:12 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 22:40:12 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 22:40:12 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 22:40:12 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 22:40:12 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 22:40:12 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 22:40:12 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 22:40:12 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 22:40:12 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 22:40:12 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 22:40:12 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 22:40:12 22:40:12.628 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to get artifact from response 22:40:12 22:40:12.632 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 249447a9-502a-4a72-bfe0-c13f04d00954 url= /sdc/v1/artifactTypes 22:40:12 22:40:12.632 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 753964406 22:40:12 22:40:12.633 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 22:40:12 22:40:12.633 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 22:40:12 22:40:12.635 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 3a3a989d-a451-4f0a-be54-61896e20caf1 url= /sdc/v1/distributionKafkaData 22:40:12 22:40:12.636 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 2041316009 22:40:12 22:40:12.636 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 22:40:12 22:40:12.637 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 22:40:12 22:40:12.643 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 873287880 22:40:12 22:40:12.643 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_PROBLEM, responseMessage=SDC server problem] 22:40:12 22:40:12.643 [main] ERROR org.onap.sdc.http.SdcConnectorClient - During error handling another exception occurred: 22:40:12 java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only. 
22:40:12 at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) 22:40:12 at java.base/java.io.InputStream.read(InputStream.java:271) 22:40:12 at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) 22:40:12 at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) 22:40:12 at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) 22:40:12 at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) 22:40:12 at java.base/java.io.Reader.read(Reader.java:229) 22:40:12 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) 22:40:12 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) 22:40:12 at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) 22:40:12 at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) 22:40:12 at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) 22:40:12 at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) 22:40:12 at org.onap.sdc.http.SdcConnectorClient.handleSdcDownloadArtifactError(SdcConnectorClient.java:256) 22:40:12 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:144) 22:40:12 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 22:40:12 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$z7O3kCzl.invokeWithArguments(Unknown Source) 22:40:12 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 22:40:12 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 22:40:12 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 22:40:12 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 22:40:12 at org.mockito.Answers.answer(Answers.java:99) 22:40:12 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 22:40:12 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 22:40:12 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 22:40:12 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 22:40:12 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:130) 22:40:12 at org.onap.sdc.http.SdcConnectorClientTest.downloadArtifactHandleDownloadErrorTest(SdcConnectorClientTest.java:304) 22:40:12 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 22:40:12 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 22:40:12 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 22:40:12 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 22:40:12 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 22:40:12 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 22:40:12 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 22:40:12 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 22:40:12 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 22:40:12 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 22:40:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 22:40:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 22:40:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 22:40:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 22:40:12 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 22:40:12 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 22:40:12 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:12 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:12 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:12 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:12 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:12 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:12 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:12 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:12 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 22:40:12 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 22:40:12 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 22:40:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 22:40:12 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 22:40:12 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 22:40:12 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 22:40:12 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 22:40:12 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 22:40:12 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 22:40:12 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 22:40:12 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 22:40:12 22:40:12.666 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 69c8b380-c1b3-4695-b3df-364c6a6fff1d url= /sdc/v1/artifactTypes 22:40:12 22:40:12.673 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 61fc7e91-74ec-45ea-87de-6d8a1051008b url= /sdc/v1/distributionKafkaData 22:40:12 [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.573 s - in org.onap.sdc.http.SdcConnectorClientTest 22:40:12 [INFO] Running org.onap.sdc.utils.SdcKafkaTest 22:40:12 22:40:12.691 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Starting Zookeeper test server 22:40:12 22:40:12.868 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - clientPortAddress is 0.0.0.0:36225 22:40:12 22:40:12.869 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - secureClientPort is not set 22:40:12 22:40:12.869 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - observerMasterPort is not set 22:40:12 22:40:12.869 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider 22:40:12 22:40:12.871 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServerMain - Starting server 22:40:12 22:40:12.905 [Thread-2] INFO org.apache.zookeeper.server.ServerMetrics - ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@40f0bbb7 22:40:12 22:40:12.910 [Thread-2] DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog - Opening datadir:/tmp/kafka-unit4159603097563790 snapDir:/tmp/kafka-unit4159603097563790 22:40:12 22:40:12.910 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - zookeeper.snapshot.trust.empty : false 22:40:12 22:40:12.932 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - ______ _ 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |___ / | | 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - | | 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |_| 22:40:12 22:40:12.933 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 22:40:12 22:40:12.934 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 22:40:12 22:40:12.934 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=prd-ubuntu1804-builder-4c-4g-82701 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=11.0.16 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server 
environment:java.vendor=Ubuntu 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/classes:/tmp/r/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/tmp/r/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/tmp/r/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/tmp/r/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/tmp/r/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/tmp/r/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/tmp/r/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/tmp/r/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/tmp/r/org/json/json/20220320/json-20220320.jar:/tmp/r/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/tmp/r/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/tmp/r/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/tmp/r/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/tmp/r/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/tmp/r/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/tmp/r/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/tmp/r/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/tmp/r/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/tmp/r/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/tmp/r/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/tmp/r/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/tmp/r/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/tmp/r/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/tmp/r/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/tmp/r/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/tmp/r/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/tmp/r/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/tmp/r/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/tmp/r/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/tmp/r/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/tmp/r/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/tmp/r/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/tmp/r/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/tmp/r/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/tmp/r/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/tmp/r/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/tmp/r/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/tmp/r/o
rg/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/tmp/r/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/tmp/r/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/tmp/r/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/tmp/r/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/tmp/r/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/tmp/r/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/tmp/r/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/tmp/r/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/tmp/r/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/tmp/r/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/tmp/r/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/tmp/r/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/tmp/r/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/tmp/r/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/tmp/r/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/tmp/r/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/tmp/r/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/tmp/r/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/tmp/r/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/tmp/r/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/tmp/r/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/tmp/r/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/tmp/r/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/tmp/r/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/tmp/r/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/tmp/r/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/tmp/r/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/tmp/r/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/tmp/r/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/tmp/r/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/tmp/r/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/tmp/r/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/tmp/r/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/tmp/r/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/tmp/r/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/tmp/r/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/tmp/r/commons-cli/commons-cli/1.4/commons-cli-1.4.jar: 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 22:40:12 22:40:12.935 [Thread-2] INFO 
org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler= 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=4.15.0-194-generic 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=jenkins 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/jenkins 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.free=252MB 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.max=4012MB 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.total=310MB 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.enableEagerACLCheck = false 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.digest.enabled = true 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.closeSessionTxn.enabled = true 22:40:12 22:40:12.935 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.flushDelay=0 22:40:12 22:40:12.936 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxWriteQueuePollTime=0 22:40:12 22:40:12.936 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxBatchSize=1000 22:40:12 22:40:12.936 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.intBufferStartingSizeBytes = 1024 22:40:12 22:40:12.937 [Thread-2] INFO org.apache.zookeeper.server.BlueThrottle - Weighed connection throttling is disabled 22:40:12 22:40:12.939 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - minSessionTimeout set to 6000 22:40:12 22:40:12.939 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - maxSessionTimeout set to 60000 22:40:12 22:40:12.942 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 22:40:12 22:40:12.942 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 
22:40:12 22:40:12.944 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotCapacity = 60 22:40:12 22:40:12.944 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotDuration = 15 22:40:12 22:40:12.944 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.maxDepth = 6 22:40:12 22:40:12.944 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.initialDelay = 5 22:40:12 22:40:12.944 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.delay = 5 22:40:12 22:40:12.945 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.enabled = false 22:40:12 22:40:12.947 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The max bytes for all large requests are set to 104857600 22:40:12 22:40:12.947 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The large request threshold is set to -1 22:40:12 22:40:12.947 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 clientPortListenBacklog -1 datadir /tmp/kafka-unit4159603097563790/version-2 snapdir /tmp/kafka-unit4159603097563790/version-2 22:40:12 22:40:12.965 [Thread-2] INFO org.apache.zookeeper.server.ServerCnxnFactory - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory 22:40:12 22:40:12.983 [Thread-2] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation 22:40:13 22:40:13.007 [Thread-2] INFO org.apache.zookeeper.Login - Server successfully logged in. 22:40:13 22:40:13.011 [Thread-2] WARN org.apache.zookeeper.server.ServerCnxnFactory - maxCnxns is not configured, using default value 0. 22:40:13 22:40:13.013 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 
22:40:13 22:40:13.027 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:36225 22:40:13 22:40:13.060 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 22:40:13 22:40:13.060 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 22:40:13 22:40:13.060 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.snapshotSizeFactor = 0.33 22:40:13 22:40:13.060 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.commitLogCount=500 22:40:13 22:40:13.067 [Thread-2] INFO org.apache.zookeeper.server.persistence.SnapStream - zookeeper.snapshot.compression.method = CHECKED 22:40:13 22:40:13.068 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit4159603097563790/version-2/snapshot.0 22:40:13 22:40:13.072 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - Snapshot loaded in 11 ms, highest zxid is 0x0, digest is 1371985504 22:40:13 22:40:13.072 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit4159603097563790/version-2/snapshot.0 22:40:13 22:40:13.072 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Snapshot taken in 0 ms 22:40:13 22:40:13.092 [Thread-2] INFO org.apache.zookeeper.server.RequestThrottler - zookeeper.request_throttler.shutdownTimeout = 10000 22:40:13 22:40:13.096 [ProcessThread(sid:0 cport:36225):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor (sid:0) started, reconfigEnabled=false 22:40:13 22:40:13.111 [Thread-2] INFO org.apache.zookeeper.server.ContainerManager - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 22:40:13 22:40:13.114 [Thread-2] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled. 
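The entries above show the disposable ZooKeeper instance the kafka-junit test harness brings up for this run: an NIO connection factory bound to an ephemeral port (36225 here), data and snapshots under a /tmp/kafka-unit* directory, and audit logging disabled. As a rough illustration only (not this project's actual test code), the curator-test library already on the test classpath can stand up a comparable throwaway server; TestingServer and getConnectString() are real curator-test names, everything else below is an assumption:

import org.apache.curator.test.TestingServer;

public class EmbeddedZkSketch {
    public static void main(String[] args) throws Exception {
        // TestingServer picks a free port and starts a single in-process ZooKeeper,
        // much like the "binding to port 0.0.0.0/0.0.0.0:36225" entry in the log above.
        try (TestingServer zk = new TestingServer()) {
            System.out.println("ZooKeeper connect string: " + zk.getConnectString());
        }
        // Closing the resource stops the server and, by default, cleans up its temporary data directory.
    }
}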
22:40:14 22:40:14.667 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: 22:40:14 advertised.listeners = SASL_PLAINTEXT://localhost:43439 22:40:14 alter.config.policy.class.name = null 22:40:14 alter.log.dirs.replication.quota.window.num = 11 22:40:14 alter.log.dirs.replication.quota.window.size.seconds = 1 22:40:14 authorizer.class.name = 22:40:14 auto.create.topics.enable = true 22:40:14 auto.leader.rebalance.enable = true 22:40:14 background.threads = 10 22:40:14 broker.heartbeat.interval.ms = 2000 22:40:14 broker.id = 1 22:40:14 broker.id.generation.enable = true 22:40:14 broker.rack = null 22:40:14 broker.session.timeout.ms = 9000 22:40:14 client.quota.callback.class = null 22:40:14 compression.type = producer 22:40:14 connection.failed.authentication.delay.ms = 100 22:40:14 connections.max.idle.ms = 600000 22:40:14 connections.max.reauth.ms = 0 22:40:14 control.plane.listener.name = null 22:40:14 controlled.shutdown.enable = true 22:40:14 controlled.shutdown.max.retries = 3 22:40:14 controlled.shutdown.retry.backoff.ms = 5000 22:40:14 controller.listener.names = null 22:40:14 controller.quorum.append.linger.ms = 25 22:40:14 controller.quorum.election.backoff.max.ms = 1000 22:40:14 controller.quorum.election.timeout.ms = 1000 22:40:14 controller.quorum.fetch.timeout.ms = 2000 22:40:14 controller.quorum.request.timeout.ms = 2000 22:40:14 controller.quorum.retry.backoff.ms = 20 22:40:14 controller.quorum.voters = [] 22:40:14 controller.quota.window.num = 11 22:40:14 controller.quota.window.size.seconds = 1 22:40:14 controller.socket.timeout.ms = 30000 22:40:14 create.topic.policy.class.name = null 22:40:14 default.replication.factor = 1 22:40:14 delegation.token.expiry.check.interval.ms = 3600000 22:40:14 delegation.token.expiry.time.ms = 86400000 22:40:14 delegation.token.master.key = null 22:40:14 delegation.token.max.lifetime.ms = 604800000 22:40:14 delegation.token.secret.key = null 22:40:14 delete.records.purgatory.purge.interval.requests = 1 22:40:14 delete.topic.enable = true 22:40:14 early.start.listeners = null 22:40:14 fetch.max.bytes = 57671680 22:40:14 fetch.purgatory.purge.interval.requests = 1000 22:40:14 group.initial.rebalance.delay.ms = 3000 22:40:14 group.max.session.timeout.ms = 1800000 22:40:14 group.max.size = 2147483647 22:40:14 group.min.session.timeout.ms = 6000 22:40:14 initial.broker.registration.timeout.ms = 60000 22:40:14 inter.broker.listener.name = null 22:40:14 inter.broker.protocol.version = 3.3-IV3 22:40:14 kafka.metrics.polling.interval.secs = 10 22:40:14 kafka.metrics.reporters = [] 22:40:14 leader.imbalance.check.interval.seconds = 300 22:40:14 leader.imbalance.per.broker.percentage = 10 22:40:14 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 22:40:14 listeners = SASL_PLAINTEXT://localhost:43439 22:40:14 log.cleaner.backoff.ms = 15000 22:40:14 log.cleaner.dedupe.buffer.size = 134217728 22:40:14 log.cleaner.delete.retention.ms = 86400000 22:40:14 log.cleaner.enable = true 22:40:14 log.cleaner.io.buffer.load.factor = 0.9 22:40:14 log.cleaner.io.buffer.size = 524288 22:40:14 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 22:40:14 log.cleaner.max.compaction.lag.ms = 9223372036854775807 22:40:14 log.cleaner.min.cleanable.ratio = 0.5 22:40:14 log.cleaner.min.compaction.lag.ms = 0 22:40:14 log.cleaner.threads = 1 22:40:14 log.cleanup.policy = [delete] 22:40:14 log.dir = /tmp/kafka-unit3067120233997490679 22:40:14 log.dirs = null 22:40:14 log.flush.interval.messages = 1 
22:40:14 log.flush.interval.ms = null 22:40:14 log.flush.offset.checkpoint.interval.ms = 60000 22:40:14 log.flush.scheduler.interval.ms = 9223372036854775807 22:40:14 log.flush.start.offset.checkpoint.interval.ms = 60000 22:40:14 log.index.interval.bytes = 4096 22:40:14 log.index.size.max.bytes = 10485760 22:40:14 log.message.downconversion.enable = true 22:40:14 log.message.format.version = 3.0-IV1 22:40:14 log.message.timestamp.difference.max.ms = 9223372036854775807 22:40:14 log.message.timestamp.type = CreateTime 22:40:14 log.preallocate = false 22:40:14 log.retention.bytes = -1 22:40:14 log.retention.check.interval.ms = 300000 22:40:14 log.retention.hours = 168 22:40:14 log.retention.minutes = null 22:40:14 log.retention.ms = null 22:40:14 log.roll.hours = 168 22:40:14 log.roll.jitter.hours = 0 22:40:14 log.roll.jitter.ms = null 22:40:14 log.roll.ms = null 22:40:14 log.segment.bytes = 1073741824 22:40:14 log.segment.delete.delay.ms = 60000 22:40:14 max.connection.creation.rate = 2147483647 22:40:14 max.connections = 2147483647 22:40:14 max.connections.per.ip = 2147483647 22:40:14 max.connections.per.ip.overrides = 22:40:14 max.incremental.fetch.session.cache.slots = 1000 22:40:14 message.max.bytes = 1048588 22:40:14 metadata.log.dir = null 22:40:14 metadata.log.max.record.bytes.between.snapshots = 20971520 22:40:14 metadata.log.segment.bytes = 1073741824 22:40:14 metadata.log.segment.min.bytes = 8388608 22:40:14 metadata.log.segment.ms = 604800000 22:40:14 metadata.max.idle.interval.ms = 500 22:40:14 metadata.max.retention.bytes = -1 22:40:14 metadata.max.retention.ms = 604800000 22:40:14 metric.reporters = [] 22:40:14 metrics.num.samples = 2 22:40:14 metrics.recording.level = INFO 22:40:14 metrics.sample.window.ms = 30000 22:40:14 min.insync.replicas = 1 22:40:14 node.id = 1 22:40:14 num.io.threads = 2 22:40:14 num.network.threads = 2 22:40:14 num.partitions = 1 22:40:14 num.recovery.threads.per.data.dir = 1 22:40:14 num.replica.alter.log.dirs.threads = null 22:40:14 num.replica.fetchers = 1 22:40:14 offset.metadata.max.bytes = 4096 22:40:14 offsets.commit.required.acks = -1 22:40:14 offsets.commit.timeout.ms = 5000 22:40:14 offsets.load.buffer.size = 5242880 22:40:14 offsets.retention.check.interval.ms = 600000 22:40:14 offsets.retention.minutes = 10080 22:40:14 offsets.topic.compression.codec = 0 22:40:14 offsets.topic.num.partitions = 50 22:40:14 offsets.topic.replication.factor = 1 22:40:14 offsets.topic.segment.bytes = 104857600 22:40:14 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 22:40:14 password.encoder.iterations = 4096 22:40:14 password.encoder.key.length = 128 22:40:14 password.encoder.keyfactory.algorithm = null 22:40:14 password.encoder.old.secret = null 22:40:14 password.encoder.secret = null 22:40:14 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 22:40:14 process.roles = [] 22:40:14 producer.purgatory.purge.interval.requests = 1000 22:40:14 queued.max.request.bytes = -1 22:40:14 queued.max.requests = 500 22:40:14 quota.window.num = 11 22:40:14 quota.window.size.seconds = 1 22:40:14 remote.log.index.file.cache.total.size.bytes = 1073741824 22:40:14 remote.log.manager.task.interval.ms = 30000 22:40:14 remote.log.manager.task.retry.backoff.max.ms = 30000 22:40:14 remote.log.manager.task.retry.backoff.ms = 500 22:40:14 remote.log.manager.task.retry.jitter = 0.2 22:40:14 remote.log.manager.thread.pool.size = 10 22:40:14 remote.log.metadata.manager.class.name = null 22:40:14 
remote.log.metadata.manager.class.path = null 22:40:14 remote.log.metadata.manager.impl.prefix = null 22:40:14 remote.log.metadata.manager.listener.name = null 22:40:14 remote.log.reader.max.pending.tasks = 100 22:40:14 remote.log.reader.threads = 10 22:40:14 remote.log.storage.manager.class.name = null 22:40:14 remote.log.storage.manager.class.path = null 22:40:14 remote.log.storage.manager.impl.prefix = null 22:40:14 remote.log.storage.system.enable = false 22:40:14 replica.fetch.backoff.ms = 1000 22:40:14 replica.fetch.max.bytes = 1048576 22:40:14 replica.fetch.min.bytes = 1 22:40:14 replica.fetch.response.max.bytes = 10485760 22:40:14 replica.fetch.wait.max.ms = 500 22:40:14 replica.high.watermark.checkpoint.interval.ms = 5000 22:40:14 replica.lag.time.max.ms = 30000 22:40:14 replica.selector.class = null 22:40:14 replica.socket.receive.buffer.bytes = 65536 22:40:14 replica.socket.timeout.ms = 30000 22:40:14 replication.quota.window.num = 11 22:40:14 replication.quota.window.size.seconds = 1 22:40:14 request.timeout.ms = 30000 22:40:14 reserved.broker.max.id = 1000 22:40:14 sasl.client.callback.handler.class = null 22:40:14 sasl.enabled.mechanisms = [PLAIN] 22:40:14 sasl.jaas.config = null 22:40:14 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:14 sasl.kerberos.min.time.before.relogin = 60000 22:40:14 sasl.kerberos.principal.to.local.rules = [DEFAULT] 22:40:14 sasl.kerberos.service.name = null 22:40:14 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:14 sasl.kerberos.ticket.renew.window.factor = 0.8 22:40:14 sasl.login.callback.handler.class = null 22:40:14 sasl.login.class = null 22:40:14 sasl.login.connect.timeout.ms = null 22:40:14 sasl.login.read.timeout.ms = null 22:40:14 sasl.login.refresh.buffer.seconds = 300 22:40:14 sasl.login.refresh.min.period.seconds = 60 22:40:14 sasl.login.refresh.window.factor = 0.8 22:40:14 sasl.login.refresh.window.jitter = 0.05 22:40:14 sasl.login.retry.backoff.max.ms = 10000 22:40:14 sasl.login.retry.backoff.ms = 100 22:40:14 sasl.mechanism.controller.protocol = GSSAPI 22:40:14 sasl.mechanism.inter.broker.protocol = PLAIN 22:40:14 sasl.oauthbearer.clock.skew.seconds = 30 22:40:14 sasl.oauthbearer.expected.audience = null 22:40:14 sasl.oauthbearer.expected.issuer = null 22:40:14 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:14 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:14 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:14 sasl.oauthbearer.jwks.endpoint.url = null 22:40:14 sasl.oauthbearer.scope.claim.name = scope 22:40:14 sasl.oauthbearer.sub.claim.name = sub 22:40:14 sasl.oauthbearer.token.endpoint.url = null 22:40:14 sasl.server.callback.handler.class = null 22:40:14 sasl.server.max.receive.size = 524288 22:40:14 security.inter.broker.protocol = SASL_PLAINTEXT 22:40:14 security.providers = null 22:40:14 socket.connection.setup.timeout.max.ms = 30000 22:40:14 socket.connection.setup.timeout.ms = 10000 22:40:14 socket.listen.backlog.size = 50 22:40:14 socket.receive.buffer.bytes = 102400 22:40:14 socket.request.max.bytes = 104857600 22:40:14 socket.send.buffer.bytes = 102400 22:40:14 ssl.cipher.suites = [] 22:40:14 ssl.client.auth = none 22:40:14 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 22:40:14 ssl.endpoint.identification.algorithm = https 22:40:14 ssl.engine.factory.class = null 22:40:14 ssl.key.password = null 22:40:14 ssl.keymanager.algorithm = SunX509 22:40:14 ssl.keystore.certificate.chain = null 22:40:14 ssl.keystore.key = null 22:40:14 ssl.keystore.location = null 22:40:14 ssl.keystore.password = 
null 22:40:14 ssl.keystore.type = JKS 22:40:14 ssl.principal.mapping.rules = DEFAULT 22:40:14 ssl.protocol = TLSv1.3 22:40:14 ssl.provider = null 22:40:14 ssl.secure.random.implementation = null 22:40:14 ssl.trustmanager.algorithm = PKIX 22:40:14 ssl.truststore.certificates = null 22:40:14 ssl.truststore.location = null 22:40:14 ssl.truststore.password = null 22:40:14 ssl.truststore.type = JKS 22:40:14 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 22:40:14 transaction.max.timeout.ms = 900000 22:40:14 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 22:40:14 transaction.state.log.load.buffer.size = 5242880 22:40:14 transaction.state.log.min.isr = 1 22:40:14 transaction.state.log.num.partitions = 4 22:40:14 transaction.state.log.replication.factor = 1 22:40:14 transaction.state.log.segment.bytes = 104857600 22:40:14 transactional.id.expiration.ms = 604800000 22:40:14 unclean.leader.election.enable = false 22:40:14 zookeeper.clientCnxnSocket = null 22:40:14 zookeeper.connect = 127.0.0.1:36225 22:40:14 zookeeper.connection.timeout.ms = null 22:40:14 zookeeper.max.in.flight.requests = 10 22:40:14 zookeeper.session.timeout.ms = 30000 22:40:14 zookeeper.set.acl = false 22:40:14 zookeeper.ssl.cipher.suites = null 22:40:14 zookeeper.ssl.client.enable = false 22:40:14 zookeeper.ssl.crl.enable = false 22:40:14 zookeeper.ssl.enabled.protocols = null 22:40:14 zookeeper.ssl.endpoint.identification.algorithm = HTTPS 22:40:14 zookeeper.ssl.keystore.location = null 22:40:14 zookeeper.ssl.keystore.password = null 22:40:14 zookeeper.ssl.keystore.type = null 22:40:14 zookeeper.ssl.ocsp.enable = false 22:40:14 zookeeper.ssl.protocol = TLSv1.2 22:40:14 zookeeper.ssl.truststore.location = null 22:40:14 zookeeper.ssl.truststore.password = null 22:40:14 zookeeper.ssl.truststore.type = null 22:40:14 22:40:14 22:40:14.751 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean 22:40:14 22:40:14.888 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 22:40:14 22:40:14.894 [main] INFO kafka.server.KafkaServer - starting 22:40:14 22:40:14.894 [main] INFO kafka.server.KafkaServer - Connecting to zookeeper on 127.0.0.1:36225 22:40:14 22:40:14.894 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 22:40:14 22:40:14.920 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:36225. 
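This KafkaConfig dump comes from the single embedded broker (broker.id = 1) that kafka-junit5 3.2.4 starts for the test run, listening on SASL_PLAINTEXT://localhost:43439 and pointed at the embedded ZooKeeper on 127.0.0.1:36225. A minimal sketch of how such a shared broker is typically registered in a JUnit 5 test follows; it is illustrative only, the class name and property values are assumptions rather than this project's real test setup, and the SASL side additionally depends on the JAAS file referenced further down in the log:

import com.salesforce.kafka.test.junit5.SharedKafkaTestResource;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

class EmbeddedKafkaSketchTest {

    // One broker on an ephemeral port, shared by every test in the class.
    @RegisterExtension
    static final SharedKafkaTestResource KAFKA = new SharedKafkaTestResource()
            .withBrokers(1)
            // Overrides mirroring entries in the config dump above.
            .withBrokerProperty("offsets.topic.replication.factor", "1")
            .withBrokerProperty("auto.create.topics.enable", "true");

    @Test
    void brokerIsReachable() {
        // In this CI run the advertised listener was SASL_PLAINTEXT://localhost:43439;
        // a plain setup like this sketch would report PLAINTEXT://localhost:<port> instead.
        System.out.println(KAFKA.getKafkaConnectString());
    }
}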
22:40:14 22:40:14.930 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 22:40:14 22:40:14.931 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=prd-ubuntu1804-builder-4c-4g-82701 22:40:14 22:40:14.932 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.16 22:40:14 22:40:14.932 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Ubuntu 22:40:14 22:40:14.932 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 22:40:14 22:40:14.932 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/classes:/tmp/r/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/tmp/r/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/tmp/r/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/tmp/r/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/tmp/r/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/tmp/r/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/tmp/r/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/tmp/r/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/tmp/r/org/json/json/20220320/json-20220320.jar:/tmp/r/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/tmp/r/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/tmp/r/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/tmp/r/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/tmp/r/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/tmp/r/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/tmp/r/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/tmp/r/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/tmp/r/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/tmp/r/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/tmp/r/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/tmp/r/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/tmp/r/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/tmp/r/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/tmp/r/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/tmp/r/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/tmp/r/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/tmp/r/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/tmp/r/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/tmp/r/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/tmp/r/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/tmp/r/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/tmp/r/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/tmp/r/org/junit/platform/ju
nit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/tmp/r/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/tmp/r/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/tmp/r/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/tmp/r/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/tmp/r/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/tmp/r/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/tmp/r/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/tmp/r/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/tmp/r/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/tmp/r/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/tmp/r/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/tmp/r/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/tmp/r/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/tmp/r/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/tmp/r/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/tmp/r/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/tmp/r/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/tmp/r/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/tmp/r/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/tmp/r/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/tmp/r/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/tmp/r/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/tmp/r/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/tmp/r/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/tmp/r/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/tmp/r/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/tmp/r/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/tmp/r/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/tmp/r/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/tmp/r/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/tmp/r/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/tmp/r/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/tmp/r/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/tmp/r/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/tmp/r/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/tmp/r/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/tmp/r/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/tmp/r/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/tmp/r/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/tmp/r/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/tmp/r/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/tmp/r/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/tmp/r/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/tmp/r/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/tmp/r/io/n
etty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/tmp/r/commons-cli/commons-cli/1.4/commons-cli-1.4.jar: 22:40:14 22:40:14.939 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 22:40:14 22:40:14.939 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp 22:40:14 22:40:14.939 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler= 22:40:14 22:40:14.940 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux 22:40:14 22:40:14.940 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 22:40:14 22:40:14.943 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-194-generic 22:40:14 22:40:14.943 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=jenkins 22:40:14 22:40:14.943 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/jenkins 22:40:14 22:40:14.944 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client 22:40:14 22:40:14.944 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=235MB 22:40:14 22:40:14.944 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=4012MB 22:40:14 22:40:14.945 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=372MB 22:40:14 22:40:14.951 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:36225 sessionTimeout=30000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@400b100d 22:40:14 22:40:14.956 [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes 22:40:14 22:40:14.967 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=false 22:40:14 22:40:14.971 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 22:40:14 22:40:14.972 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Waiting until connected. 22:40:14 22:40:14.978 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.SaslServerPrincipal - Canonicalized address to localhost 22:40:14 22:40:14.979 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: Client 22:40:14 22:40:14.980 [main-SendThread(127.0.0.1:36225)] INFO org.apache.zookeeper.Login - Client successfully logged in. 22:40:14 22:40:14.982 [main-SendThread(127.0.0.1:36225)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use DIGEST-MD5 as SASL mechanism. 22:40:14 22:40:14.993 [main-SendThread(127.0.0.1:36225)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:36225. 
22:40:14 22:40:14.993 [main-SendThread(127.0.0.1:36225)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' 22:40:14 22:40:14.996 [main-SendThread(127.0.0.1:36225)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /127.0.0.1:40122, server: localhost/127.0.0.1:36225 22:40:14 22:40:14.996 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:36225] DEBUG org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:40122 22:40:14 22:40:14.998 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:36225 22:40:15 22:40:15.007 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:40122 client's lastZxid is 0x0 22:40:15 22:40:15.009 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Adding session 0x1000001ca2b0000 22:40:15 22:40:15.009 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session: session = 0x1000001ca2b0000, zxid = 0x0, timeout = 30000, address = /127.0.0.1:40122 22:40:15 22:40:15.011 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 22:40:15 22:40:15.012 [SyncThread:0] INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1 22:40:15 22:40:15.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 22:40:15 22:40:15.020 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1, Digest in log and actual tree: 1371985504 22:40:15 22:40:15.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 22:40:15 22:40:15.028 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Established session 0x1000001ca2b0000 with negotiated timeout 30000 for client /127.0.0.1:40122 22:40:15 22:40:15.030 [main-SendThread(127.0.0.1:36225)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:36225, session id = 0x1000001ca2b0000, negotiated timeout = 30000 22:40:15 22:40:15.033 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=0 22:40:15 22:40:15.034 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:None path:null 22:40:15 22:40:15.035 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Connected. 22:40:15 22:40:15.037 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 
22:40:15 22:40:15.037 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 0 22:40:15 22:40:15.037 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 101 22:40:15 22:40:15.040 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=101) 22:40:15 22:40:15.042 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=284 22:40:15 22:40:15.043 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 22:40:15 22:40:15.044 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 284 22:40:15 22:40:15.044 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.auth.SaslServerCallbackHandler - client supplied realm: zk-sasl-md5 22:40:15 22:40:15.044 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Successfully authenticated client: authenticationID=zooclient; authorizationID=zooclient. 22:40:15 22:40:15.082 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 22:40:15 22:40:15.097 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Setting authorizedID: zooclient 22:40:15 22:40:15.098 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.ZooKeeperServer - adding SASL authorization for authorizationID: zooclient 22:40:15 22:40:15.098 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 40 22:40:15 22:40:15.101 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 22:40:15 22:40:15.101 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=40) 22:40:15 22:40:15.102 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 
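At this point the handshake implied by the earlier JaasUtils debug line completes: the ZooKeeper client authenticates with DIGEST-MD5 as user zooclient using the 'Client' login context from java.security.auth.login.config=src/test/resources/jaas.conf, while the broker itself is configured for PLAIN (sasl.enabled.mechanisms = [PLAIN] above). The actual contents of that jaas.conf are not part of this log, so the sketch below is an assumption about its typical shape, shown only as comments around the one system property the log does confirm:

public class JaasWiringSketch {
    // Assumed shape of src/test/resources/jaas.conf (NOT taken from this build):
    //
    //   Server { org.apache.zookeeper.server.auth.DigestLoginModule required
    //            user_zooclient="<secret>"; };                     // ZK server side ("Server successfully logged in.")
    //   Client { org.apache.zookeeper.server.auth.DigestLoginModule required
    //            username="zooclient" password="<secret>"; };      // ZK client, DIGEST-MD5 as in the log
    //   KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required
    //                 username="admin" password="<secret>"
    //                 user_admin="<secret>"; };                    // broker, PLAIN inter-broker mechanism
    public static void main(String[] args) {
        // The only JAAS detail the log itself confirms is this system property:
        System.setProperty("java.security.auth.login.config", "src/test/resources/jaas.conf");
        // zookeeper.sasl.clientconfig defaults to "Client", which is why that section name is used above.
    }
}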
22:40:15 22:40:15.103 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SaslAuthenticated type:None path:null 22:40:15 22:40:15.109 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.109 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.111 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.111 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.111 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.116 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 22:40:15 22:40:15.117 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 1355400778 22:40:15 22:40:15.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 22:40:15 22:40:15.120 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - consumers 22:40:15 22:40:15.121 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2, Digest in log and actual tree: 1517200766 22:40:15 22:40:15.122 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 22:40:15 22:40:15.123 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/consumers serverPath:/consumers finished:false header:: 3,1 replyHeader:: 3,2,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: '/consumers 22:40:15 22:40:15.136 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.136 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.138 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a 22:40:15 22:40:15.138 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 22:40:15 22:40:15.139 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 4,1 replyHeader:: 4,3,-101 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 22:40:15 22:40:15.143 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.143 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.143 [ProcessThread(sid:0 cport:36225):] 
DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.143 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.143 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.143 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1517200766 22:40:15 22:40:15.143 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 1635161992 22:40:15 22:40:15.144 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 22:40:15 22:40:15.145 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:15 22:40:15.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4, Digest in log and actual tree: 4442864141 22:40:15 22:40:15.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 22:40:15 22:40:15.145 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 5,1 replyHeader:: 5,4,0 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers 22:40:15 22:40:15.147 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.147 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.147 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.147 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.147 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.147 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 4442864141 22:40:15 22:40:15.148 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 5833458599 22:40:15 22:40:15.148 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 22:40:15 22:40:15.149 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:15 22:40:15.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5, Digest in log and actual tree: 6058448192 22:40:15 22:40:15.149 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 22:40:15 22:40:15.150 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: 
clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 6,1 replyHeader:: 6,5,0 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/ids 22:40:15 22:40:15.151 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.151 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.151 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.151 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.152 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.152 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 6058448192 22:40:15 22:40:15.152 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 2356635858 22:40:15 22:40:15.153 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 22:40:15 22:40:15.153 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:15 22:40:15.153 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6, Digest in log and actual tree: 3299486234 22:40:15 22:40:15.153 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 22:40:15 22:40:15.153 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 7,1 replyHeader:: 7,6,0 request:: '/brokers/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics 22:40:15 22:40:15.155 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.155 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.156 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x8 zxid:0x7 txntype:-1 reqpath:n/a 22:40:15 22:40:15.156 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 22:40:15 22:40:15.156 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 8,1 replyHeader:: 8,7,-101 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: 22:40:15 22:40:15.157 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.157 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.157 [ProcessThread(sid:0 
cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.157 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.157 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.158 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 3299486234 22:40:15 22:40:15.158 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 4832561110 22:40:15 22:40:15.159 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 22:40:15 22:40:15.159 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:15 22:40:15.159 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8, Digest in log and actual tree: 5930411719 22:40:15 22:40:15.159 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 22:40:15 22:40:15.159 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config serverPath:/config finished:false header:: 9,1 replyHeader:: 9,8,0 request:: '/config,,v{s{31,s{'world,'anyone}}},0 response:: '/config 22:40:15 22:40:15.161 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.161 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.162 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.162 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.162 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.162 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 5930411719 22:40:15 22:40:15.162 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 5302917185 22:40:15 22:40:15.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 22:40:15 22:40:15.164 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:15 22:40:15.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 9, Digest in log and actual tree: 8254912869 22:40:15 22:40:15.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 22:40:15 22:40:15.165 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: 
clientPath:/config/changes serverPath:/config/changes finished:false header:: 10,1 replyHeader:: 10,9,0 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: '/config/changes 22:40:15 22:40:15.166 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.166 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.167 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0xb zxid:0xa txntype:-1 reqpath:n/a 22:40:15 22:40:15.168 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 22:40:15 22:40:15.168 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 11,1 replyHeader:: 11,10,-101 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: 22:40:15 22:40:15.169 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.169 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.169 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.169 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.169 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.170 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 8254912869 22:40:15 22:40:15.170 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 7666541101 22:40:15 22:40:15.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 22:40:15 22:40:15.171 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 22:40:15 22:40:15.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: b, Digest in log and actual tree: 11394926260 22:40:15 22:40:15.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 22:40:15 22:40:15.171 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin serverPath:/admin finished:false header:: 12,1 replyHeader:: 12,11,0 request:: '/admin,,v{s{31,s{'world,'anyone}}},0 response:: '/admin 22:40:15 22:40:15.173 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.173 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.173 [ProcessThread(sid:0 
cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.173 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.173 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.174 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 11394926260 22:40:15 22:40:15.174 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 8599648010 22:40:15 22:40:15.205 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 22:40:15 22:40:15.205 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 22:40:15 22:40:15.206 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: c, Digest in log and actual tree: 9990058496 22:40:15 22:40:15.206 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 22:40:15 22:40:15.207 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 13,1 replyHeader:: 13,12,0 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: '/admin/delete_topics 22:40:15 22:40:15.209 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.209 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.209 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.209 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.209 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.209 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 9990058496 22:40:15 22:40:15.209 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 10681017977 22:40:15 22:40:15.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 22:40:15 22:40:15.211 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:15 22:40:15.211 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: d, Digest in log and actual tree: 14908802736 22:40:15 22:40:15.211 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 22:40:15 22:40:15.212 [main-SendThread(127.0.0.1:36225)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/seqid serverPath:/brokers/seqid finished:false header:: 14,1 replyHeader:: 14,13,0 request:: '/brokers/seqid,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/seqid 22:40:15 22:40:15.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 14908802736 22:40:15 22:40:15.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 15696117461 22:40:15 22:40:15.215 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 22:40:15 22:40:15.215 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - isr_change_notification 22:40:15 22:40:15.215 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: e, Digest in log and actual tree: 19572315082 22:40:15 22:40:15.216 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 22:40:15 22:40:15.216 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 15,1 replyHeader:: 15,14,0 request:: '/isr_change_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/isr_change_notification 22:40:15 22:40:15.217 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.217 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.218 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.218 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.218 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.218 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 19572315082 22:40:15 22:40:15.218 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 18952191129 22:40:15 22:40:15.219 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 22:40:15 22:40:15.219 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 22:40:15 22:40:15.219 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: f, Digest in log and actual tree: 21219984518 22:40:15 22:40:15.219 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 22:40:15 22:40:15.220 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 16,1 replyHeader:: 16,15,0 request:: '/latest_producer_id_block,,v{s{31,s{'world,'anyone}}},0 response:: '/latest_producer_id_block 22:40:15 22:40:15.221 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.221 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.221 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.221 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.221 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.221 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 21219984518 22:40:15 22:40:15.222 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 20397524479 22:40:15 22:40:15.223 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 22:40:15 22:40:15.223 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - log_dir_event_notification 22:40:15 22:40:15.223 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 10, Digest in log and actual tree: 22995070401 22:40:15 22:40:15.223 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 22:40:15 22:40:15.224 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 17,1 replyHeader:: 17,16,0 request:: '/log_dir_event_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/log_dir_event_notification 22:40:15 22:40:15.225 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.225 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.225 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission 
requested: 4 22:40:15 22:40:15.225 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.225 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.225 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 22995070401 22:40:15 22:40:15.225 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 23050676924 22:40:15 22:40:15.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 22:40:15 22:40:15.226 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:15 22:40:15.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 11, Digest in log and actual tree: 24387442086 22:40:15 22:40:15.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 22:40:15 22:40:15.227 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 18,1 replyHeader:: 18,17,0 request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics 22:40:15 22:40:15.228 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.228 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.228 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.228 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.229 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.229 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 24387442086 22:40:15 22:40:15.229 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 26423047107 22:40:15 22:40:15.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 22:40:15 22:40:15.230 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:15 22:40:15.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 12, Digest in log and actual tree: 27852347918 22:40:15 22:40:15.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 22:40:15 22:40:15.230 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/clients 
serverPath:/config/clients finished:false header:: 19,1 replyHeader:: 19,18,0 request:: '/config/clients,,v{s{31,s{'world,'anyone}}},0 response:: '/config/clients 22:40:15 22:40:15.231 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.231 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.232 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.232 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.232 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.232 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 27852347918 22:40:15 22:40:15.232 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 26988259663 22:40:15 22:40:15.233 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 22:40:15 22:40:15.233 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:15 22:40:15.233 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 13, Digest in log and actual tree: 30058125684 22:40:15 22:40:15.233 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 22:40:15 22:40:15.233 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 20,1 replyHeader:: 20,19,0 request:: '/config/users,,v{s{31,s{'world,'anyone}}},0 response:: '/config/users 22:40:15 22:40:15.234 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.234 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.234 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.234 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.234 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.235 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 30058125684 22:40:15 22:40:15.235 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 30718486862 22:40:15 22:40:15.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 22:40:15 
22:40:15.236 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:15 22:40:15.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 14, Digest in log and actual tree: 32366590361 22:40:15 22:40:15.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 22:40:15 22:40:15.236 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 21,1 replyHeader:: 21,20,0 request:: '/config/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/config/brokers 22:40:15 22:40:15.237 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.237 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.237 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.237 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.237 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.237 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 32366590361 22:40:15 22:40:15.237 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 32990093386 22:40:15 22:40:15.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 22:40:15 22:40:15.238 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:15 22:40:15.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 15, Digest in log and actual tree: 33147991737 22:40:15 22:40:15.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 22:40:15 22:40:15.239 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 22,1 replyHeader:: 22,21,0 request:: '/config/ips,,v{s{31,s{'world,'anyone}}},0 response:: '/config/ips 22:40:15 22:40:15.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.251 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 22:40:15 22:40:15.253 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 22:40:15 22:40:15.255 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: 
clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 23,4 replyHeader:: 23,21,-101 request:: '/cluster/id,F response:: 22:40:15 22:40:15.620 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.620 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x18 zxid:0x16 txntype:-1 reqpath:n/a 22:40:15 22:40:15.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 22:40:15 22:40:15.622 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 24,1 replyHeader:: 24,22,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a224f4235373661664a53787549726846396e5158616141227d,v{s{31,s{'world,'anyone}}},0 response:: 22:40:15 22:40:15.624 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.624 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 22:40:15.624 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.624 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.624 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.624 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 33147991737 22:40:15 22:40:15.624 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 32825135571 22:40:15 22:40:15.625 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 22:40:15 22:40:15.625 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 22:40:15 22:40:15.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 17, Digest in log and actual tree: 34593113471 22:40:15 22:40:15.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 22:40:15 22:40:15.626 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/cluster serverPath:/cluster finished:false header:: 25,1 replyHeader:: 25,23,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster 22:40:15 22:40:15.628 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.628 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:15 22:40:15 
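(Editor's note) The repeated create requests above, of the form request:: '/config/users,,v{s{31,s{'world,'anyone}}},0, are ordinary persistent znode creations with the open world:anyone ACL (permission mask 31 is the full rwcda set; the trailing 0 is the persistent create mode). A minimal sketch of the equivalent call with the ZooKeeper Java client, assuming the connect string used by this run (127.0.0.1:36225); the class name is invented:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class CreateZnodeSketch {
        public static void main(String[] args) throws Exception {
            // Connect string taken from this run's embedded ZooKeeper (cport:36225).
            ZooKeeper zk = new ZooKeeper("127.0.0.1:36225", 30000, event -> { });
            // Same shape as the logged request:: '/config/users,,v{s{31,s{'world,'anyone}}},0 :
            // empty payload, world:anyone ACL, persistent node.
            zk.create("/config/users", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            zk.close();
        }
    }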
22:40:15.628 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:15 22:40:15.628 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.628 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.628 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 34593113471 22:40:15 22:40:15.628 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 34375139419 22:40:15 22:40:15.629 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 22:40:15 22:40:15.629 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 22:40:15 22:40:15.629 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 18, Digest in log and actual tree: 37525530129 22:40:15 22:40:15.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 22:40:15 22:40:15.630 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 26,1 replyHeader:: 26,24,0 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a224f4235373661664a53787549726846396e5158616141227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id 22:40:15 22:40:15.632 [main] INFO kafka.server.KafkaServer - Cluster ID = OB576afJSxuIrhF9nQXaaA 22:40:15 22:40:15.636 [main] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir /tmp/kafka-unit3067120233997490679/meta.properties 22:40:15 22:40:15.648 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.648 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 22:40:15 22:40:15.648 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 22:40:15 22:40:15.649 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/brokers/ serverPath:/config/brokers/ finished:false header:: 27,4 replyHeader:: 27,24,-101 request:: '/config/brokers/,F response:: 22:40:15 22:40:15.699 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.700 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 22:40:15 22:40:15.700 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/config/brokers/1 22:40:15 22:40:15.700 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/brokers/1 serverPath:/config/brokers/1 finished:false header:: 28,4 replyHeader:: 28,24,-101 request:: '/config/brokers/1,F response:: 22:40:15 22:40:15.703 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: 22:40:15 advertised.listeners = SASL_PLAINTEXT://localhost:43439 22:40:15 alter.config.policy.class.name = null 22:40:15 alter.log.dirs.replication.quota.window.num = 11 22:40:15 alter.log.dirs.replication.quota.window.size.seconds = 1 22:40:15 authorizer.class.name = 22:40:15 auto.create.topics.enable = true 22:40:15 auto.leader.rebalance.enable = true 22:40:15 background.threads = 10 22:40:15 broker.heartbeat.interval.ms = 2000 22:40:15 broker.id = 1 22:40:15 broker.id.generation.enable = true 22:40:15 broker.rack = null 22:40:15 broker.session.timeout.ms = 9000 22:40:15 client.quota.callback.class = null 22:40:15 compression.type = producer 22:40:15 connection.failed.authentication.delay.ms = 100 22:40:15 connections.max.idle.ms = 600000 22:40:15 connections.max.reauth.ms = 0 22:40:15 control.plane.listener.name = null 22:40:15 controlled.shutdown.enable = true 22:40:15 controlled.shutdown.max.retries = 3 22:40:15 controlled.shutdown.retry.backoff.ms = 5000 22:40:15 controller.listener.names = null 22:40:15 controller.quorum.append.linger.ms = 25 22:40:15 controller.quorum.election.backoff.max.ms = 1000 22:40:15 controller.quorum.election.timeout.ms = 1000 22:40:15 controller.quorum.fetch.timeout.ms = 2000 22:40:15 controller.quorum.request.timeout.ms = 2000 22:40:15 controller.quorum.retry.backoff.ms = 20 22:40:15 controller.quorum.voters = [] 22:40:15 controller.quota.window.num = 11 22:40:15 controller.quota.window.size.seconds = 1 22:40:15 controller.socket.timeout.ms = 30000 22:40:15 create.topic.policy.class.name = null 22:40:15 default.replication.factor = 1 22:40:15 delegation.token.expiry.check.interval.ms = 3600000 22:40:15 delegation.token.expiry.time.ms = 86400000 22:40:15 delegation.token.master.key = null 22:40:15 delegation.token.max.lifetime.ms = 604800000 22:40:15 delegation.token.secret.key = null 22:40:15 delete.records.purgatory.purge.interval.requests = 1 22:40:15 delete.topic.enable = true 22:40:15 early.start.listeners = null 22:40:15 fetch.max.bytes = 57671680 22:40:15 fetch.purgatory.purge.interval.requests = 1000 22:40:15 group.initial.rebalance.delay.ms = 3000 22:40:15 group.max.session.timeout.ms = 1800000 22:40:15 group.max.size = 2147483647 22:40:15 group.min.session.timeout.ms = 6000 22:40:15 initial.broker.registration.timeout.ms = 60000 22:40:15 inter.broker.listener.name = null 22:40:15 inter.broker.protocol.version = 3.3-IV3 22:40:15 kafka.metrics.polling.interval.secs = 10 22:40:15 kafka.metrics.reporters = [] 22:40:15 leader.imbalance.check.interval.seconds = 300 22:40:15 leader.imbalance.per.broker.percentage = 10 22:40:15 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 22:40:15 listeners = SASL_PLAINTEXT://localhost:43439 22:40:15 log.cleaner.backoff.ms = 15000 22:40:15 log.cleaner.dedupe.buffer.size = 134217728 22:40:15 log.cleaner.delete.retention.ms = 86400000 22:40:15 log.cleaner.enable = true 22:40:15 log.cleaner.io.buffer.load.factor = 0.9 22:40:15 log.cleaner.io.buffer.size = 524288 22:40:15 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 22:40:15 
log.cleaner.max.compaction.lag.ms = 9223372036854775807 22:40:15 log.cleaner.min.cleanable.ratio = 0.5 22:40:15 log.cleaner.min.compaction.lag.ms = 0 22:40:15 log.cleaner.threads = 1 22:40:15 log.cleanup.policy = [delete] 22:40:15 log.dir = /tmp/kafka-unit3067120233997490679 22:40:15 log.dirs = null 22:40:15 log.flush.interval.messages = 1 22:40:15 log.flush.interval.ms = null 22:40:15 log.flush.offset.checkpoint.interval.ms = 60000 22:40:15 log.flush.scheduler.interval.ms = 9223372036854775807 22:40:15 log.flush.start.offset.checkpoint.interval.ms = 60000 22:40:15 log.index.interval.bytes = 4096 22:40:15 log.index.size.max.bytes = 10485760 22:40:15 log.message.downconversion.enable = true 22:40:15 log.message.format.version = 3.0-IV1 22:40:15 log.message.timestamp.difference.max.ms = 9223372036854775807 22:40:15 log.message.timestamp.type = CreateTime 22:40:15 log.preallocate = false 22:40:15 log.retention.bytes = -1 22:40:15 log.retention.check.interval.ms = 300000 22:40:15 log.retention.hours = 168 22:40:15 log.retention.minutes = null 22:40:15 log.retention.ms = null 22:40:15 log.roll.hours = 168 22:40:15 log.roll.jitter.hours = 0 22:40:15 log.roll.jitter.ms = null 22:40:15 log.roll.ms = null 22:40:15 log.segment.bytes = 1073741824 22:40:15 log.segment.delete.delay.ms = 60000 22:40:15 max.connection.creation.rate = 2147483647 22:40:15 max.connections = 2147483647 22:40:15 max.connections.per.ip = 2147483647 22:40:15 max.connections.per.ip.overrides = 22:40:15 max.incremental.fetch.session.cache.slots = 1000 22:40:15 message.max.bytes = 1048588 22:40:15 metadata.log.dir = null 22:40:15 metadata.log.max.record.bytes.between.snapshots = 20971520 22:40:15 metadata.log.segment.bytes = 1073741824 22:40:15 metadata.log.segment.min.bytes = 8388608 22:40:15 metadata.log.segment.ms = 604800000 22:40:15 metadata.max.idle.interval.ms = 500 22:40:15 metadata.max.retention.bytes = -1 22:40:15 metadata.max.retention.ms = 604800000 22:40:15 metric.reporters = [] 22:40:15 metrics.num.samples = 2 22:40:15 metrics.recording.level = INFO 22:40:15 metrics.sample.window.ms = 30000 22:40:15 min.insync.replicas = 1 22:40:15 node.id = 1 22:40:15 num.io.threads = 2 22:40:15 num.network.threads = 2 22:40:15 num.partitions = 1 22:40:15 num.recovery.threads.per.data.dir = 1 22:40:15 num.replica.alter.log.dirs.threads = null 22:40:15 num.replica.fetchers = 1 22:40:15 offset.metadata.max.bytes = 4096 22:40:15 offsets.commit.required.acks = -1 22:40:15 offsets.commit.timeout.ms = 5000 22:40:15 offsets.load.buffer.size = 5242880 22:40:15 offsets.retention.check.interval.ms = 600000 22:40:15 offsets.retention.minutes = 10080 22:40:15 offsets.topic.compression.codec = 0 22:40:15 offsets.topic.num.partitions = 50 22:40:15 offsets.topic.replication.factor = 1 22:40:15 offsets.topic.segment.bytes = 104857600 22:40:15 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 22:40:15 password.encoder.iterations = 4096 22:40:15 password.encoder.key.length = 128 22:40:15 password.encoder.keyfactory.algorithm = null 22:40:15 password.encoder.old.secret = null 22:40:15 password.encoder.secret = null 22:40:15 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 22:40:15 process.roles = [] 22:40:15 producer.purgatory.purge.interval.requests = 1000 22:40:15 queued.max.request.bytes = -1 22:40:15 queued.max.requests = 500 22:40:15 quota.window.num = 11 22:40:15 quota.window.size.seconds = 1 22:40:15 remote.log.index.file.cache.total.size.bytes = 1073741824 22:40:15 
remote.log.manager.task.interval.ms = 30000 22:40:15 remote.log.manager.task.retry.backoff.max.ms = 30000 22:40:15 remote.log.manager.task.retry.backoff.ms = 500 22:40:15 remote.log.manager.task.retry.jitter = 0.2 22:40:15 remote.log.manager.thread.pool.size = 10 22:40:15 remote.log.metadata.manager.class.name = null 22:40:15 remote.log.metadata.manager.class.path = null 22:40:15 remote.log.metadata.manager.impl.prefix = null 22:40:15 remote.log.metadata.manager.listener.name = null 22:40:15 remote.log.reader.max.pending.tasks = 100 22:40:15 remote.log.reader.threads = 10 22:40:15 remote.log.storage.manager.class.name = null 22:40:15 remote.log.storage.manager.class.path = null 22:40:15 remote.log.storage.manager.impl.prefix = null 22:40:15 remote.log.storage.system.enable = false 22:40:15 replica.fetch.backoff.ms = 1000 22:40:15 replica.fetch.max.bytes = 1048576 22:40:15 replica.fetch.min.bytes = 1 22:40:15 replica.fetch.response.max.bytes = 10485760 22:40:15 replica.fetch.wait.max.ms = 500 22:40:15 replica.high.watermark.checkpoint.interval.ms = 5000 22:40:15 replica.lag.time.max.ms = 30000 22:40:15 replica.selector.class = null 22:40:15 replica.socket.receive.buffer.bytes = 65536 22:40:15 replica.socket.timeout.ms = 30000 22:40:15 replication.quota.window.num = 11 22:40:15 replication.quota.window.size.seconds = 1 22:40:15 request.timeout.ms = 30000 22:40:15 reserved.broker.max.id = 1000 22:40:15 sasl.client.callback.handler.class = null 22:40:15 sasl.enabled.mechanisms = [PLAIN] 22:40:15 sasl.jaas.config = null 22:40:15 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:15 sasl.kerberos.min.time.before.relogin = 60000 22:40:15 sasl.kerberos.principal.to.local.rules = [DEFAULT] 22:40:15 sasl.kerberos.service.name = null 22:40:15 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:15 sasl.kerberos.ticket.renew.window.factor = 0.8 22:40:15 sasl.login.callback.handler.class = null 22:40:15 sasl.login.class = null 22:40:15 sasl.login.connect.timeout.ms = null 22:40:15 sasl.login.read.timeout.ms = null 22:40:15 sasl.login.refresh.buffer.seconds = 300 22:40:15 sasl.login.refresh.min.period.seconds = 60 22:40:15 sasl.login.refresh.window.factor = 0.8 22:40:15 sasl.login.refresh.window.jitter = 0.05 22:40:15 sasl.login.retry.backoff.max.ms = 10000 22:40:15 sasl.login.retry.backoff.ms = 100 22:40:15 sasl.mechanism.controller.protocol = GSSAPI 22:40:15 sasl.mechanism.inter.broker.protocol = PLAIN 22:40:15 sasl.oauthbearer.clock.skew.seconds = 30 22:40:15 sasl.oauthbearer.expected.audience = null 22:40:15 sasl.oauthbearer.expected.issuer = null 22:40:15 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:15 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:15 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:15 sasl.oauthbearer.jwks.endpoint.url = null 22:40:15 sasl.oauthbearer.scope.claim.name = scope 22:40:15 sasl.oauthbearer.sub.claim.name = sub 22:40:15 sasl.oauthbearer.token.endpoint.url = null 22:40:15 sasl.server.callback.handler.class = null 22:40:15 sasl.server.max.receive.size = 524288 22:40:15 security.inter.broker.protocol = SASL_PLAINTEXT 22:40:15 security.providers = null 22:40:15 socket.connection.setup.timeout.max.ms = 30000 22:40:15 socket.connection.setup.timeout.ms = 10000 22:40:15 socket.listen.backlog.size = 50 22:40:15 socket.receive.buffer.bytes = 102400 22:40:15 socket.request.max.bytes = 104857600 22:40:15 socket.send.buffer.bytes = 102400 22:40:15 ssl.cipher.suites = [] 22:40:15 ssl.client.auth = none 22:40:15 ssl.enabled.protocols = [TLSv1.2, 
TLSv1.3] 22:40:15 ssl.endpoint.identification.algorithm = https 22:40:15 ssl.engine.factory.class = null 22:40:15 ssl.key.password = null 22:40:15 ssl.keymanager.algorithm = SunX509 22:40:15 ssl.keystore.certificate.chain = null 22:40:15 ssl.keystore.key = null 22:40:15 ssl.keystore.location = null 22:40:15 ssl.keystore.password = null 22:40:15 ssl.keystore.type = JKS 22:40:15 ssl.principal.mapping.rules = DEFAULT 22:40:15 ssl.protocol = TLSv1.3 22:40:15 ssl.provider = null 22:40:15 ssl.secure.random.implementation = null 22:40:15 ssl.trustmanager.algorithm = PKIX 22:40:15 ssl.truststore.certificates = null 22:40:15 ssl.truststore.location = null 22:40:15 ssl.truststore.password = null 22:40:15 ssl.truststore.type = JKS 22:40:15 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 22:40:15 transaction.max.timeout.ms = 900000 22:40:15 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 22:40:15 transaction.state.log.load.buffer.size = 5242880 22:40:15 transaction.state.log.min.isr = 1 22:40:15 transaction.state.log.num.partitions = 4 22:40:15 transaction.state.log.replication.factor = 1 22:40:15 transaction.state.log.segment.bytes = 104857600 22:40:15 transactional.id.expiration.ms = 604800000 22:40:15 unclean.leader.election.enable = false 22:40:15 zookeeper.clientCnxnSocket = null 22:40:15 zookeeper.connect = 127.0.0.1:36225 22:40:15 zookeeper.connection.timeout.ms = null 22:40:15 zookeeper.max.in.flight.requests = 10 22:40:15 zookeeper.session.timeout.ms = 30000 22:40:15 zookeeper.set.acl = false 22:40:15 zookeeper.ssl.cipher.suites = null 22:40:15 zookeeper.ssl.client.enable = false 22:40:15 zookeeper.ssl.crl.enable = false 22:40:15 zookeeper.ssl.enabled.protocols = null 22:40:15 zookeeper.ssl.endpoint.identification.algorithm = HTTPS 22:40:15 zookeeper.ssl.keystore.location = null 22:40:15 zookeeper.ssl.keystore.password = null 22:40:15 zookeeper.ssl.keystore.type = null 22:40:15 zookeeper.ssl.ocsp.enable = false 22:40:15 zookeeper.ssl.protocol = TLSv1.2 22:40:15 zookeeper.ssl.truststore.location = null 22:40:15 zookeeper.ssl.truststore.password = null 22:40:15 zookeeper.ssl.truststore.type = null 22:40:15 22:40:15 22:40:15.706 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 
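(Editor's note) The KafkaConfig dump above describes the embedded test broker for this build: broker 1, a single SASL_PLAINTEXT listener on localhost:43439, log dir under /tmp/kafka-unit..., and the in-process ZooKeeper at 127.0.0.1:36225. A sketch of a representative subset of those values collected into the java.util.Properties a test harness would hand to the broker; every value is read straight off the dump, only the class and variable names are invented:

    import java.util.Properties;

    public class TestBrokerProps {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put("broker.id", "1");
            p.put("listeners", "SASL_PLAINTEXT://localhost:43439");
            p.put("advertised.listeners", "SASL_PLAINTEXT://localhost:43439");
            p.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
            p.put("sasl.enabled.mechanisms", "PLAIN");
            p.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
            p.put("zookeeper.connect", "127.0.0.1:36225");
            p.put("zookeeper.session.timeout.ms", "30000");
            p.put("log.dir", "/tmp/kafka-unit3067120233997490679");
            p.put("log.flush.interval.messages", "1");
            p.put("num.partitions", "1");
            p.put("offsets.topic.replication.factor", "1");
            p.put("transaction.state.log.replication.factor", "1");
            p.put("transaction.state.log.min.isr", "1");
            p.put("group.initial.rebalance.delay.ms", "3000");
            p.list(System.out);  // dump the collected settings for inspection
        }
    }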
22:40:15 22:40:15.768 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Starting 22:40:15 22:40:15.771 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Starting 22:40:15 22:40:15.777 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Starting 22:40:15 22:40:15.777 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Starting 22:40:15 22:40:15.820 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:15 22:40:15.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:15 22:40:15.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:15 22:40:15.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:15 ] 22:40:15 22:40:15.820 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:15 , 'ip,'127.0.0.1 22:40:15 ] 22:40:15 22:40:15.823 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 29,12 replyHeader:: 29,24,0 request:: '/brokers/topics,F response:: v{},s{6,6,1731192015151,1731192015151,0,0,0,0,0,0,6} 22:40:15 22:40:15.826 [main] INFO kafka.log.LogManager - Loading logs from log dirs ArraySeq(/tmp/kafka-unit3067120233997490679) 22:40:15 22:40:15.830 [main] INFO kafka.log.LogManager - Attempting recovery for all logs in /tmp/kafka-unit3067120233997490679 since no clean shutdown file was found 22:40:15 22:40:15.835 [main] DEBUG kafka.log.LogManager - Adding log recovery metrics 22:40:15 22:40:15.839 [main] DEBUG kafka.log.LogManager - Removing log recovery metrics 22:40:15 22:40:15.842 [main] INFO kafka.log.LogManager - Loaded 0 logs in 16ms. 22:40:15 22:40:15.842 [main] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 22:40:15 22:40:15.844 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 22:40:15 22:40:15.845 [main] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 22:40:15 22:40:15.845 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 22:40:15 22:40:15.845 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 22:40:15 22:40:15.846 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 
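(Editor's note) The payload written to /cluster/id earlier (the #7b22... blob in the create request) is hex-encoded JSON; decoding it gives exactly the "Cluster ID = OB576afJSxuIrhF9nQXaaA" value the broker then logs. A small standalone check (class name invented):

    public class ClusterIdDecode {
        public static void main(String[] args) {
            // Hex payload copied from the ZooKeeper create request for /cluster/id above.
            String hex = "7b2276657273696f6e223a2231222c226964223a224f4235373661664a53787549726846396e5158616141227d";
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            // Prints {"version":"1","id":"OB576afJSxuIrhF9nQXaaA"}, matching the "Cluster ID =" INFO line.
            System.out.println(new String(bytes, java.nio.charset.StandardCharsets.UTF_8));
        }
    }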
22:40:15 22:40:15.846 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period -1 ms. 22:40:15 22:40:15.860 [main] INFO kafka.log.LogCleaner - Starting the log cleaner 22:40:15 22:40:15.909 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 22:40:15 22:40:15.932 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Starting 22:40:15 22:40:15.938 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:15 22:40:15.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:15 22:40:15.940 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 30,3 replyHeader:: 30,24,-101 request:: '/feature,T response:: 22:40:15 22:40:15.946 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 22:40:15 22:40:15.947 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:15 22:40:15.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:15 22:40:15.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:15 22:40:15.948 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 31,4 replyHeader:: 31,24,-101 request:: '/feature,T response:: 22:40:15 22:40:15.952 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node at path: /feature does not exist 22:40:15 22:40:15.975 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
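(Editor's note) The "Successfully logged in" line from AbstractLogin is the broker completing its own SASL/PLAIN login for the SASL_PLAINTEXT listener configured above. How the credentials reach the broker is not visible in this log; one common way, sketched here with invented username/password values, is the per-listener JAAS property:

    import java.util.Properties;

    public class SaslPlainJaasSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            // Listener and mechanism names come from the KafkaConfig dump above;
            // the admin/admin-secret credentials are placeholders for illustration only.
            p.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
                  "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\" user_admin=\"admin-secret\";");
            p.list(System.out);
        }
    }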
22:40:16 22:40:16.008 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Starting 22:40:16 22:40:16.009 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.011 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.124 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.125 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.225 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.226 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.326 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.327 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.427 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.428 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.528 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.528 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager 
broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.614 [main] INFO kafka.network.ConnectionQuotas - Updated connection-accept-rate max connection creation rate to 2147483647 22:40:16 22:40:16.618 [main] INFO kafka.network.DataPlaneAcceptor - Awaiting socket connections on localhost:43439. 22:40:16 22:40:16.629 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.629 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.707 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(SASL_PLAINTEXT) 22:40:16 22:40:16.746 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.746 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.748 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting 22:40:16 22:40:16.748 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.748 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.787 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Starting 22:40:16 22:40:16.791 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Starting 22:40:16 22:40:16.798 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Starting 22:40:16 22:40:16.801 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Starting 22:40:16 22:40:16.814 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 15000 ms. 22:40:16 22:40:16.815 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task shutdown-idle-replica-alter-log-dirs-thread with initial delay 0 ms and period 10000 ms. 
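(Editor's note) In the lines that follow, the broker registers itself under /brokers/ids/1 with its SASL_PLAINTEXT endpoint. A sketch of reading that registration back with the plain ZooKeeper client, assuming the connect string and session timeout from the config dump; the znode layout is the standard Kafka one seen in this log:

    import org.apache.zookeeper.ZooKeeper;

    public class BrokerRegistrationCheck {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("127.0.0.1:36225", 30000, event -> { });
            // /brokers/ids holds one child per live broker; the child's payload is the
            // JSON endpoint record that KafkaZkClient logs when it registers broker 1.
            System.out.println(zk.getChildren("/brokers/ids", false));
            System.out.println(new String(zk.getData("/brokers/ids/1", false, null),
                                          java.nio.charset.StandardCharsets.UTF_8));
            zk.close();
        }
    }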
22:40:16 22:40:16.821 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:16 22:40:16.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 22:40:16 22:40:16.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 22:40:16 22:40:16.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:16 22:40:16.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:16 ] 22:40:16 22:40:16.821 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:16 , 'ip,'127.0.0.1 22:40:16 ] 22:40:16 22:40:16.822 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 32,12 replyHeader:: 32,24,0 request:: '/brokers/ids,F response:: v{},s{5,5,1731192015147,1731192015147,0,0,0,0,0,0,5} 22:40:16 22:40:16.825 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Starting 22:40:16 22:40:16.847 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.848 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.849 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.850 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.860 [main] INFO kafka.zk.KafkaZkClient - Creating /brokers/ids/1 (is it secure? 
false) 22:40:16 22:40:16.873 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:16 22:40:16.874 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:16 22:40:16 22:40:16.874 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:16 22:40:16.874 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:16 ] 22:40:16 22:40:16.874 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:16 , 'ip,'127.0.0.1 22:40:16 ] 22:40:16 22:40:16.875 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 37525530129 22:40:16 22:40:16.875 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 37304107992 22:40:16 22:40:16.876 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:16 22:40:16.877 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 22:40:16 22:40:16.877 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:16 ] 22:40:16 22:40:16.877 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:16 , 'ip,'127.0.0.1 22:40:16 ] 22:40:16 22:40:16.878 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 41355073538 22:40:16 22:40:16.879 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 38939314611 22:40:16 22:40:16.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 22:40:16 22:40:16.884 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:16 22:40:16.884 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:16 22:40:16.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 19, Digest in log and actual tree: 38939314611 22:40:16 22:40:16.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 22:40:16 22:40:16.885 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 33,14 replyHeader:: 33,25,0 request:: org.apache.zookeeper.MultiOperationRecord@ffaeb3a8 response:: org.apache.zookeeper.MultiResponse@1dbbce85 22:40:16 22:40:16.889 [main] INFO kafka.zk.KafkaZkClient - Stat of the created znode at /brokers/ids/1 is: 25,25,1731192016873,1731192016873,1,0,0,72057601724710912,209,0,25 22:40:16 22:40:16 22:40:16.889 [main] INFO kafka.zk.KafkaZkClient - Registered broker 1 at path /brokers/ids/1 with addresses: SASL_PLAINTEXT://localhost:43439, czxid (broker epoch): 25 22:40:16 22:40:16.949 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.949 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.950 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:16 22:40:16.950 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:16 22:40:16.993 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Starting 22:40:17 22:40:17.012 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Starting 22:40:17 22:40:17.012 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.013 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Starting 22:40:17 22:40:17.019 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Starting 22:40:17 22:40:17.019 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 22:40:17 22:40:17.020 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 22:40:17 22:40:17.020 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 34,3 replyHeader:: 34,25,-101 request:: '/controller,T response:: 22:40:17 22:40:17.022 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 22:40:17 22:40:17.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 22:40:17 22:40:17.023 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 35,4 replyHeader:: 35,25,-101 request:: '/controller,T response:: 22:40:17 22:40:17.030 [ProcessThread(sid:0 
cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 22:40:17 22:40:17.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 22:40:17 22:40:17.031 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 36,4 replyHeader:: 36,25,-101 request:: '/controller_epoch,F response:: 22:40:17 22:40:17.034 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.035 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:17 22:40:17 22:40:17.035 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:17 22:40:17.035 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.035 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.035 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 38939314611 22:40:17 22:40:17.035 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 42687771912 22:40:17 22:40:17.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 22:40:17 22:40:17.036 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 22:40:17 22:40:17.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1a, Digest in log and actual tree: 42842500021 22:40:17 22:40:17.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 22:40:17 22:40:17.037 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 37,1 replyHeader:: 37,26,0 request:: '/controller_epoch,#30,v{s{31,s{'world,'anyone}}},0 response:: '/controller_epoch 22:40:17 22:40:17.039 [controller-event-thread] INFO kafka.zk.KafkaZkClient - Successfully created /controller_epoch with initial epoch 0 22:40:17 22:40:17.040 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Try to create /controller and increment controller epoch to 1 with expected controller epoch zkVersion 0 22:40:17 22:40:17.042 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.042 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 
31,s{'world,'anyone} 22:40:17 22:40:17 22:40:17.043 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:17 22:40:17.043 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.043 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.043 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 42842500021 22:40:17 22:40:17.043 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 42627134182 22:40:17 22:40:17.043 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.047 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 22:40:17 22:40:17.047 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.047 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.047 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 46477109360 22:40:17 22:40:17.047 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 47376413898 22:40:17 22:40:17.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 22:40:17 22:40:17.048 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 22:40:17 22:40:17.050 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.050 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.051 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 22:40:17 22:40:17.051 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1b, Digest in log and actual tree: 47376413898 22:40:17 22:40:17.051 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 22:40:17 22:40:17.051 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001ca2b0000 22:40:17 22:40:17.051 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/controller for session id 0x1000001ca2b0000 22:40:17 22:40:17.051 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 
0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 38,14 replyHeader:: 38,27,0 request:: org.apache.zookeeper.MultiOperationRecord@689bc527 response:: org.apache.zookeeper.MultiResponse@f3584fa6 22:40:17 22:40:17.052 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.052 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.052 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/controller 22:40:17 22:40:17.054 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 22:40:17 22:40:17.056 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.056 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:17 22:40:17.056 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:17 22:40:17.056 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 39,4 replyHeader:: 39,27,-101 request:: '/feature,T response:: 22:40:17 22:40:17.059 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) 22:40:17 22:40:17.060 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.060 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:17 22:40:17 22:40:17.060 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:17 22:40:17.061 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.061 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.061 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 47376413898 22:40:17 22:40:17.061 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 44563899668 22:40:17 22:40:17.061 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x28 zxid:0x1c txntype:1 
reqpath:n/a 22:40:17 22:40:17.062 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - feature 22:40:17 22:40:17.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1c, Digest in log and actual tree: 48020535595 22:40:17 22:40:17.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 22:40:17 22:40:17.062 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001ca2b0000 22:40:17 22:40:17.062 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/feature for session id 0x1000001ca2b0000 22:40:17 22:40:17.062 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 40,1 replyHeader:: 40,28,0 request:: '/feature,#7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,v{s{31,s{'world,'anyone}}},0 response:: '/feature 22:40:17 22:40:17.062 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/feature 22:40:17 22:40:17.063 [main-EventThread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node created at path: /feature 22:40:17 22:40:17.063 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 22:40:17 22:40:17.063 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:17 22:40:17.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:17 22:40:17.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.064 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 41,4 replyHeader:: 41,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1731192017060,1731192017060,0,0,0,0,38,0,28} 22:40:17 22:40:17.064 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:17 22:40:17.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData 
cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 22:40:17 22:40:17.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.065 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 42,4 replyHeader:: 42,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1731192017060,1731192017060,0,0,0,0,38,0,28} 22:40:17 22:40:17.071 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Starting up. 22:40:17 22:40:17.074 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:17 22:40:17.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:17 22:40:17.075 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 43,4 replyHeader:: 43,28,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 22:40:17 22:40:17.078 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 22:40:17 22:40:17.078 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 22:40:17 22:40:17.083 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Startup complete. 22:40:17 22:40:17.121 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Starting up. 22:40:17 22:40:17.128 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 22:40:17 22:40:17.127 [feature-zk-node-event-process-thread] INFO kafka.server.metadata.ZkMetadataCache - [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 22:40:17 22:40:17.127 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Registering handlers 22:40:17 22:40:17.129 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 10000 ms and period 10000 ms. 
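Note: the ZooKeeper client entries above print znode payloads as raw hex (for example the /feature node's #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d). Purely as an illustration (not part of this build), a small Java helper can turn such a dump back into the UTF-8 text it encodes:

    import java.nio.charset.StandardCharsets;

    public class ZkHexDump {
        // Decode a ZooKeeper packet payload logged as hex (leading '#' stripped) into text.
        static String decode(String hex) {
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return new String(bytes, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            // Payload of /feature exactly as logged above.
            System.out.println(decode(
                "7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d"));
            // Prints: {"features":{},"version":2,"status":1}
        }
    }

Decoding that payload yields {"features":{},"version":2,"status":1}, consistent with the FeatureZNode(2,Enabled,Map()) the controller reported creating at /feature.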
22:40:17 22:40:17.131 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 22:40:17 22:40:17.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 22:40:17 22:40:17.132 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.132 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 44,3 replyHeader:: 44,28,-101 request:: '/admin/preferred_replica_election,T response:: 22:40:17 22:40:17.132 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 22:40:17 22:40:17.132 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 22:40:17 22:40:17.133 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__transaction_state serverPath:/brokers/topics/__transaction_state finished:false header:: 45,4 replyHeader:: 45,28,-101 request:: '/brokers/topics/__transaction_state,F response:: 22:40:17 22:40:17.133 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.133 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 22:40:17 22:40:17.133 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 22:40:17 22:40:17.133 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 46,3 replyHeader:: 46,28,-101 request:: '/admin/reassign_partitions,T response:: 22:40:17 22:40:17.134 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting log dir event notifications 22:40:17 22:40:17.134 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 
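Note: the type:exists and type:getChildren2 requests above are the controller probing the usual admin paths (/admin/reassign_partitions, /log_dir_event_notification, /isr_change_notification, ...) through the standard ZooKeeper Java client. A minimal sketch of the same kind of probe against the embedded server used in this log (127.0.0.1:36225); the SASL login the test client performs ('zooclient) is omitted here, so this is illustrative only:

    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ZkProbe {
        public static void main(String[] args) throws Exception {
            // The embedded test ZooKeeper in this log listens on 127.0.0.1:36225.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:36225", 30000, event -> { });

            // Corresponds to the type:exists probe issued for /admin/reassign_partitions above.
            Stat stat = zk.exists("/admin/reassign_partitions", false);
            System.out.println("reassignment znode present: " + (stat != null));

            // Corresponds to the type:getChildren2 probe issued for /isr_change_notification above.
            System.out.println("isr notifications: " + zk.getChildren("/isr_change_notification", false));

            zk.close();
        }
    }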
22:40:17 22:40:17.134 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 22:40:17 22:40:17.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 22:40:17 22:40:17.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.135 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 47,12 replyHeader:: 47,28,0 request:: '/log_dir_event_notification,T response:: v{},s{16,16,1731192015221,1731192015221,0,0,0,0,0,0,16} 22:40:17 22:40:17.137 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting isr change notifications 22:40:17 22:40:17.137 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.137 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 22:40:17 22:40:17.137 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 22:40:17 22:40:17.137 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.137 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.138 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.138 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 48,12 replyHeader:: 48,28,0 request:: '/isr_change_notification,T response:: v{},s{14,14,1731192015213,1731192015213,0,0,0,0,0,0,14} 22:40:17 22:40:17.139 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing controller context 22:40:17 22:40:17.139 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 22:40:17 22:40:17.139 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 22:40:17 22:40:17.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.140 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Startup complete. 22:40:17 22:40:17.140 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 49,12 replyHeader:: 49,28,0 request:: '/brokers/ids,T response:: v{'1},s{5,5,1731192015147,1731192015147,0,1,0,0,0,1,25} 22:40:17 22:40:17.141 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.141 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 22:40:17 22:40:17.141 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 22:40:17 22:40:17.141 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.141 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.141 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.142 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 50,4 replyHeader:: 50,28,0 request:: '/brokers/ids/1,F response:: #7b226665617475726573223a7b7d2c226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b225341534c5f504c41494e54455854223a225341534c5f504c41494e54455854227d2c22656e64706f696e7473223a5b225341534c5f504c41494e544558543a2f2f6c6f63616c686f73743a3433343339225d2c226a6d785f706f7274223a2d312c22706f7274223a2d312c22686f7374223a6e756c6c2c2276657273696f6e223a352c2274696d657374616d70223a2231373331313932303136383333227d,s{25,25,1731192016873,1731192016873,1,0,0,72057601724710912,209,0,25} 22:40:17 22:40:17.145 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Starting 22:40:17 22:40:17.151 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.151 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata 
cache, retrying after backoff 22:40:17 22:40:17.153 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.153 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.177 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 25) 22:40:17 22:40:17.178 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:17 22:40:17.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:17 22:40:17.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.179 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.179 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 51,12 replyHeader:: 51,28,0 request:: '/brokers/topics,T response:: v{},s{6,6,1731192015151,1731192015151,0,0,0,0,0,0,6} 22:40:17 22:40:17.192 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Register BrokerModifications handler for Set(1) 22:40:17 22:40:17.194 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 22:40:17 22:40:17.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 22:40:17 22:40:17.195 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 52,3 replyHeader:: 52,28,0 request:: '/brokers/ids/1,T response:: s{25,25,1731192016873,1731192016873,1,0,0,72057601724710912,209,0,25} 22:40:17 22:40:17.199 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 22:40:17 22:40:17.225 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently active brokers in the 
cluster: Set(1) 22:40:17 22:40:17.225 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently shutting brokers in the cluster: HashSet() 22:40:17 22:40:17.225 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Current list of topics in the cluster: HashSet() 22:40:17 22:40:17.225 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Fetching topic deletions in progress 22:40:17 22:40:17.227 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Starting 22:40:17 22:40:17.227 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.227 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 22:40:17 22:40:17.227 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 22:40:17 22:40:17.227 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.227 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.227 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.228 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 53,12 replyHeader:: 53,28,0 request:: '/admin/delete_topics,T response:: v{},s{12,12,1731192015173,1731192015173,0,0,0,0,0,0,12} 22:40:17 22:40:17.230 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics to be deleted: 22:40:17 22:40:17.230 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics ineligible for deletion: 22:40:17 22:40:17.230 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing topic deletion manager 22:40:17 22:40:17.230 [controller-event-thread] INFO kafka.controller.TopicDeletionManager - [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() 22:40:17 22:40:17.231 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Sending update metadata request 22:40:17 22:40:17.242 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions 22:40:17 22:40:17.254 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:17 22:40:17.254 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:17 22:40:17.255 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.255 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.256 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.256 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.262 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Initializing replica state 22:40:17 22:40:17.262 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering online replica state changes 22:40:17 22:40:17.275 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering offline replica state changes 22:40:17 22:40:17.275 [controller-event-thread] DEBUG kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() 22:40:17 22:40:17.275 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Initializing partition state 22:40:17 22:40:17.276 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Triggering online partition state changes 22:40:17 22:40:17.276 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:17 22:40:17.277 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:17 22:40:17.285 [controller-event-thread] DEBUG kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 22:40:17 22:40:17.285 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Ready to serve as the new controller with epoch 1 22:40:17 22:40:17.287 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 22:40:17 22:40:17.290 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.291 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 22:40:17 
22:40:17.291 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 22:40:17 22:40:17.291 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 54,3 replyHeader:: 54,28,-101 request:: '/admin/reassign_partitions,T response:: 22:40:17 22:40:17.305 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.305 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 22:40:17 22:40:17.305 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 22:40:17 22:40:17.306 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 55,4 replyHeader:: 55,28,-101 request:: '/admin/preferred_replica_election,T response:: 22:40:17 22:40:17.307 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions undergoing preferred replica election: 22:40:17 22:40:17.308 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions that completed preferred replica election: 22:40:17 22:40:17.308 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: 22:40:17 22:40:17.308 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Resuming preferred replica election for partitions: 22:40:17 22:40:17.310 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered 22:40:17 22:40:17.333 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Starting 22:40:17 22:40:17.335 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.335 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.335 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.335 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.337 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 48020535595 22:40:17 22:40:17.337 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.337 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 8 22:40:17 22:40:17.337 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.337 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.338 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 48020535595 22:40:17 22:40:17.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x38 zxid:0x1d txntype:14 reqpath:n/a 22:40:17 22:40:17.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 22:40:17 22:40:17.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: 14 : error: -101 22:40:17 22:40:17.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1d, Digest in log and actual tree: 48020535595 22:40:17 22:40:17.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x38 zxid:0x1d txntype:14 reqpath:n/a 22:40:17 22:40:17.347 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 56,14 replyHeader:: 56,29,0 request:: org.apache.zookeeper.MultiOperationRecord@228011e8 response:: org.apache.zookeeper.MultiResponse@441 22:40:17 22:40:17.353 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:17 22:40:17.355 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting the controller scheduler 22:40:17 22:40:17.355 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 22:40:17 22:40:17.355 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 5000 ms and period -1000 ms. 
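Note: at this point broker 1 has been elected controller (epoch 1) and the test broker is serving SASL_PLAINTEXT on localhost:43439. Purely as a sketch, the active controller could be confirmed from outside through the public Admin API; the PLAIN credentials below are placeholders, since the actual username/password are not shown in this log:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.Node;

    public class ControllerCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Broker address taken from the registration logged above.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439");
            // The test listener is SASL_PLAINTEXT with the PLAIN mechanism; the
            // credentials here are placeholders, not values from this build.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<user>\" password=\"<password>\";");

            try (Admin admin = Admin.create(props)) {
                Node controller = admin.describeCluster().controller().get();
                System.out.println("Active controller: " + controller); // expected: broker 1
            }
        }
    }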
22:40:17 22:40:17.356 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.356 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.356 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.356 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.359 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Completed connection to node 1. Ready. 22:40:17 22:40:17.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 22:40:17 22:40:17.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 22:40:17 22:40:17.368 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 57,3 replyHeader:: 57,29,0 request:: '/controller,T response:: s{27,27,1731192017042,1731192017042,0,0,0,72057601724710912,54,0,27} 22:40:17 22:40:17.369 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 22:40:17 22:40:17.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 22:40:17 22:40:17.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.370 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 58,4 replyHeader:: 58,29,0 request:: '/controller,T 
response:: #7b2276657273696f6e223a312c2262726f6b65726964223a312c2274696d657374616d70223a2231373331313932303137303239227d,s{27,27,1731192017042,1731192017042,0,0,0,72057601724710912,54,0,27} 22:40:17 22:40:17.377 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 22:40:17 22:40:17.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 22:40:17 22:40:17.377 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 59,3 replyHeader:: 59,29,-101 request:: '/admin/preferred_replica_election,T response:: 22:40:17 22:40:17.392 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Starting 22:40:17 22:40:17.392 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 22:40:17 22:40:17.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 22:40:17 22:40:17.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.393 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.393 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 60,12 replyHeader:: 60,29,0 request:: '/config/topics,F response:: v{},s{17,17,1731192015224,1731192015224,0,0,0,0,0,0,17} 22:40:17 22:40:17.396 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.396 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 22:40:17 22:40:17.396 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 22:40:17 22:40:17.396 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.396 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 
22:40:17 ] 22:40:17 22:40:17.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.397 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 22:40:17 22:40:17.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 22:40:17 22:40:17.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.397 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 61,12 replyHeader:: 61,29,0 request:: '/config/changes,T response:: v{},s{9,9,1731192015161,1731192015161,0,0,0,0,0,0,9} 22:40:17 22:40:17.398 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 62,12 replyHeader:: 62,29,0 request:: '/config/clients,F response:: v{},s{18,18,1731192015228,1731192015228,0,0,0,0,0,0,18} 22:40:17 22:40:17.399 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 22:40:17 22:40:17.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 22:40:17 22:40:17.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.399 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 63,12 replyHeader:: 63,29,0 request:: '/config/users,F response:: v{},s{19,19,1731192015231,1731192015231,0,0,0,0,0,0,19} 22:40:17 22:40:17.400 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 
type:getChildren2 cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 22:40:17 22:40:17.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 22:40:17 22:40:17.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.401 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 64,12 replyHeader:: 64,29,0 request:: '/config/users,F response:: v{},s{19,19,1731192015231,1731192015231,0,0,0,0,0,0,19} 22:40:17 22:40:17.405 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 22:40:17 22:40:17.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 22:40:17 22:40:17.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.406 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 65,12 replyHeader:: 65,29,0 request:: '/config/ips,F response:: v{},s{21,21,1731192015237,1731192015237,0,0,0,0,0,0,21} 22:40:17 22:40:17.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 22:40:17 22:40:17.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 22:40:17 22:40:17.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.408 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 
0x1000001ca2b0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 66,12 replyHeader:: 66,29,0 request:: '/config/brokers,F response:: v{},s{20,20,1731192015234,1731192015234,0,0,0,0,0,0,20} 22:40:17 22:40:17.408 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 22:40:17 22:40:17.410 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting processors for listener ListenerName(SASL_PLAINTEXT) 22:40:17 22:40:17.419 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting acceptor thread for listener ListenerName(SASL_PLAINTEXT) 22:40:17 22:40:17.421 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 22:40:17 22:40:17.421 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 22:40:17 22:40:17.421 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1731192017420 22:40:17 22:40:17.422 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] started 22:40:17 22:40:17.442 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:47524 22:40:17 22:40:17.444 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:47524 on /127.0.0.1:43439 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:17 22:40:17.458 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:17 22:40:17.458 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:17 22:40:17.458 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.458 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.458 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.458 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.459 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 22:40:17 bootstrap.servers = [SASL_PLAINTEXT://localhost:43439] 22:40:17 client.dns.lookup = use_all_dns_ips 22:40:17 client.id = test-consumer-id 22:40:17 connections.max.idle.ms = 300000 22:40:17 
default.api.timeout.ms = 60000 22:40:17 metadata.max.age.ms = 300000 22:40:17 metric.reporters = [] 22:40:17 metrics.num.samples = 2 22:40:17 metrics.recording.level = INFO 22:40:17 metrics.sample.window.ms = 30000 22:40:17 receive.buffer.bytes = 65536 22:40:17 reconnect.backoff.max.ms = 1000 22:40:17 reconnect.backoff.ms = 50 22:40:17 request.timeout.ms = 15000 22:40:17 retries = 2147483647 22:40:17 retry.backoff.ms = 100 22:40:17 sasl.client.callback.handler.class = null 22:40:17 sasl.jaas.config = [hidden] 22:40:17 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:17 sasl.kerberos.min.time.before.relogin = 60000 22:40:17 sasl.kerberos.service.name = null 22:40:17 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:17 sasl.kerberos.ticket.renew.window.factor = 0.8 22:40:17 sasl.login.callback.handler.class = null 22:40:17 sasl.login.class = null 22:40:17 sasl.login.connect.timeout.ms = null 22:40:17 sasl.login.read.timeout.ms = null 22:40:17 sasl.login.refresh.buffer.seconds = 300 22:40:17 sasl.login.refresh.min.period.seconds = 60 22:40:17 sasl.login.refresh.window.factor = 0.8 22:40:17 sasl.login.refresh.window.jitter = 0.05 22:40:17 sasl.login.retry.backoff.max.ms = 10000 22:40:17 sasl.login.retry.backoff.ms = 100 22:40:17 sasl.mechanism = PLAIN 22:40:17 sasl.oauthbearer.clock.skew.seconds = 30 22:40:17 sasl.oauthbearer.expected.audience = null 22:40:17 sasl.oauthbearer.expected.issuer = null 22:40:17 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:17 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:17 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:17 sasl.oauthbearer.jwks.endpoint.url = null 22:40:17 sasl.oauthbearer.scope.claim.name = scope 22:40:17 sasl.oauthbearer.sub.claim.name = sub 22:40:17 sasl.oauthbearer.token.endpoint.url = null 22:40:17 security.protocol = SASL_PLAINTEXT 22:40:17 security.providers = null 22:40:17 send.buffer.bytes = 131072 22:40:17 socket.connection.setup.timeout.max.ms = 30000 22:40:17 socket.connection.setup.timeout.ms = 10000 22:40:17 ssl.cipher.suites = null 22:40:17 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 22:40:17 ssl.endpoint.identification.algorithm = https 22:40:17 ssl.engine.factory.class = null 22:40:17 ssl.key.password = null 22:40:17 ssl.keymanager.algorithm = SunX509 22:40:17 ssl.keystore.certificate.chain = null 22:40:17 ssl.keystore.key = null 22:40:17 ssl.keystore.location = null 22:40:17 ssl.keystore.password = null 22:40:17 ssl.keystore.type = JKS 22:40:17 ssl.protocol = TLSv1.3 22:40:17 ssl.provider = null 22:40:17 ssl.secure.random.implementation = null 22:40:17 ssl.trustmanager.algorithm = PKIX 22:40:17 ssl.truststore.certificates = null 22:40:17 ssl.truststore.location = null 22:40:17 ssl.truststore.password = null 22:40:17 ssl.truststore.type = JKS 22:40:17 22:40:17 22:40:17.493 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:43439 (id: -1 rack: null)], partitions = [], controller = null). 22:40:17 22:40:17.496 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
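The AdminClientConfig dump above shows a client pointed at SASL_PLAINTEXT://localhost:43439 using mechanism PLAIN with a 15000 ms request timeout. For reference only, a minimal Java sketch of how an equivalent Admin client could be built; the actual client in this run is created internally by com.salesforce.kafka.test.KafkaTestCluster, and the username and password literals below are placeholders rather than values taken from this build.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.config.SaslConfigs;

    public final class SaslAdminClientSketch {
        // Builds an Admin client matching the SASL_PLAINTEXT/PLAIN settings dumped above.
        // bootstrap is the test cluster address, e.g. "SASL_PLAINTEXT://localhost:43439";
        // user and password are illustrative placeholders.
        static Admin newAdmin(String bootstrap, String user, String password) {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required"
                    + " username=\"" + user + "\" password=\"" + password + "\";");
            return Admin.create(props);
        }
    }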
22:40:17 22:40:17.504 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:17 22:40:17.507 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:17 22:40:17.508 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:17 22:40:17.509 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:17 22:40:17.510 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 22:40:17 22:40:17.510 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 22:40:17 22:40:17.510 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1731192017510 22:40:17 22:40:17.510 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 22:40:17 22:40:17.511 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:17 22:40:17.512 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INITIAL 22:40:17 22:40:17.515 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INTERMEDIATE 22:40:17 22:40:17.517 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 22:40:17 22:40:17.520 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:17 22:40:17.521 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:17 22:40:17.521 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=listNodes, deadlineMs=1731192077519, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 
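The Call(callName=listNodes, ...) queued above is the admin client's describeCluster lookup that the test harness uses to confirm the broker is reachable before continuing (it later logs "Found 1 brokers on-line, cluster is ready."). A rough, assumed equivalent of that readiness check, not the harness's actual implementation:

    import java.util.Collection;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.common.Node;

    final class BrokerReadinessSketch {
        // Polls describeCluster() (the "listNodes" call queued above) until the expected
        // number of brokers answers, or the deadline passes.
        static void awaitBrokers(Admin admin, int expected, long timeoutMs) throws Exception {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (System.currentTimeMillis() < deadline) {
                Collection<Node> nodes = admin.describeCluster().nodes().get(5, TimeUnit.SECONDS);
                if (nodes.size() >= expected) {
                    return;   // cluster is ready
                }
                Thread.sleep(200);
            }
            throw new IllegalStateException("brokers not on-line within " + timeoutMs + " ms");
        }
    }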
22:40:17 22:40:17.522 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:17 22:40:17.522 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:17 22:40:17.525 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:17 22:40:17.522 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to COMPLETE 22:40:17 22:40:17.525 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:43439 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:17 22:40:17.525 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Finished authentication with no session expiration and no session re-authentication 22:40:17 22:40:17.525 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with localhost/127.0.0.1 22:40:17 22:40:17.526 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:17 22:40:17.526 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:17 22:40:17.526 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:47534 on /127.0.0.1:43439 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:17 22:40:17.526 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to localhost:43439 (id: 1 rack: null) for sending state change requests 22:40:17 22:40:17.531 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=43439, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 22:40:17 22:40:17.540 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:47534 22:40:17 22:40:17.542 
[kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 22:40:17 22:40:17.542 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:17 22:40:17.542 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 22:40:17 22:40:17.542 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:17 22:40:17.542 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:17 22:40:17.544 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:17 22:40:17.545 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:17 22:40:17.545 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:17 22:40:17.546 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:17 22:40:17.546 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:17 22:40:17.546 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 22:40:17 22:40:17.546 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 22:40:17 22:40:17.546 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:17 22:40:17.547 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:17 22:40:17.547 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:17 22:40:17.547 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:17 22:40:17.547 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 22:40:17 22:40:17.547 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 22:40:17 22:40:17.547 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 22:40:17 22:40:17.547 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 22:40:17 22:40:17.547 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:17 22:40:17.559 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.559 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.559 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.559 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 22:40:17 22:40:17.568 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, 
maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:17 22:40:17.569 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0): UpdateMetadataResponseData(errorCode=0) 22:40:17 22:40:17.573 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 
3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 22:40:17 22:40:17.574 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:43439 (id: -1 rack: null). 
correlationId=1, timeoutMs=14946 22:40:17 22:40:17.575 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14946 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:17 22:40:17.605 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":0,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[],"liveBrokers":[{"id":1,"endpoints":[{"port":43439,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:43439-127.0.0.1:47524-0","totalTimeMs":36.165,"requestQueueTimeMs":22.959,"localTimeMs":12.274,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.178,"sendTimeMs":0.752,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:17 22:40:17.607 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey"
:39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:47534-0","totalTimeMs":19.423,"requestQueueTimeMs":8.101,"localTimeMs":7.838,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.131,"sendTimeMs":3.351,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:17 22:40:17.623 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[]},"connection":"127.0.0.1:43439-127.0.0.1:47534-0","totalTimeMs":14.313,"requestQueueTimeMs":3.069,"localTimeMs":10.921,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.106,"sendTimeMs":0.215,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:17 22:40:17.623 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 22:40:17 22:40:17.626 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = OB576afJSxuIrhF9nQXaaA, nodes = [localhost:43439 (id: 1 rack: null)], partitions = [], controller = localhost:43439 (id: 1 rack: null)) 22:40:17 22:40:17.626 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:17 22:40:17.626 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 
22:40:17 22:40:17.626 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:17 22:40:17.626 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:17 22:40:17.627 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:47548 on /127.0.0.1:43439 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:17 22:40:17.630 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:47548 22:40:17 22:40:17.631 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 22:40:17 22:40:17.631 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:17 22:40:17.631 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:17 22:40:17.631 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:17 22:40:17.631 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 
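The principals recorded in the request log, User:admin for the controller's UPDATE_METADATA and User:kafkaclient for the admin client, are the PLAIN usernames accepted by the broker's SASL_PLAINTEXT listener. A sketch of broker-side settings consistent with that; the passwords are placeholders only, since the embedded test broker wires its own values:

    import java.util.Properties;

    final class BrokerSaslConfigSketch {
        // Broker-side PLAIN settings consistent with the principals in this log: the
        // inter-broker/controller connection authenticates as "admin", the admin client
        // as "kafkaclient". Passwords here are placeholders only.
        static Properties listenerSaslProps() {
            Properties props = new Properties();
            props.put("listener.name.sasl_plaintext.sasl.enabled.mechanisms", "PLAIN");
            props.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required"
                    + " username=\"admin\" password=\"admin-secret\""
                    + " user_admin=\"admin-secret\""
                    + " user_kafkaclient=\"client-secret\";");
            props.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
            return props;
        }
    }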
22:40:17 22:40:17.633 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:17 22:40:17.633 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:17 22:40:17.633 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:17 22:40:17.633 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:17 22:40:17.633 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:17 22:40:17.633 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 22:40:17 22:40:17.634 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 22:40:17 22:40:17.634 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:17 22:40:17.635 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:17 22:40:17.635 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:17 22:40:17.635 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:17 22:40:17.635 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 22:40:17 22:40:17.635 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 22:40:17 22:40:17.635 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 22:40:17 22:40:17.635 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient 
clientId=test-consumer-id] Initiating API versions fetch from node 1. 22:40:17 22:40:17.635 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:17 22:40:17.640 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), 
ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:17 22:40:17.641 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 22:40:17 22:40:17.641 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending DescribeClusterRequestData(includeClusterAuthorizedOperations=false) to localhost:43439 (id: 1 rack: null). 
correlationId=3, timeoutMs=14982 22:40:17 22:40:17.641 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending DESCRIBE_CLUSTER request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3) and timeout 14982 to node 1: DescribeClusterRequestData(includeClusterAuthorizedOperations=false) 22:40:17 22:40:17.643 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:47548-1","tota
lTimeMs":4.964,"requestQueueTimeMs":0.67,"localTimeMs":1.651,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.261,"sendTimeMs":2.379,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:17 22:40:17.652 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received DESCRIBE_CLUSTER response from node 1 for request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3): DescribeClusterResponseData(throttleTimeMs=0, errorCode=0, errorMessage=null, clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, brokers=[DescribeClusterBroker(brokerId=1, host='localhost', port=43439, rack=null)], clusterAuthorizedOperations=-2147483648) 22:40:17 22:40:17.652 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 22:40:17 22:40:17.652 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 22:40:17 22:40:17.653 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 22:40:17 22:40:17.653 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":60,"requestApiVersion":0,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"DESCRIBE_CLUSTER"},"request":{"includeClusterAuthorizedOperations":false},"response":{"throttleTimeMs":0,"errorCode":0,"errorMessage":null,"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"brokers":[{"brokerId":1,"host":"localhost","port":43439,"rack":null}],"clusterAuthorizedOperations":-2147483648},"connection":"127.0.0.1:43439-127.0.0.1:47548-1","totalTimeMs":8.096,"requestQueueTimeMs":0.641,"localTimeMs":6.801,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.251,"sendTimeMs":0.402,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:17 22:40:17.654 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:43439-127.0.0.1:47548-1) disconnected 22:40:17 java.io.EOFException: null 22:40:17 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:17 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:17 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:17 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:17 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:17 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:17 at kafka.network.Processor.poll(SocketServer.scala:1055) 22:40:17 at kafka.network.Processor.run(SocketServer.scala:959) 22:40:17 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:17 22:40:17.656 [kafka-admin-client-thread | test-consumer-id] INFO 
org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 22:40:17 22:40:17.656 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 22:40:17 22:40:17.656 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 22:40:17 22:40:17.656 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 22:40:17 22:40:17.656 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 22:40:17 22:40:17.656 [main] INFO com.salesforce.kafka.test.KafkaTestCluster - Found 1 brokers on-line, cluster is ready. 22:40:17 22:40:17.656 [main] DEBUG org.onap.sdc.utils.SdcKafkaTest - Cluster started at: SASL_PLAINTEXT://localhost:43439 22:40:17 22:40:17.656 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:43439-127.0.0.1:47534-0) disconnected 22:40:17 java.io.EOFException: null 22:40:17 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:17 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:17 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:17 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:17 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:17 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:17 at kafka.network.Processor.poll(SocketServer.scala:1055) 22:40:17 at kafka.network.Processor.run(SocketServer.scala:959) 22:40:17 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:17 22:40:17.657 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 22:40:17 bootstrap.servers = [SASL_PLAINTEXT://localhost:43439] 22:40:17 client.dns.lookup = use_all_dns_ips 22:40:17 client.id = test-consumer-id 22:40:17 connections.max.idle.ms = 300000 22:40:17 default.api.timeout.ms = 60000 22:40:17 metadata.max.age.ms = 300000 22:40:17 metric.reporters = [] 22:40:17 metrics.num.samples = 2 22:40:17 metrics.recording.level = INFO 22:40:17 metrics.sample.window.ms = 30000 22:40:17 receive.buffer.bytes = 65536 22:40:17 reconnect.backoff.max.ms = 1000 22:40:17 reconnect.backoff.ms = 50 22:40:17 request.timeout.ms = 15000 22:40:17 retries = 2147483647 22:40:17 retry.backoff.ms = 100 22:40:17 sasl.client.callback.handler.class = null 22:40:17 sasl.jaas.config = [hidden] 22:40:17 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:17 sasl.kerberos.min.time.before.relogin = 60000 22:40:17 sasl.kerberos.service.name = null 22:40:17 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:17 sasl.kerberos.ticket.renew.window.factor = 0.8 22:40:17 sasl.login.callback.handler.class = null 22:40:17 sasl.login.class = null 22:40:17 sasl.login.connect.timeout.ms = null 22:40:17 sasl.login.read.timeout.ms = null 22:40:17 sasl.login.refresh.buffer.seconds = 300 22:40:17 sasl.login.refresh.min.period.seconds = 60 22:40:17 sasl.login.refresh.window.factor = 0.8 22:40:17 sasl.login.refresh.window.jitter = 0.05 22:40:17 sasl.login.retry.backoff.max.ms = 10000 22:40:17 sasl.login.retry.backoff.ms = 100 
22:40:17 sasl.mechanism = PLAIN 22:40:17 sasl.oauthbearer.clock.skew.seconds = 30 22:40:17 sasl.oauthbearer.expected.audience = null 22:40:17 sasl.oauthbearer.expected.issuer = null 22:40:17 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:17 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:17 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:17 sasl.oauthbearer.jwks.endpoint.url = null 22:40:17 sasl.oauthbearer.scope.claim.name = scope 22:40:17 sasl.oauthbearer.sub.claim.name = sub 22:40:17 sasl.oauthbearer.token.endpoint.url = null 22:40:17 security.protocol = SASL_PLAINTEXT 22:40:17 security.providers = null 22:40:17 send.buffer.bytes = 131072 22:40:17 socket.connection.setup.timeout.max.ms = 30000 22:40:17 socket.connection.setup.timeout.ms = 10000 22:40:17 ssl.cipher.suites = null 22:40:17 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 22:40:17 ssl.endpoint.identification.algorithm = https 22:40:17 ssl.engine.factory.class = null 22:40:17 ssl.key.password = null 22:40:17 ssl.keymanager.algorithm = SunX509 22:40:17 ssl.keystore.certificate.chain = null 22:40:17 ssl.keystore.key = null 22:40:17 ssl.keystore.location = null 22:40:17 ssl.keystore.password = null 22:40:17 ssl.keystore.type = JKS 22:40:17 ssl.protocol = TLSv1.3 22:40:17 ssl.provider = null 22:40:17 ssl.secure.random.implementation = null 22:40:17 ssl.trustmanager.algorithm = PKIX 22:40:17 ssl.truststore.certificates = null 22:40:17 ssl.truststore.location = null 22:40:17 ssl.truststore.password = null 22:40:17 ssl.truststore.type = JKS 22:40:17 22:40:17 22:40:17.657 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:43439 (id: -1 rack: null)], partitions = [], controller = null). 22:40:17 22:40:17.658 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
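The AdminClientConfig dump above is the client the test builds against the SASL_PLAINTEXT test broker at localhost:43439. A minimal sketch of constructing an equivalent client follows; the class name and the JAAS credentials are illustrative assumptions (the log only shows sasl.jaas.config = [hidden] and the authenticated principal User:kafkaclient).

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.config.SaslConfigs;

    // Sketch only: mirrors the AdminClientConfig values dumped in the log above.
    public final class TestAdminClientSketch {
        static Admin create(String bootstrap) {            // e.g. "SASL_PLAINTEXT://localhost:43439"
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");
            props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder credentials: the real value is redacted as [hidden] in the log.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"kafkaclient\" password=\"<placeholder>\";");
            return Admin.create(props);
        }
    }

The DESCRIBE_CLUSTER round-trip logged earlier, followed by "Found 1 brokers on-line, cluster is ready", is consistent with a readiness probe along the lines of create(bootstrap).describeCluster().nodes().get().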
22:40:17 22:40:17.659 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.660 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker localhost:43439 (id: 1 rack: null) 22:40:17 22:40:17.665 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 22:40:17 22:40:17.665 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use broker localhost:43439 (id: 1 rack: null) 22:40:17 22:40:17.667 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 22:40:17 22:40:17.667 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 22:40:17 22:40:17.667 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1731192017667 22:40:17 22:40:17.667 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 22:40:17 22:40:17.671 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 22:40:17 22:40:17.672 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=createTopics, deadlineMs=1731192077670, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 22:40:17 22:40:17.677 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Requesting metadata update. 
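The queued createTopics call above is the AdminClient operation that, a few entries further down, is sent as a CREATE_TOPICS request for 'my-test-topic' with one partition and replication factor 1. A minimal sketch of that call, assuming an Admin instance like the one built above:

    import java.util.List;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    // Sketch of the topic creation the log shows: "my-test-topic", 1 partition, replication factor 1.
    public final class CreateTopicSketch {
        static void createTestTopic(Admin admin) throws Exception {
            NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1);
            // Block until the broker acknowledges creation; the request timeout logged is 15000 ms.
            admin.createTopics(List.of(topic)).all().get(15, TimeUnit.SECONDS);
        }
    }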
22:40:17 22:40:17.678 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:17 22:40:17.679 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:43439 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:17 22:40:17.679 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:17 22:40:17.679 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:17 22:40:17.682 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:47550 on /127.0.0.1:43439 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:17 22:40:17.682 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:47550 22:40:17 22:40:17.682 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 22:40:17 22:40:17.683 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:17 22:40:17.683 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 
22:40:17 22:40:17.683 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:17 22:40:17.683 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:17 22:40:17.684 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:17 22:40:17.684 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:17 22:40:17.684 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:17 22:40:17.684 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:17 22:40:17.684 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:17 22:40:17.685 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 22:40:17 22:40:17.685 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 22:40:17 22:40:17.685 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:17 22:40:17.685 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:17 22:40:17.685 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:17 22:40:17.685 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:17 22:40:17.685 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 22:40:17 22:40:17.686 [kafka-admin-client-thread | test-consumer-id] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 22:40:17 22:40:17.686 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 22:40:17 22:40:17.686 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 22:40:17 22:40:17.686 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:17 22:40:17.689 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, 
minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:17 22:40:17.691 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersi
on":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:47550-1","totalTimeMs":1.549,"requestQueueTimeMs":0.282,"localTimeMs":0.953,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.223,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:17 22:40:17.691 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
22:40:17 22:40:17.691 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:43439 (id: -1 rack: null). correlationId=1, timeoutMs=14985 22:40:17 22:40:17.691 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14985 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:17 22:40:17.693 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 22:40:17 22:40:17.693 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = OB576afJSxuIrhF9nQXaaA, nodes = [localhost:43439 (id: 1 rack: null)], partitions = [], controller = localhost:43439 (id: 1 rack: null)) 22:40:17 22:40:17.693 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[]},"connection":"127.0.0.1:43439-127.0.0.1:47550-1","totalTimeMs":1.102,"requestQueueTimeMs":0.136,"localTimeMs":0.743,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.074,"sendTimeMs":0.147,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:17 22:40:17.693 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:17 22:40:17.693 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:17 22:40:17.693 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:17 22:40:17.693 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:17 22:40:17.693 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:47560 on /127.0.0.1:43439 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:17 22:40:17.693 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:47560 22:40:17 22:40:17.694 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 22:40:17 22:40:17.694 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:17 22:40:17.694 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 22:40:17 22:40:17.694 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:17 22:40:17.694 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:17 22:40:17.695 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:17 22:40:17.695 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:17 22:40:17.696 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:17 22:40:17.696 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:17 22:40:17.696 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:17 22:40:17.696 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 22:40:17 22:40:17.696 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 22:40:17 22:40:17.696 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] 
DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:17 22:40:17.697 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:17 22:40:17.697 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:17 22:40:17.697 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:17 22:40:17.697 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 22:40:17 22:40:17.697 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 22:40:17 22:40:17.697 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 22:40:17 22:40:17.697 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 
22:40:17 22:40:17.697 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:17 22:40:17.700 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, 
minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:17 22:40:17.700 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:47560-2","totalTimeMs":1.324,"requestQueueTimeMs":0.22,"localTimeMs":0.793,"remoteTimeMs":0.0,"throttleTimeMs":0,"respons
eQueueTimeMs":0.099,"sendTimeMs":0.21,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:17 22:40:17.701 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 22:40:17 22:40:17.703 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14991, validateOnly=false) to localhost:43439 (id: 1 rack: null). 
correlationId=3, timeoutMs=14991 22:40:17 22:40:17.706 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending CREATE_TOPICS request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3) and timeout 14991 to node 1: CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14991, validateOnly=false) 22:40:17 22:40:17.729 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.730 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 22:40:17 22:40:17.730 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 22:40:17 22:40:17.731 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/my-test-topic serverPath:/admin/delete_topics/my-test-topic finished:false header:: 67,3 replyHeader:: 67,29,-101 request:: '/admin/delete_topics/my-test-topic,F response:: 22:40:17 22:40:17.732 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.732 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 22:40:17 22:40:17.732 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 22:40:17 22:40:17.733 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 68,3 replyHeader:: 68,29,-101 request:: '/brokers/topics/my-test-topic,F response:: 22:40:17 22:40:17.762 [data-plane-kafka-request-handler-1] INFO kafka.zk.AdminZkClient - Creating topic my-test-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) 22:40:17 22:40:17.770 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.773 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:setData cxid:0x45 zxid:0x1e txntype:-1 reqpath:n/a 22:40:17 22:40:17.773 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 22:40:17 22:40:17.774 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 69,5 replyHeader:: 69,30,-101 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,-1 response:: 22:40:17 
22:40:17.777 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.777 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:17 22:40:17 22:40:17.777 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:17 22:40:17.777 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.777 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.777 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 48020535595 22:40:17 22:40:17.778 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 50884325138 22:40:17 22:40:17.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 22:40:17 22:40:17.779 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:17 22:40:17.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1f, Digest in log and actual tree: 54467201819 22:40:17 22:40:17.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 22:40:17 22:40:17.780 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 70,1 replyHeader:: 70,31,0 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/my-test-topic 22:40:17 22:40:17.791 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.792 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:17 22:40:17 22:40:17.792 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:17 22:40:17.792 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.792 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.792 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 54467201819 22:40:17 22:40:17.792 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 56026179914 22:40:17 22:40:17.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 22:40:17 22:40:17.793 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie 
- brokers 22:40:17 22:40:17.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 20, Digest in log and actual tree: 57235033257 22:40:17 22:40:17.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 22:40:17 22:40:17.794 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001ca2b0000 22:40:17 22:40:17.794 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000001ca2b0000 22:40:17 22:40:17.794 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 22:40:17 22:40:17.794 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 71,1 replyHeader:: 71,32,0 request:: '/brokers/topics/my-test-topic,#7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2250524c443537304552644b33366873624361776c4a41222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-test-topic 22:40:17 22:40:17.796 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:17 22:40:17.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:17 22:40:17.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.797 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 72,12 replyHeader:: 72,32,0 request:: '/brokers/topics,T response:: v{'my-test-topic},s{6,6,1731192015151,1731192015151,0,1,0,0,0,1,32} 22:40:17 22:40:17.798 [data-plane-kafka-request-handler-1] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/my-test-topic with Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 22:40:17 22:40:17.800 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/brokers/topics/my-test-topic 22:40:17 22:40:17.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 22:40:17 22:40:17.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.801 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 73,4 replyHeader:: 73,32,0 request:: '/brokers/topics/my-test-topic,T response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2250524c443537304552644b33366873624361776c4a41222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1731192017791,1731192017791,0,0,0,0,116,0,32} 22:40:17 22:40:17.804 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 22:40:17 22:40:17.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 22:40:17 22:40:17.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.805 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.805 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 74,4 replyHeader:: 74,32,0 request:: '/brokers/topics/my-test-topic,T response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2250524c443537304552644b33366873624361776c4a41222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1731192017791,1731192017791,0,0,0,0,116,0,32} 22:40:17 22:40:17.814 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(my-test-topic)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(my-test-topic,Some(PRLD570ERdK36hsbCawlJA),Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 22:40:17 22:40:17.815 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for my-test-topic-0 22:40:17 22:40:17.817 [controller-event-thread] INFO 
state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:17 22:40:17.818 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 22:40:17 22:40:17.823 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 22:40:17 22:40:17.831 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.831 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.831 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.831 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.831 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 57235033257 22:40:17 22:40:17.832 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.832 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:17 22:40:17 22:40:17.832 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:17 22:40:17.832 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.832 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.832 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 57235033257 22:40:17 22:40:17.832 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 56385541903 22:40:17 22:40:17.832 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60520920479 22:40:17 22:40:17.834 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 22:40:17 22:40:17.834 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:17 22:40:17.834 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 21, Digest in log and actual tree: 60520920479 22:40:17 22:40:17.834 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 22:40:17 22:40:17.835 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 75,14 replyHeader:: 75,33,0 request:: org.apache.zookeeper.MultiOperationRecord@81bd0a85 response:: org.apache.zookeeper.MultiResponse@7b890ac6 22:40:17 22:40:17.837 
[ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.837 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.837 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.837 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 60520920479 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:17 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 60520920479 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 58290491268 22:40:17 22:40:17.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 59025552851 22:40:17 22:40:17.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 22:40:17 22:40:17.841 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:17 22:40:17.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 22, Digest in log and actual tree: 59025552851 22:40:17 22:40:17.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 22:40:17 22:40:17.841 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 76,14 replyHeader:: 76,34,0 request:: org.apache.zookeeper.MultiOperationRecord@c37a65e6 response:: org.apache.zookeeper.MultiResponse@bd466627 22:40:17 22:40:17.845 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.846 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.846 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.846 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.846 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 59025552851 22:40:17 22:40:17.846 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.846 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:17 22:40:17 22:40:17.846 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:17 22:40:17.846 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.847 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.847 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 59025552851 22:40:17 22:40:17.847 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60282189913 22:40:17 22:40:17.847 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62906004297 22:40:17 22:40:17.848 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 22:40:17 22:40:17.848 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:17 22:40:17.848 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 23, Digest in log and actual tree: 62906004297 22:40:17 22:40:17.848 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 22:40:17 22:40:17.848 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 77,14 replyHeader:: 77,35,0 request:: org.apache.zookeeper.MultiOperationRecord@b3e0859f response:: org.apache.zookeeper.MultiResponse@ce2303a9 22:40:17 22:40:17.855 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:17 22:40:17.857 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions 22:40:17 22:40:17.861 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, 
partitionStates=[LeaderAndIsrPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=43439)]) 22:40:17 22:40:17.862 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions 22:40:17 22:40:17.863 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 22:40:17 22:40:17.874 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions 22:40:17 22:40:17.909 [data-plane-kafka-request-handler-0] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(my-test-topic-0) 22:40:17 22:40:17.910 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions 22:40:17 22:40:17.926 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:17 22:40:17.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 22:40:17 22:40:17.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 22:40:17 22:40:17.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:17 22:40:17.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:17 ] 22:40:17 22:40:17.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:17 , 'ip,'127.0.0.1 22:40:17 ] 22:40:17 22:40:17.927 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 78,4 replyHeader:: 78,35,0 request:: '/config/topics/my-test-topic,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{31,31,1731192017777,1731192017777,0,0,0,0,25,0,31} 22:40:17 22:40:17.980 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/my-test-topic-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:17 22:40:17.983 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/my-test-topic-0/00000000000000000000.index was not resized because it already has size 10485760 22:40:17 22:40:17.984 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/my-test-topic-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = 
TimestampOffset(-1,0), file position = 0 22:40:17 22:40:17.984 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/my-test-topic-0/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:17 22:40:17.990 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=my-test-topic-0, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.004 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.007 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.010 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition my-test-topic-0 in /tmp/kafka-unit3067120233997490679/my-test-topic-0 with properties {} 22:40:18 22:40:18.011 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] No checkpointed highwatermark is found for partition my-test-topic-0 22:40:18 22:40:18.012 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] Log loaded for partition my-test-topic-0 with initial high watermark 0 22:40:18 22:40:18.013 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader my-test-topic-0 with topic id Some(PRLD570ERdK36hsbCawlJA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.016 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache my-test-topic-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.027 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 
22:40:18 22:40:18.032 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 161ms correlationId 1 from controller 1 for 1 partitions 22:40:18 22:40:18.040 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=PRLD570ERdK36hsbCawlJA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) 22:40:18 22:40:18.041 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, partitionStates=[UpdateMetadataPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=43439, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 22:40:18 22:40:18.041 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":1,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":43439}]},"response":{"errorCode":0,"topics":[{"topicId":"PRLD570ERdK36hsbCawlJA","partitionErrors":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47524-0","totalTimeMs":176.877,"requestQueueTimeMs":7.717,"localTimeMs":168.298,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.252,"sendTimeMs":0.608,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:18 22:40:18.051 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 22:40:18 22:40:18.059 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicKey(my-test-topic) unblocked 1 topic operations 22:40:18 22:40:18.059 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Request key my-test-topic unblocked 1 topic requests. 
22:40:18 22:40:18.060 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received CREATE_TOPICS response from node 1 for request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3): CreateTopicsResponseData(throttleTimeMs=0, topics=[CreatableTopicResult(name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, errorCode=0, errorMessage=null, topicConfigErrorCode=0, numPartitions=1, replicationFactor=1, configs=[CreatableTopicConfigs(name='compression.type', value='producer', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='leader.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.downconversion.enable', value='true', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.insync.replicas', value='1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.jitter.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='cleanup.policy', value='delete', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='follower.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.bytes', value='1073741824', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.messages', value='1', readOnly=false, configSource=4, isSensitive=false), CreatableTopicConfigs(name='message.format.version', value='3.0-IV1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.compaction.lag.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='file.delete.delay.ms', value='60000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.message.bytes', value='1048588', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.compaction.lag.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.type', value='CreateTime', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='preallocate', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.cleanable.dirty.ratio', value='0.5', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='index.interval.bytes', value='4096', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='unclean.leader.election.enable', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.bytes', value='-1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='delete.retention.ms', value='86400000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.difference.max.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), 
CreatableTopicConfigs(name='segment.index.bytes', value='10485760', readOnly=false, configSource=5, isSensitive=false)])]) 22:40:18 22:40:18.064 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 22:40:18 22:40:18.064 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 22:40:18 22:40:18.066 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 22:40:18 22:40:18.067 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:43439-127.0.0.1:47550-1) disconnected 22:40:18 java.io.EOFException: null 22:40:18 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:18 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:18 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:18 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:18 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:18 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:18 at kafka.network.Processor.poll(SocketServer.scala:1055) 22:40:18 at kafka.network.Processor.run(SocketServer.scala:959) 22:40:18 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:18 22:40:18.068 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 22:40:18 22:40:18.068 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 22:40:18 22:40:18.068 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 22:40:18 22:40:18.068 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 22:40:18 22:40:18.068 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 
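Editor's note: the entries above show the test's Kafka admin client (clientId=test-consumer-id) receiving a successful CREATE_TOPICS response for my-test-topic with 1 partition and replication factor 1, and then closing. For orientation only, here is a minimal, hypothetical sketch of how such a topic creation is typically issued with the Kafka Admin API against the SASL_PLAINTEXT listener on localhost:43439 seen in this run; this is not the project's actual test code, and the JAAS username and password are placeholders (the real credentials are not shown in the log).

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTestTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Broker address and security settings as reported in the log above.
            props.put("bootstrap.servers", "SASL_PLAINTEXT://localhost:43439");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Placeholder credentials; the build supplies its own JAAS configuration.
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"client\" password=\"client-secret\";");

            try (Admin admin = Admin.create(props)) {
                // One partition, replication factor 1, matching the CREATE_TOPICS response above.
                NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1);
                // Blocks until the broker reports the topic as created.
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
            // Closing the client ends its connections; the broker records the resulting
            // disconnects at DEBUG level, as in the EOFException entries above.
        }
    }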
22:40:18 22:40:18.069 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":19,"requestApiVersion":7,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"CREATE_TOPICS"},"request":{"topics":[{"name":"my-test-topic","numPartitions":1,"replicationFactor":1,"assignments":[],"configs":[]}],"timeoutMs":14991,"validateOnly":false},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","errorCode":0,"errorMessage":null,"numPartitions":1,"replicationFactor":1,"configs":[{"name":"compression.type","value":"producer","readOnly":false,"configSource":5,"isSensitive":false},{"name":"leader.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.downconversion.enable","value":"true","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.insync.replicas","value":"1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.jitter.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"cleanup.policy","value":"delete","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"follower.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.bytes","value":"1073741824","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.messages","value":"1","readOnly":false,"configSource":4,"isSensitive":false},{"name":"message.format.version","value":"3.0-IV1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.compaction.lag.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"file.delete.delay.ms","value":"60000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.message.bytes","value":"1048588","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.compaction.lag.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.type","value":"CreateTime","readOnly":false,"configSource":5,"isSensitive":false},{"name":"preallocate","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.cleanable.dirty.ratio","value":"0.5","readOnly":false,"configSource":5,"isSensitive":false},{"name":"index.interval.bytes","value":"4096","readOnly":false,"configSource":5,"isSensitive":false},{"name":"unclean.leader.election.enable","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.bytes","value":"-1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"delete.retention.ms","value":"86400000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.difference.max.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.index.bytes","value":"10485760","readOnly":false,"configSource":5,"isSensitive":false}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47560-2","totalTimeMs":353.2,"requestQueueTimeMs":2.901,"localTimeMs":112.7,"remoteTimeMs":237.077,"throttleTimeMs":0,"responseQueueTimeMs":0.116,"sendTimeM
s":0.404,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.070 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:43439-127.0.0.1:47560-2) disconnected 22:40:18 java.io.EOFException: null 22:40:18 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:18 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:18 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:18 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:18 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:18 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:18 at kafka.network.Processor.poll(SocketServer.scala:1055) 22:40:18 at kafka.network.Processor.run(SocketServer.scala:959) 22:40:18 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:18 22:40:18.072 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":2,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":43439,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:43439-127.0.0.1:47524-0","totalTimeMs":28.728,"requestQueueTimeMs":4.555,"localTimeMs":13.354,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":9.544,"sendTimeMs":1.274,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:18 22:40:18.075 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2): UpdateMetadataResponseData(errorCode=0) 22:40:18 22:40:18.096 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 22:40:18 allow.auto.create.topics = false 22:40:18 auto.commit.interval.ms = 5000 22:40:18 auto.offset.reset = latest 22:40:18 bootstrap.servers = [SASL_PLAINTEXT://localhost:43439] 22:40:18 check.crcs = true 22:40:18 client.dns.lookup = use_all_dns_ips 22:40:18 client.id = mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415 22:40:18 client.rack = 22:40:18 connections.max.idle.ms = 540000 22:40:18 default.api.timeout.ms = 60000 22:40:18 enable.auto.commit = true 22:40:18 exclude.internal.topics = true 22:40:18 fetch.max.bytes = 52428800 22:40:18 fetch.max.wait.ms = 500 22:40:18 fetch.min.bytes = 1 22:40:18 group.id = mso-group 22:40:18 group.instance.id = null 22:40:18 heartbeat.interval.ms = 3000 22:40:18 interceptor.classes = [] 22:40:18 
internal.leave.group.on.close = true 22:40:18 internal.throw.on.fetch.stable.offset.unsupported = false 22:40:18 isolation.level = read_uncommitted 22:40:18 key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 22:40:18 max.partition.fetch.bytes = 1048576 22:40:18 max.poll.interval.ms = 600000 22:40:18 max.poll.records = 500 22:40:18 metadata.max.age.ms = 300000 22:40:18 metric.reporters = [] 22:40:18 metrics.num.samples = 2 22:40:18 metrics.recording.level = INFO 22:40:18 metrics.sample.window.ms = 30000 22:40:18 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 22:40:18 receive.buffer.bytes = 65536 22:40:18 reconnect.backoff.max.ms = 1000 22:40:18 reconnect.backoff.ms = 50 22:40:18 request.timeout.ms = 30000 22:40:18 retry.backoff.ms = 100 22:40:18 sasl.client.callback.handler.class = null 22:40:18 sasl.jaas.config = [hidden] 22:40:18 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:18 sasl.kerberos.min.time.before.relogin = 60000 22:40:18 sasl.kerberos.service.name = null 22:40:18 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:18 sasl.kerberos.ticket.renew.window.factor = 0.8 22:40:18 sasl.login.callback.handler.class = null 22:40:18 sasl.login.class = null 22:40:18 sasl.login.connect.timeout.ms = null 22:40:18 sasl.login.read.timeout.ms = null 22:40:18 sasl.login.refresh.buffer.seconds = 300 22:40:18 sasl.login.refresh.min.period.seconds = 60 22:40:18 sasl.login.refresh.window.factor = 0.8 22:40:18 sasl.login.refresh.window.jitter = 0.05 22:40:18 sasl.login.retry.backoff.max.ms = 10000 22:40:18 sasl.login.retry.backoff.ms = 100 22:40:18 sasl.mechanism = PLAIN 22:40:18 sasl.oauthbearer.clock.skew.seconds = 30 22:40:18 sasl.oauthbearer.expected.audience = null 22:40:18 sasl.oauthbearer.expected.issuer = null 22:40:18 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:18 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:18 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:18 sasl.oauthbearer.jwks.endpoint.url = null 22:40:18 sasl.oauthbearer.scope.claim.name = scope 22:40:18 sasl.oauthbearer.sub.claim.name = sub 22:40:18 sasl.oauthbearer.token.endpoint.url = null 22:40:18 security.protocol = SASL_PLAINTEXT 22:40:18 security.providers = null 22:40:18 send.buffer.bytes = 131072 22:40:18 session.timeout.ms = 50000 22:40:18 socket.connection.setup.timeout.max.ms = 30000 22:40:18 socket.connection.setup.timeout.ms = 10000 22:40:18 ssl.cipher.suites = null 22:40:18 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 22:40:18 ssl.endpoint.identification.algorithm = https 22:40:18 ssl.engine.factory.class = null 22:40:18 ssl.key.password = null 22:40:18 ssl.keymanager.algorithm = SunX509 22:40:18 ssl.keystore.certificate.chain = null 22:40:18 ssl.keystore.key = null 22:40:18 ssl.keystore.location = null 22:40:18 ssl.keystore.password = null 22:40:18 ssl.keystore.type = JKS 22:40:18 ssl.protocol = TLSv1.3 22:40:18 ssl.provider = null 22:40:18 ssl.secure.random.implementation = null 22:40:18 ssl.trustmanager.algorithm = PKIX 22:40:18 ssl.truststore.certificates = null 22:40:18 ssl.truststore.location = null 22:40:18 ssl.truststore.password = null 22:40:18 ssl.truststore.type = JKS 22:40:18 value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 22:40:18 22:40:18 22:40:18.097 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, 
groupId=mso-group] Initializing the Kafka consumer 22:40:18 22:40:18.109 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 22:40:18 22:40:18.163 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 22:40:18 22:40:18.163 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 22:40:18 22:40:18.163 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1731192018163 22:40:18 22:40:18.164 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Kafka consumer initialized 22:40:18 22:40:18.165 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Subscribed to topic(s): my-test-topic 22:40:18 22:40:18.166 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: -1 rack: null) 22:40:18 22:40:18.171 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:18 22:40:18.171 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:18 22:40:18.171 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:18 22:40:18.171 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:18 22:40:18.172 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:47566 22:40:18 22:40:18.172 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:47566 on /127.0.0.1:43439 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:18 22:40:18.173 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 22:40:18 22:40:18.173 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:18 22:40:18.173 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:18 22:40:18.174 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:18 22:40:18.174 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:18 22:40:18.174 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Completed connection to node -1. Fetching API versions. 22:40:18 22:40:18.175 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:18 22:40:18.175 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:18 22:40:18.175 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:18 22:40:18.177 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:18 22:40:18.177 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:18 22:40:18.177 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to INITIAL 22:40:18 22:40:18.177 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:18 22:40:18.177 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:18 22:40:18.177 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:18 22:40:18.177 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to INTERMEDIATE 22:40:18 22:40:18.178 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to COMPLETE 22:40:18 22:40:18.178 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 22:40:18 22:40:18.178 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 22:40:18 22:40:18.178 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating API versions fetch from node -1. 22:40:18 22:40:18.178 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:18 22:40:18.182 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion"
:0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:47566-2","totalTimeMs":1.717,"requestQueueTimeMs":0.315,"localTimeMs":1.137,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.076,"sendTimeMs":0.187,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:18 22:40:18.182 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, 
maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:18 22:40:18.184 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 
[usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 22:40:18 22:40:18.186 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: -1 rack: null) 22:40:18 22:40:18.187 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=2) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:18 22:40:18.189 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=0) and timeout 30000 to node -1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:18 22:40:18.197 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":2,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47566-2","totalTimeMs":8.392,"requestQueueTimeMs":0.856,"localTimeMs":7.317,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.089,"sendTimeMs":0.129,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.197 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=2): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, 
partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:18 22:40:18.201 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.202 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.202 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.202 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 79,3 replyHeader:: 79,35,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:18 22:40:18.202 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to PRLD570ERdK36hsbCawlJA 22:40:18 22:40:18.203 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.203 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.203 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.203 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 80,3 replyHeader:: 80,35,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 22:40:18 22:40:18.204 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.204 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:18 22:40:18.204 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:18 22:40:18.204 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.204 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.204 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 
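Editor's note: the ConsumerConfig dump and the SASL PLAIN handshake above correspond to a consumer joining group mso-group over SASL_PLAINTEXT and subscribing to my-test-topic. As a rough, self-contained sketch (again, not the project's actual test code), an equivalent consumer could be built as follows; the sasl.jaas.config value is a placeholder, since the log reports it only as [hidden], and client.id is left to its default here although the run above sets an explicit mso-…-consumer id.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class TestTopicConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Key values taken from the ConsumerConfig dump above.
            props.put("bootstrap.servers", "SASL_PLAINTEXT://localhost:43439");
            props.put("group.id", "mso-group");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            props.put("auto.offset.reset", "latest");
            props.put("enable.auto.commit", "true");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Placeholder credentials; the actual JAAS config is hidden in the log.
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"client\" password=\"client-secret\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // poll() drives the FIND_COORDINATOR and METADATA requests logged above.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }

Looking up the group coordinator is also what prompts the broker to auto-create the __consumer_offsets topic, which is what the AdminZkClient entry that follows records.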
22:40:18.205 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 81,12 replyHeader:: 81,35,0 request:: '/brokers/topics,F response:: v{'my-test-topic},s{6,6,1731192015151,1731192015151,0,1,0,0,0,1,32} 22:40:18 22:40:18.207 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Cluster ID: OB576afJSxuIrhF9nQXaaA 22:40:18 22:40:18.207 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:18 22:40:18.211 [data-plane-kafka-request-handler-0] INFO kafka.zk.AdminZkClient - Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) 22:40:18 22:40:18.212 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.213 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:setData cxid:0x52 zxid:0x24 txntype:-1 reqpath:n/a 22:40:18 22:40:18.213 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 22:40:18 22:40:18.213 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 82,5 replyHeader:: 82,36,-101 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,-1 response:: 22:40:18 22:40:18.214 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 62906004297 22:40:18 22:40:18.214 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62596087633 22:40:18 22:40:18.215 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 22:40:18 22:40:18.215 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 22:40:18 22:40:18.216 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 25, Digest in log and actual tree: 66028912421 22:40:18 22:40:18.216 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 22:40:18 22:40:18.216 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 83,1 replyHeader:: 83,37,0 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__consumer_offsets 22:40:18 22:40:18.222 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.222 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.222 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.222 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.222 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.222 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 66028912421 22:40:18 22:40:18.222 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66145824039 22:40:18 22:40:18.222 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 
type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 22:40:18 22:40:18.223 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.223 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 26, Digest in log and actual tree: 66567624285 22:40:18 22:40:18.223 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 22:40:18 22:40:18.223 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001ca2b0000 22:40:18 22:40:18.223 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000001ca2b0000 22:40:18 22:40:18.223 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 22:40:18 22:40:18.223 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 84,1 replyHeader:: 84,38,0 request:: '/brokers/topics/__consumer_offsets,#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a22456e6578487233665341366c65645a6c4f5732515a67222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets 22:40:18 22:40:18.224 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.224 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:18 22:40:18.224 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 22:40:18 22:40:18.224 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.224 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.224 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 
'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.225 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 85,12 replyHeader:: 85,38,0 request:: '/brokers/topics,T response:: v{'my-test-topic,'__consumer_offsets},s{6,6,1731192015151,1731192015151,0,2,0,0,0,2,38} 22:40:18 22:40:18.226 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.226 [data-plane-kafka-request-handler-0] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/__consumer_offsets with HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 
-> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 22:40:18 22:40:18.226 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 86,4 replyHeader:: 86,38,0 request:: '/brokers/topics/__consumer_offsets,T response:: 
#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a22456e6578487233665341366c65645a6c4f5732515a67222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{38,38,1731192018221,1731192018221,0,0,0,0,548,0,38} 22:40:18 22:40:18.230 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:18 22:40:18.233 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(EnexHr3fSA6ledZlOW2QZg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 22:40:18 22:40:18.233 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.234 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 
[controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.235 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from 
NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 22:40:18 22:40:18.236 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 22:40:18 22:40:18.238 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node -1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=0): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:18 22:40:18.238 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":0,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47566-2","totalTimeMs":40.089,"requestQueueTimeMs":1.058,"localTimeMs":38.681,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.135,"sendTimeMs":0.214,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.238 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192018237, latencyMs=70, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=0), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:18 22:40:18.238 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:18 22:40:18.239 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:18 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 22:40:18 22:40:18.240 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 66567624285 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 66567624285 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 67291059411 22:40:18 22:40:18.245 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68112585654 22:40:18 22:40:18.246 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 22:40:18 22:40:18.246 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.246 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 27, Digest in log and actual tree: 68112585654 22:40:18 22:40:18.246 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 22:40:18 22:40:18.247 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: 
clientPath:null serverPath:null finished:false header:: 87,14 replyHeader:: 87,39,0 request:: org.apache.zookeeper.MultiOperationRecord@47c7375 response:: org.apache.zookeeper.MultiResponse@fe4873b6 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 68112585654 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.249 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 68112585654 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 67391027354 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 71653523145 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 71653523145 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 71653523145 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72506651416 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73315732948 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73315732948 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.250 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73315732948 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75570820273 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76399757609 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76399757609 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76399757609 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74311186922 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74949811082 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74949811082 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74949811082 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76701158340 22:40:18 22:40:18.251 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79230491813 22:40:18 22:40:18.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor 
- Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 22:40:18 22:40:18.252 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 28, Digest in log and actual tree: 71653523145 22:40:18 22:40:18.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 22:40:18 22:40:18.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 22:40:18 22:40:18.252 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 29, Digest in log and actual tree: 73315732948 22:40:18 22:40:18.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 22:40:18 22:40:18.252 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79230491813 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79230491813 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 78878548214 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81603146118 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.253 
[ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81603146118 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 81603146118 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79383154721 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 82904383399 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 82904383399 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 82904383399 22:40:18 22:40:18.253 
[ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85664185789 22:40:18 22:40:18.253 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86633720285 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2a, Digest in log and actual tree: 76399757609 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2b, Digest in log and actual tree: 74949811082 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2c, Digest in log and actual tree: 79230491813 22:40:18 22:40:18.254 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 22:40:18 22:40:18.255 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 22:40:18 22:40:18.255 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.255 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2d, Digest in log and actual tree: 81603146118 22:40:18 22:40:18.255 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 22:40:18 22:40:18.255 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 22:40:18 22:40:18.255 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.255 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2e, Digest in log and actual tree: 82904383399 22:40:18 22:40:18.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 22:40:18 22:40:18.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000001ca2b0000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 22:40:18 22:40:18.256 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2f, Digest in log and actual tree: 86633720285 22:40:18 22:40:18.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 22:40:18 22:40:18.256 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.256 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.256 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 88,14 replyHeader:: 88,40,0 request:: org.apache.zookeeper.MultiOperationRecord@324db770 response:: org.apache.zookeeper.MultiResponse@2c19b7b1 22:40:18 22:40:18.257 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 89,14 replyHeader:: 89,41,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78d response:: org.apache.zookeeper.MultiResponse@2c19b7ce 22:40:18 22:40:18.257 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 90,14 replyHeader:: 90,42,0 request:: org.apache.zookeeper.MultiOperationRecord@324db773 response:: org.apache.zookeeper.MultiResponse@2c19b7b4 22:40:18 22:40:18.257 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 91,14 replyHeader:: 91,43,0 request:: org.apache.zookeeper.MultiOperationRecord@324db792 response:: org.apache.zookeeper.MultiResponse@2c19b7d3 22:40:18 22:40:18.257 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 92,14 replyHeader:: 92,44,0 request:: org.apache.zookeeper.MultiOperationRecord@324db794 response:: org.apache.zookeeper.MultiResponse@2c19b7d5 22:40:18 22:40:18.257 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 93,14 replyHeader:: 93,45,0 request:: org.apache.zookeeper.MultiOperationRecord@324db795 response:: org.apache.zookeeper.MultiResponse@2c19b7d6 22:40:18 22:40:18.258 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 94,14 replyHeader:: 94,46,0 request:: org.apache.zookeeper.MultiOperationRecord@324db752 response:: org.apache.zookeeper.MultiResponse@2c19b793 22:40:18 22:40:18.258 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 95,14 replyHeader:: 95,47,0 request:: org.apache.zookeeper.MultiOperationRecord@940352de response:: 
org.apache.zookeeper.MultiResponse@8dcf531f 22:40:18 22:40:18.256 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.258 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.258 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 86633720285 22:40:18 22:40:18.258 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.258 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.258 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 86633720285 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87356861014 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 89928020659 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 89928020659 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 
89928020659 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 89842691074 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90404042014 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.259 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90404042014 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90404042014 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88244737731 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88982253048 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88982253048 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.260 
[ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88982253048 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90232217269 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92511743569 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92511743569 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92511743569 22:40:18 22:40:18.260 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91058658935 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92195358609 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92195358609 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92195358609 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92786069730 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94809225898 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94809225898 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94809225898 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96999691405 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 97876734351 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 97876734351 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.261 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 97876734351 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 97633506719 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98283830107 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98283830107 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98283830107 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 97965199479 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100211377574 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100211377574 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100211377574 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100666871415 22:40:18 22:40:18.262 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102713153298 22:40:18 22:40:18.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 22:40:18 22:40:18.264 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 30, Digest in log and actual tree: 89928020659 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000001ca2b0000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 31, Digest in log and actual tree: 90404042014 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 32, Digest in log and actual tree: 88982253048 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 33, Digest in log and actual tree: 92511743569 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 22:40:18 22:40:18.265 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 34, Digest in log and actual tree: 92195358609 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 35, Digest in log and actual tree: 94809225898 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 36, Digest in log and actual tree: 97876734351 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000001ca2b0000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 37, Digest in log and actual tree: 98283830107 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 38, Digest in log and actual tree: 100211377574 22:40:18 22:40:18.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 22:40:18 22:40:18.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 22:40:18 22:40:18.267 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 39, Digest in log and actual tree: 102713153298 22:40:18 22:40:18.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 22:40:18 22:40:18.267 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 96,14 replyHeader:: 96,48,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76f response:: org.apache.zookeeper.MultiResponse@2c19b7b0 22:40:18 22:40:18.267 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 97,14 replyHeader:: 97,49,0 request:: org.apache.zookeeper.MultiOperationRecord@940352da response:: org.apache.zookeeper.MultiResponse@8dcf531b 22:40:18 22:40:18.268 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 98,14 replyHeader:: 98,50,0 request:: org.apache.zookeeper.MultiOperationRecord@324db775 response:: org.apache.zookeeper.MultiResponse@2c19b7b6 22:40:18 22:40:18.268 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 99,14 replyHeader:: 99,51,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dd response:: org.apache.zookeeper.MultiResponse@8dcf531e 22:40:18 22:40:18.268 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null 
finished:false header:: 100,14 replyHeader:: 100,52,0 request:: org.apache.zookeeper.MultiOperationRecord@940352df response:: org.apache.zookeeper.MultiResponse@8dcf5320 22:40:18 22:40:18.268 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 101,14 replyHeader:: 101,53,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b2 response:: org.apache.zookeeper.MultiResponse@2c19b7f3 22:40:18 22:40:18.268 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 102,14 replyHeader:: 102,54,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ad response:: org.apache.zookeeper.MultiResponse@2c19b7ee 22:40:18 22:40:18.269 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 103,14 replyHeader:: 103,55,0 request:: org.apache.zookeeper.MultiOperationRecord@324db790 response:: org.apache.zookeeper.MultiResponse@2c19b7d1 22:40:18 22:40:18.269 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 104,14 replyHeader:: 104,56,0 request:: org.apache.zookeeper.MultiOperationRecord@324db771 response:: org.apache.zookeeper.MultiResponse@2c19b7b2 22:40:18 22:40:18.269 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 105,14 replyHeader:: 105,57,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b5 response:: org.apache.zookeeper.MultiResponse@2c19b7f6 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 102713153298 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.271 [ProcessThread(sid:0 
cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 102713153298 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 100465932597 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103071404533 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103071404533 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103071404533 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103801021236 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 104942243416 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.271 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 104942243416 22:40:18 22:40:18.272 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.272 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.272 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.272 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.272 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 22:40:18 22:40:18.272 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.272 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3a, Digest in log and actual tree: 103071404533 22:40:18 22:40:18.272 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 22:40:18 22:40:18.272 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.273 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 106,14 replyHeader:: 106,58,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b3 response:: org.apache.zookeeper.MultiResponse@2c19b7f4 22:40:18 22:40:18.273 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 104942243416 22:40:18 22:40:18.273 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 104550374926 22:40:18 22:40:18.273 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106719283585 22:40:18 22:40:18.273 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.273 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.273 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 22:40:18 22:40:18.273 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.274 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3b, Digest in log and actual tree: 104942243416 22:40:18 22:40:18.274 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 22:40:18 22:40:18.274 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 107,14 replyHeader:: 107,59,0 request:: org.apache.zookeeper.MultiOperationRecord@324db755 response:: org.apache.zookeeper.MultiResponse@2c19b796 22:40:18 22:40:18.273 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.274 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.274 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106719283585 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106719283585 22:40:18 22:40:18.275 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105695494736 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106497660405 22:40:18 22:40:18.275 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.275 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.275 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3c, Digest in log and actual tree: 106719283585 22:40:18 22:40:18.275 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 22:40:18 22:40:18.276 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 108,14 replyHeader:: 108,60,0 request:: org.apache.zookeeper.MultiOperationRecord@324db776 response:: org.apache.zookeeper.MultiResponse@2c19b7b7 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106497660405 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] 
DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106497660405 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108775845978 22:40:18 22:40:18.276 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 109879723188 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 109879723188 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 109879723188 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108143330958 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111965132806 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111965132806 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111965132806 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 113043428400 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114422057681 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114422057681 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.277 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 22:40:18 22:40:18.278 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.278 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3d, Digest in log and actual tree: 106497660405 22:40:18 22:40:18.278 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 22:40:18 22:40:18.278 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 
0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 109,14 replyHeader:: 109,61,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78e response:: org.apache.zookeeper.MultiResponse@2c19b7cf 22:40:18 22:40:18.278 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.278 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.279 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.279 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114422057681 22:40:18 22:40:18.279 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114369995040 22:40:18 22:40:18.279 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117490849319 22:40:18 22:40:18.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 22:40:18 22:40:18.279 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3e, Digest in log and actual tree: 109879723188 22:40:18 22:40:18.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 22:40:18 22:40:18.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 22:40:18 22:40:18.280 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 110,14 replyHeader:: 110,62,0 request:: org.apache.zookeeper.MultiOperationRecord@324db793 response:: org.apache.zookeeper.MultiResponse@2c19b7d4 22:40:18 22:40:18.280 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3f, Digest in log and actual tree: 111965132806 22:40:18 22:40:18.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 22:40:18 22:40:18.281 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 111,14 replyHeader:: 111,63,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ae response:: org.apache.zookeeper.MultiResponse@2c19b7ef 22:40:18 22:40:18.281 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 22:40:18 22:40:18.281 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.281 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are 
matching for Zxid: 40, Digest in log and actual tree: 114422057681 22:40:18 22:40:18.281 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 22:40:18 22:40:18.282 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 112,14 replyHeader:: 112,64,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d9 response:: org.apache.zookeeper.MultiResponse@8dcf531a 22:40:18 22:40:18.282 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.282 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.282 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.283 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.283 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117490849319 22:40:18 22:40:18.283 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 22:40:18 22:40:18.283 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.283 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 41, Digest in log and actual tree: 117490849319 22:40:18 22:40:18.283 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 22:40:18 22:40:18.283 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.284 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.284 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 113,14 replyHeader:: 113,65,0 request:: org.apache.zookeeper.MultiOperationRecord@324db757 response:: org.apache.zookeeper.MultiResponse@2c19b798 22:40:18 22:40:18.284 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.284 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.284 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.284 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 117490849319 22:40:18 22:40:18.284 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 115289748748 22:40:18 22:40:18.284 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116454306564 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116454306564 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.285 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.285 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 22:40:18 22:40:18.285 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.285 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 42, Digest in log and actual tree: 116454306564 22:40:18 22:40:18.285 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 22:40:18 22:40:18.286 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 114,14 replyHeader:: 114,66,0 request:: org.apache.zookeeper.MultiOperationRecord@324db754 response:: org.apache.zookeeper.MultiResponse@2c19b795 22:40:18 22:40:18.286 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 116454306564 22:40:18 22:40:18.286 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119901962689 22:40:18 22:40:18.286 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124193183906 22:40:18 22:40:18.286 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for 
node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124193183906 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.287 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 22:40:18 22:40:18.287 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.287 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 43, Digest in log and actual tree: 124193183906 22:40:18 22:40:18.287 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 22:40:18 22:40:18.288 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 115,14 replyHeader:: 115,67,0 request:: org.apache.zookeeper.MultiOperationRecord@324db772 response:: org.apache.zookeeper.MultiResponse@2c19b7b3 22:40:18 22:40:18.287 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.288 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 124193183906 22:40:18 22:40:18.288 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120542259416 22:40:18 22:40:18.288 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124528053913 22:40:18 22:40:18.289 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.289 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.289 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.289 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.289 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124528053913 22:40:18 22:40:18.289 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - 
Checking session 0x1000001ca2b0000 22:40:18 22:40:18.289 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 22:40:18 22:40:18.289 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.289 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 44, Digest in log and actual tree: 124528053913 22:40:18 22:40:18.289 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 22:40:18 22:40:18.290 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 116,14 replyHeader:: 116,68,0 request:: org.apache.zookeeper.MultiOperationRecord@324db756 response:: org.apache.zookeeper.MultiResponse@2c19b797 22:40:18 22:40:18.289 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.290 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.290 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.290 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 124528053913 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125152605418 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 127783736040 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 127783736040 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.291 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 22:40:18 22:40:18.291 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.291 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 45, Digest in log and actual tree: 127783736040 22:40:18 22:40:18.292 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 22:40:18 22:40:18.292 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 117,14 replyHeader:: 117,69,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b4 response:: org.apache.zookeeper.MultiResponse@2c19b7f5 22:40:18 22:40:18.291 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.292 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.292 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.292 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 127783736040 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129948629387 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130602013500 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130602013500 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.293 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 22:40:18 22:40:18.293 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.293 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 46, Digest in log and actual tree: 130602013500 22:40:18 22:40:18.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 22:40:18 22:40:18.294 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 118,14 replyHeader:: 118,70,0 request:: org.apache.zookeeper.MultiOperationRecord@324db758 response:: org.apache.zookeeper.MultiResponse@2c19b799 22:40:18 22:40:18.294 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 130602013500 22:40:18 22:40:18.294 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 127916160057 22:40:18 22:40:18.294 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129067227839 22:40:18 22:40:18.294 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.294 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129067227839 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129067227839 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129285457891 22:40:18 22:40:18.295 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132451927950 22:40:18 22:40:18.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 22:40:18 22:40:18.295 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.295 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 47, Digest in log and actual tree: 129067227839 22:40:18 22:40:18.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 22:40:18 22:40:18.296 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 119,14 replyHeader:: 119,71,0 request:: org.apache.zookeeper.MultiOperationRecord@324db750 response:: org.apache.zookeeper.MultiResponse@2c19b791 22:40:18 22:40:18.296 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.296 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.296 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.296 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 22:40:18 22:40:18.297 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.297 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 48, Digest in log and actual tree: 132451927950 22:40:18 22:40:18.297 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 22:40:18 22:40:18.297 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 120,14 replyHeader:: 120,72,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d8 response:: org.apache.zookeeper.MultiResponse@8dcf5319 22:40:18 22:40:18.297 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.297 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 132451927950 22:40:18 22:40:18.297 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.297 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:18 22:40:18.297 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.298 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.298 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.298 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.298 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host 
localhost as 127.0.0.1 22:40:18 22:40:18.298 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:18 22:40:18.298 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 132451927950 22:40:18 22:40:18.298 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132365270973 22:40:18 22:40:18.298 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 132769214894 22:40:18 22:40:18.298 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:18 22:40:18.298 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:18 22:40:18.298 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:47582 on /127.0.0.1:43439 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:18 22:40:18.298 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.298 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:47582 22:40:18 22:40:18.298 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.299 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 22:40:18 22:40:18.299 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.299 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 49, Digest in log and actual tree: 132769214894 22:40:18 22:40:18.299 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 132769214894 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing 
ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 132769214894 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134907579467 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138753557975 22:40:18 22:40:18.299 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.299 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.300 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138753557975 22:40:18 22:40:18.300 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.300 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.300 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.300 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:18 22:40:18.300 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.300 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.300 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Completed connection to node 1. Fetching API versions. 
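The client messages above show the consumer resolving localhost, connecting to the broker's SASL_PLAINTEXT listener on an ephemeral port, creating a SaslClient for the PLAIN mechanism, and completing the connection before fetching API versions; further down in this log the same consumer finishes SASL authentication, negotiates API versions and requests metadata for 'my-test-topic'. A minimal, illustrative Java sketch of a consumer configured along those lines follows — the bootstrap address, group id, credentials and poll timeout are placeholders chosen for the example, not values taken from this build:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class SaslPlainConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder bootstrap address: the embedded broker in this build listens on a
            // randomly assigned port (43439 in this particular run).
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // SASL_PLAINTEXT listener with the PLAIN mechanism, matching what the log shows being negotiated.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Hypothetical credentials; the real JAAS settings come from the test's broker configuration.
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"admin\" password=\"admin-secret\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Asking for partition information triggers a METADATA request for the topic,
                // like the one issued for 'my-test-topic' later in this log.
                consumer.partitionsFor("my-test-topic");
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // A single short poll; connection setup, SASL authentication and API version
                // negotiation of the kind logged above happen under the hood during these calls.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r ->
                        System.out.printf("%s-%d@%d: %s%n", r.topic(), r.partition(), r.offset(), r.value()));
            }
        }
    }

Because the broker here is an embedded test instance, its port is assigned at runtime and the SASL credentials are whatever the test harness configures, so the placeholder values in the sketch would need to be substituted before it could connect to this particular broker.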
22:40:18 22:40:18.300 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 22:40:18 22:40:18.300 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:18 22:40:18.300 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.300 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:18 22:40:18.300 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4a, Digest in log and actual tree: 138753557975 22:40:18 22:40:18.300 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 22:40:18 22:40:18.300 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 121,14 replyHeader:: 121,73,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7af response:: org.apache.zookeeper.MultiResponse@2c19b7f0 22:40:18 22:40:18.301 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 138753557975 22:40:18 22:40:18.301 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:18 22:40:18.301 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 122,14 replyHeader:: 122,74,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dc response:: org.apache.zookeeper.MultiResponse@8dcf531d 22:40:18 22:40:18.301 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137855907484 22:40:18 22:40:18.301 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141421903729 22:40:18 22:40:18.301 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:18 22:40:18.301 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:18 22:40:18.301 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:18 22:40:18.301 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.301 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.301 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:18 22:40:18.301 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.301 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.302 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141421903729 22:40:18 22:40:18.302 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.302 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.302 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.302 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to INITIAL 22:40:18 22:40:18.302 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.302 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.302 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to INTERMEDIATE 22:40:18 22:40:18.302 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:18 22:40:18.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 22:40:18 22:40:18.303 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:18 22:40:18.303 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:18 22:40:18.303 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.303 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:18 22:40:18.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4b, Digest in log and 
actual tree: 141421903729 22:40:18 22:40:18.303 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to COMPLETE 22:40:18 22:40:18.303 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 22:40:18 22:40:18.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 22:40:18 22:40:18.303 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 22:40:18 22:40:18.303 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating API versions fetch from node 1. 22:40:18 22:40:18.303 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:18 22:40:18.303 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 123,14 replyHeader:: 123,75,0 request:: org.apache.zookeeper.MultiOperationRecord@324db753 response:: org.apache.zookeeper.MultiResponse@2c19b794 22:40:18 22:40:18.303 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 141421903729 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142115827639 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145697870480 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145697870480 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 
31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.304 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 22:40:18 22:40:18.305 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.306 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4c, Digest in log and actual tree: 145697870480 22:40:18 22:40:18.306 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 22:40:18 22:40:18.304 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.306 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.306 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 145697870480 22:40:18 22:40:18.306 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146285714113 22:40:18 22:40:18.306 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146315774849 22:40:18 22:40:18.306 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, 
maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:18 22:40:18.306 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146315774849 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146315774849 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144213421660 22:40:18 22:40:18.307 [ProcessThread(sid:0 cport:36225):] 
DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146380350430 22:40:18 22:40:18.307 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 124,14 replyHeader:: 124,76,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76e response:: org.apache.zookeeper.MultiResponse@2c19b7af 22:40:18 22:40:18.307 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 22:40:18 22:40:18.307 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKe
y":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":1.63,"requestQueueTimeMs":0.258,"localTimeMs":0.932,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.214,"sendTimeMs":0.225,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:18 22:40:18.307 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.307 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4d, Digest in log and actual tree: 146315774849 22:40:18 22:40:18.308 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146380350430 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146380350430 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147949037000 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148056669869 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.308 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.308 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x7e zxid:0x4e 
txntype:14 reqpath:n/a 22:40:18 22:40:18.308 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.308 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4e, Digest in log and actual tree: 146380350430 22:40:18 22:40:18.308 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x7e zxid:0x4e txntype:14 reqpath:n/a 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148056669869 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148056669869 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148913766198 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151050966826 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151050966826 22:40:18 22:40:18.309 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.309 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x7f zxid:0x4f txntype:14 reqpath:n/a 22:40:18 22:40:18.309 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null 
serverPath:null finished:false header:: 125,14 replyHeader:: 125,77,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d6 response:: org.apache.zookeeper.MultiResponse@8dcf5317 22:40:18 22:40:18.309 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 126,14 replyHeader:: 126,78,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b0 response:: org.apache.zookeeper.MultiResponse@2c19b7f1 22:40:18 22:40:18.310 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4f, Digest in log and actual tree: 148056669869 22:40:18 22:40:18.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x7f zxid:0x4f txntype:14 reqpath:n/a 22:40:18 22:40:18.310 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 
0], AllocateProducerIds(67): 0 [usable: 0]). 22:40:18 22:40:18.310 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.310 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=4) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:18 22:40:18.310 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 127,14 replyHeader:: 127,79,0 request:: org.apache.zookeeper.MultiOperationRecord@324db796 response:: org.apache.zookeeper.MultiResponse@2c19b7d7 22:40:18 22:40:18.310 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.311 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.311 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.311 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.311 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151050966826 22:40:18 22:40:18.311 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151904898137 22:40:18 22:40:18.311 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153917397292 22:40:18 22:40:18.311 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x80 zxid:0x50 txntype:14 reqpath:n/a 22:40:18 22:40:18.311 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.311 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 50, Digest in log and actual tree: 151050966826 22:40:18 22:40:18.311 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x80 zxid:0x50 txntype:14 reqpath:n/a 22:40:18 22:40:18.311 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 128,14 replyHeader:: 128,80,0 request:: org.apache.zookeeper.MultiOperationRecord@324db751 response:: 
org.apache.zookeeper.MultiResponse@2c19b792 22:40:18 22:40:18.312 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.312 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.312 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.312 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.312 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153917397292 22:40:18 22:40:18.312 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.312 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.312 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.312 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x81 zxid:0x51 txntype:14 reqpath:n/a 22:40:18 22:40:18.313 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.313 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 51, Digest in log and actual tree: 153917397292 22:40:18 22:40:18.313 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x81 zxid:0x51 txntype:14 reqpath:n/a 22:40:18 22:40:18.313 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=4): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:18 22:40:18.313 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.313 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":4,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":1.843,"requestQueueTimeMs":0.166,"localTimeMs":1.418,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.089,"sendTimeMs":0.169,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.313 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 129,14 replyHeader:: 129,81,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b1 response:: org.apache.zookeeper.MultiResponse@2c19b7f2 22:40:18 22:40:18.313 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.313 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 153917397292 22:40:18 22:40:18.313 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151876253457 22:40:18 22:40:18.313 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:18 22:40:18.313 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155763917233 22:40:18 22:40:18.314 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 3 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:18 22:40:18.314 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.314 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.314 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.314 
[ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.314 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.314 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=5) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:18 22:40:18.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x82 zxid:0x52 txntype:14 reqpath:n/a 22:40:18 22:40:18.314 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155763917233 22:40:18 22:40:18.315 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.315 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 52, Digest in log and actual tree: 155763917233 22:40:18 22:40:18.315 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x82 zxid:0x52 txntype:14 reqpath:n/a 22:40:18 22:40:18.317 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.317 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 130,14 replyHeader:: 130,82,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d7 response:: org.apache.zookeeper.MultiResponse@8dcf5318 22:40:18 22:40:18.317 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.317 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.317 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.317 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.317 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 155763917233 22:40:18 22:40:18.317 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155822436970 22:40:18 22:40:18.318 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 158851249828 22:40:18 22:40:18.318 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.318 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.318 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.318 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.318 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 158851249828 22:40:18 22:40:18.318 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x83 zxid:0x53 txntype:14 reqpath:n/a 22:40:18 22:40:18.319 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.319 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 53, Digest in log and actual tree: 158851249828 22:40:18 22:40:18.319 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x83 zxid:0x53 txntype:14 reqpath:n/a 22:40:18 22:40:18.319 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 131,14 replyHeader:: 131,83,0 request:: org.apache.zookeeper.MultiOperationRecord@940352db response:: org.apache.zookeeper.MultiResponse@8dcf531c 22:40:18 22:40:18.318 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.319 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.319 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 158851249828 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 158454671102 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159898840430 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159898840430 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] 
DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.320 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159898840430 22:40:18 22:40:18.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x84 zxid:0x54 txntype:14 reqpath:n/a 22:40:18 22:40:18.321 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 54, Digest in log and actual tree: 159898840430 22:40:18 22:40:18.321 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x84 zxid:0x54 txntype:14 reqpath:n/a 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159550288543 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163181494821 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 163181494821 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest 
got from outstandingChanges is: 163181494821 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 165254719432 22:40:18 22:40:18.321 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167358948938 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.322 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x85 zxid:0x55 txntype:14 reqpath:n/a 22:40:18 22:40:18.322 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.322 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 55, Digest in log and actual tree: 163181494821 22:40:18 22:40:18.322 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x85 zxid:0x55 txntype:14 reqpath:n/a 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167358948938 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167358948938 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167098679994 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170250143814 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.322 
[ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.322 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170250143814 22:40:18 22:40:18.322 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x86 zxid:0x56 txntype:14 reqpath:n/a 22:40:18 22:40:18.322 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.322 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 56, Digest in log and actual tree: 167358948938 22:40:18 22:40:18.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x86 zxid:0x56 txntype:14 reqpath:n/a 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170250143814 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 171139212578 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174934200018 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174934200018 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x87 zxid:0x57 txntype:14 reqpath:n/a 22:40:18 
22:40:18.323 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 57, Digest in log and actual tree: 170250143814 22:40:18 22:40:18.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x87 zxid:0x57 txntype:14 reqpath:n/a 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.323 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.324 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174934200018 22:40:18 22:40:18.324 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174181995395 22:40:18 22:40:18.324 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174390271799 22:40:18 22:40:18.324 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x88 zxid:0x58 txntype:14 reqpath:n/a 22:40:18 22:40:18.324 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 58, Digest in log and actual tree: 174934200018 22:40:18 22:40:18.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x88 zxid:0x58 txntype:14 reqpath:n/a 22:40:18 22:40:18.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x89 zxid:0x59 txntype:14 reqpath:n/a 22:40:18 22:40:18.325 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 59, Digest in log and actual tree: 174390271799 22:40:18 22:40:18.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x89 zxid:0x59 txntype:14 reqpath:n/a 22:40:18 22:40:18.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.327 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 132,14 replyHeader:: 132,84,0 request:: 
org.apache.zookeeper.MultiOperationRecord@324db774 response:: org.apache.zookeeper.MultiResponse@2c19b7b5 22:40:18 22:40:18.327 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 133,14 replyHeader:: 133,85,0 request:: org.apache.zookeeper.MultiOperationRecord@324db777 response:: org.apache.zookeeper.MultiResponse@2c19b7b8 22:40:18 22:40:18.327 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 134,14 replyHeader:: 134,86,0 request:: org.apache.zookeeper.MultiOperationRecord@324db791 response:: org.apache.zookeeper.MultiResponse@2c19b7d2 22:40:18 22:40:18.328 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 135,14 replyHeader:: 135,87,0 request:: org.apache.zookeeper.MultiOperationRecord@324db74f response:: org.apache.zookeeper.MultiResponse@2c19b790 22:40:18 22:40:18.328 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 136,14 replyHeader:: 136,88,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78f response:: org.apache.zookeeper.MultiResponse@2c19b7d0 22:40:18 22:40:18.328 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 137,14 replyHeader:: 137,89,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ac response:: org.apache.zookeeper.MultiResponse@2c19b7ed 22:40:18 22:40:18.328 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 138,3 replyHeader:: 138,89,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:18 22:40:18.335 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.335 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 139,3 replyHeader:: 139,89,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:18 22:40:18.337 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 
22:40:18 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 22:40:18 22:40:18.337 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:18 22:40:18.338 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=5): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:18 22:40:18.338 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192018338, latencyMs=24, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=5), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:18 22:40:18.339 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:18 22:40:18.339 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:18 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
22:40:18 22:40:18.339 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":5,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":23.545,"requestQueueTimeMs":0.118,"localTimeMs":23.134,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.088,"sendTimeMs":0.203,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.346 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.347 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.347 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.347 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.347 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 174390271799 22:40:18 22:40:18.347 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.347 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.347 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.347 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.348 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.348 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 174390271799 22:40:18 22:40:18.348 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170342989743 22:40:18 22:40:18.348 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173614962196 22:40:18 22:40:18.348 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.348 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.348 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.348 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.349 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173614962196 22:40:18 22:40:18.349 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.349 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.349 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.349 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.349 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.350 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173614962196 22:40:18 22:40:18.350 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176077108620 22:40:18 22:40:18.350 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 22:40:18 22:40:18.351 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.351 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5a, Digest in log and actual tree: 173614962196 22:40:18 22:40:18.351 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178761228903 22:40:18 22:40:18.351 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 22:40:18 22:40:18.351 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.351 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.351 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.351 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.351 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178761228903 22:40:18 22:40:18.352 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.352 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.352 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 140,14 replyHeader:: 140,90,0 request:: org.apache.zookeeper.MultiOperationRecord@d54f07a9 response:: 
org.apache.zookeeper.MultiResponse@ef9185b3 22:40:18 22:40:18.352 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178761228903 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180851044414 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182815736113 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182815736113 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.353 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.353 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 22:40:18 22:40:18.354 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.354 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.354 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.354 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.354 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5b, Digest in log and actual tree: 178761228903 22:40:18 22:40:18.354 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182815736113 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 
185306101544 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186986108034 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186986108034 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.355 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 141,14 replyHeader:: 141,91,0 request:: org.apache.zookeeper.MultiOperationRecord@d363be06 response:: org.apache.zookeeper.MultiResponse@eda63c10 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.355 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186986108034 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186134623242 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189018902397 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189018902397 22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 
22:40:18 22:40:18.356 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.357 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.357 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.357 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 22:40:18 22:40:18.357 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.357 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.357 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5c, Digest in log and actual tree: 182815736113 22:40:18 22:40:18.358 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 22:40:18 22:40:18.358 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189018902397 22:40:18 22:40:18.358 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187647323909 22:40:18 22:40:18.358 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189181690956 22:40:18 22:40:18.358 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 22:40:18 22:40:18.358 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.358 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 142,14 replyHeader:: 142,92,0 request:: org.apache.zookeeper.MultiOperationRecord@7401b96c response:: org.apache.zookeeper.MultiResponse@8e443776 22:40:18 22:40:18.359 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.359 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5d, Digest in log and actual tree: 186986108034 22:40:18 22:40:18.359 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189181690956 22:40:18 
22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189181690956 22:40:18 22:40:18.359 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 143,14 replyHeader:: 143,93,0 request:: org.apache.zookeeper.MultiOperationRecord@dbe2e64b response:: org.apache.zookeeper.MultiResponse@f6256455 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186592896462 22:40:18 22:40:18.359 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 190882625313 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 190882625313 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 190882625313 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193666702767 22:40:18 
22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195330752684 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195330752684 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195330752684 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195712059902 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196065921467 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.360 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196065921467 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196065921467 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195903967629 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199675901362 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199675901362 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199675901362 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200869672693 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204686271561 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204686271561 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204686271561 22:40:18 22:40:18.361 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206151619966 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206232887819 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.362 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.362 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.362 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5e, Digest in log and actual tree: 189018902397 22:40:18 22:40:18.362 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206232887819 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.362 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.362 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 22:40:18 22:40:18.363 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5f, Digest in log and actual tree: 189181690956 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206232887819 22:40:18 22:40:18.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207067212249 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207323869883 22:40:18 22:40:18.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.363 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 60, Digest in log and actual tree: 190882625313 22:40:18 22:40:18.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207323869883 22:40:18 22:40:18.363 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.363 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.364 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 144,14 replyHeader:: 144,94,0 request:: org.apache.zookeeper.MultiOperationRecord@45af5ccd response:: org.apache.zookeeper.MultiResponse@5ff1dad7 22:40:18 22:40:18.364 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn 
- Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 145,14 replyHeader:: 145,95,0 request:: org.apache.zookeeper.MultiOperationRecord@7a95980e response:: org.apache.zookeeper.MultiResponse@94d81618 22:40:18 22:40:18.364 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 146,14 replyHeader:: 146,96,0 request:: org.apache.zookeeper.MultiOperationRecord@a254160b response:: org.apache.zookeeper.MultiResponse@bc969415 22:40:18 22:40:18.365 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.365 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 61, Digest in log and actual tree: 195330752684 22:40:18 22:40:18.365 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 22:40:18 22:40:18.365 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.365 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.365 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207323869883 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206612298029 22:40:18 22:40:18.365 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 147,14 replyHeader:: 147,97,0 request:: org.apache.zookeeper.MultiOperationRecord@7c11d897 response:: org.apache.zookeeper.MultiResponse@965456a1 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209115618050 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209115618050 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 209115618050 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 211191938353 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215469135667 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215469135667 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.366 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215469135667 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217545963782 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219474764410 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client 
credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219474764410 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219474764410 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 220237617704 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 220324773961 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 22:40:18 22:40:18.367 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 220324773961 22:40:18 22:40:18.369 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.370 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.370 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.370 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 62, Digest in log and actual tree: 196065921467 22:40:18 22:40:18.370 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 22:40:18 22:40:18.370 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.370 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 
22:40:18.370 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.370 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 220324773961 22:40:18 22:40:18.370 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 219608877335 22:40:18 22:40:18.370 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223030348349 22:40:18 22:40:18.371 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 148,14 replyHeader:: 148,98,0 request:: org.apache.zookeeper.MultiOperationRecord@a068cc68 response:: org.apache.zookeeper.MultiResponse@baab4a72 22:40:18 22:40:18.371 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 22:40:18 22:40:18.372 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 63, Digest in log and actual tree: 199675901362 22:40:18 22:40:18.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223030348349 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223030348349 22:40:18 22:40:18.372 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: 
clientPath:null serverPath:null finished:false header:: 149,14 replyHeader:: 149,99,0 request:: org.apache.zookeeper.MultiOperationRecord@a878eb93 response:: org.apache.zookeeper.MultiResponse@c2bb699d 22:40:18 22:40:18.372 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 223707236618 22:40:18 22:40:18.373 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227608653778 22:40:18 22:40:18.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 22:40:18 22:40:18.375 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 64, Digest in log and actual tree: 204686271561 22:40:18 22:40:18.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 22:40:18 22:40:18.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 22:40:18 22:40:18.376 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 65, Digest in log and actual tree: 206232887819 22:40:18 22:40:18.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 22:40:18 22:40:18.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 22:40:18 22:40:18.376 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 66, Digest in log and actual tree: 207323869883 22:40:18 22:40:18.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 22:40:18 22:40:18.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 22:40:18 22:40:18.377 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 150,14 replyHeader:: 150,100,0 request:: org.apache.zookeeper.MultiOperationRecord@ddce2fee response:: org.apache.zookeeper.MultiResponse@f810adf8 22:40:18 22:40:18.379 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 151,14 replyHeader:: 151,101,0 request:: org.apache.zookeeper.MultiOperationRecord@472b9d56 response:: org.apache.zookeeper.MultiResponse@616e1b60 22:40:18 22:40:18.379 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 152,14 replyHeader:: 
152,102,0 request:: org.apache.zookeeper.MultiOperationRecord@b0f813d8 response:: org.apache.zookeeper.MultiResponse@cb3a91e2 22:40:18 22:40:18.380 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 67, Digest in log and actual tree: 209115618050 22:40:18 22:40:18.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 22:40:18 22:40:18.380 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.380 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.380 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.380 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227608653778 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227608653778 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226952734999 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227929190398 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.381 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227929190398 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000001ca2b0000 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 227929190398 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228847105804 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231615640546 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231615640546 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231615640546 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231239969428 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232877172419 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.382 
[ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232877172419 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.382 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.383 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.383 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.383 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.383 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232877172419 22:40:18 22:40:18.383 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234587258173 22:40:18 22:40:18.383 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236342908684 22:40:18 22:40:18.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 22:40:18 22:40:18.392 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 153,14 replyHeader:: 153,103,0 request:: org.apache.zookeeper.MultiOperationRecord@78aa4e6b response:: org.apache.zookeeper.MultiResponse@92eccc75 22:40:18 22:40:18.392 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.393 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 68, Digest in log and actual tree: 215469135667 22:40:18 22:40:18.393 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 22:40:18 22:40:18.393 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 154,14 replyHeader:: 154,104,0 request:: org.apache.zookeeper.MultiOperationRecord@702b2626 response:: org.apache.zookeeper.MultiResponse@8a6da430 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 
22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236342908684 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.394 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236342908684 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232617473418 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234502403278 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234502403278 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234502403278 22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 237207920841 
22:40:18 22:40:18.395 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238853952842 22:40:18 22:40:18.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 22:40:18 22:40:18.400 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 69, Digest in log and actual tree: 219474764410 22:40:18 22:40:18.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 22:40:18 22:40:18.401 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 22:40:18 22:40:18.401 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 155,14 replyHeader:: 155,105,0 request:: org.apache.zookeeper.MultiOperationRecord@72166fc9 response:: org.apache.zookeeper.MultiResponse@8c58edd3 22:40:18 22:40:18.401 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.401 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6a, Digest in log and actual tree: 220324773961 22:40:18 22:40:18.401 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 22:40:18 22:40:18.401 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 22:40:18 22:40:18.402 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 156,14 replyHeader:: 156,106,0 request:: org.apache.zookeeper.MultiOperationRecord@a3542ea response:: org.apache.zookeeper.MultiResponse@2477c0f4 22:40:18 22:40:18.402 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.402 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6b, Digest in log and actual tree: 223030348349 22:40:18 22:40:18.402 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 22:40:18 22:40:18.402 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.402 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.402 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x9e zxid:0x6c txntype:14 reqpath:n/a 22:40:18 22:40:18.402 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.402 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 
'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.403 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 157,14 replyHeader:: 157,107,0 request:: org.apache.zookeeper.MultiOperationRecord@175d002e response:: org.apache.zookeeper.MultiResponse@319f7e38 22:40:18 22:40:18.403 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.403 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6c, Digest in log and actual tree: 227608653778 22:40:18 22:40:18.403 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x9e zxid:0x6c txntype:14 reqpath:n/a 22:40:18 22:40:18.403 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238853952842 22:40:18 22:40:18.403 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.403 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.403 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.403 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.403 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0x9f zxid:0x6d txntype:14 reqpath:n/a 22:40:18 22:40:18.403 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.403 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 158,14 replyHeader:: 158,108,0 request:: org.apache.zookeeper.MultiOperationRecord@ad9089ac response:: org.apache.zookeeper.MultiResponse@c7d307b6 22:40:18 22:40:18.404 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6d, Digest in log and actual tree: 227929190398 22:40:18 22:40:18.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0x9f zxid:0x6d txntype:14 reqpath:n/a 22:40:18 22:40:18.404 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238853952842 22:40:18 22:40:18.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa0 zxid:0x6e txntype:14 reqpath:n/a 22:40:18 22:40:18.404 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236190761341 22:40:18 22:40:18.404 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 159,14 replyHeader:: 159,109,0 request:: 
org.apache.zookeeper.MultiOperationRecord@4106c7ce response:: org.apache.zookeeper.MultiResponse@5b4945d8 22:40:18 22:40:18.405 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.405 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6e, Digest in log and actual tree: 231615640546 22:40:18 22:40:18.405 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa0 zxid:0x6e txntype:14 reqpath:n/a 22:40:18 22:40:18.405 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240100257837 22:40:18 22:40:18.405 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.405 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa1 zxid:0x6f txntype:14 reqpath:n/a 22:40:18 22:40:18.405 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.405 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.405 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.406 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.406 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 160,14 replyHeader:: 160,110,0 request:: org.apache.zookeeper.MultiOperationRecord@12b46b2f response:: org.apache.zookeeper.MultiResponse@2cf6e939 22:40:18 22:40:18.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6f, Digest in log and actual tree: 232877172419 22:40:18 22:40:18.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa1 zxid:0x6f txntype:14 reqpath:n/a 22:40:18 22:40:18.406 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240100257837 22:40:18 22:40:18.406 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.406 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa2 zxid:0x70 txntype:14 reqpath:n/a 22:40:18 22:40:18.406 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.406 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.406 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.406 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null 
finished:false header:: 161,14 replyHeader:: 161,111,0 request:: org.apache.zookeeper.MultiOperationRecord@849f947 response:: org.apache.zookeeper.MultiResponse@228c7751 22:40:18 22:40:18.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 70, Digest in log and actual tree: 236342908684 22:40:18 22:40:18.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa2 zxid:0x70 txntype:14 reqpath:n/a 22:40:18 22:40:18.406 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240100257837 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241258951087 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241337487922 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241337487922 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241337487922 22:40:18 22:40:18.407 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 162,14 replyHeader:: 162,112,0 request:: org.apache.zookeeper.MultiOperationRecord@10c9218c response:: org.apache.zookeeper.MultiResponse@2b0b9f96 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 237865635764 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241226544014 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.407 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241226544014 22:40:18 22:40:18.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa3 zxid:0x71 txntype:14 reqpath:n/a 22:40:18 22:40:18.408 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.408 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.408 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 71, Digest in log and actual tree: 234502403278 22:40:18 22:40:18.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa3 zxid:0x71 txntype:14 reqpath:n/a 22:40:18 22:40:18.408 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.408 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa4 zxid:0x72 txntype:14 reqpath:n/a 22:40:18 22:40:18.408 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.409 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.409 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 72, Digest in log and actual tree: 238853952842 22:40:18 22:40:18.409 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa4 zxid:0x72 txntype:14 reqpath:n/a 22:40:18 22:40:18.409 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241226544014 22:40:18 22:40:18.409 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa5 zxid:0x73 txntype:14 reqpath:n/a 22:40:18 22:40:18.409 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238529544137 22:40:18 22:40:18.410 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 
22:40:18 22:40:18.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 73, Digest in log and actual tree: 240100257837 22:40:18 22:40:18.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa5 zxid:0x73 txntype:14 reqpath:n/a 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241060325761 22:40:18 22:40:18.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa6 zxid:0x74 txntype:14 reqpath:n/a 22:40:18 22:40:18.410 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 74, Digest in log and actual tree: 241337487922 22:40:18 22:40:18.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa6 zxid:0x74 txntype:14 reqpath:n/a 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241060325761 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241060325761 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 239462094150 22:40:18 22:40:18.410 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242925199063 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] 
DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242925199063 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa7 zxid:0x75 txntype:14 reqpath:n/a 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242925199063 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243165674355 22:40:18 22:40:18.411 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 75, Digest in log and actual tree: 241226544014 22:40:18 22:40:18.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa7 zxid:0x75 txntype:14 reqpath:n/a 22:40:18 22:40:18.411 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243189542535 22:40:18 22:40:18.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa8 zxid:0x76 txntype:14 reqpath:n/a 22:40:18 22:40:18.411 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 163,14 replyHeader:: 163,113,0 request:: org.apache.zookeeper.MultiOperationRecord@a5116167 response:: org.apache.zookeeper.MultiResponse@bf53df71 22:40:18 22:40:18.411 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.411 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 164,14 replyHeader:: 164,114,0 request:: org.apache.zookeeper.MultiOperationRecord@7392b052 response:: org.apache.zookeeper.MultiResponse@8dd52e5c 22:40:18 22:40:18.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 76, Digest in log and actual tree: 241060325761 22:40:18 22:40:18.412 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa8 zxid:0x76 txntype:14 reqpath:n/a 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.412 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 165,14 replyHeader:: 165,115,0 request:: org.apache.zookeeper.MultiOperationRecord@aad33e50 response:: org.apache.zookeeper.MultiResponse@c515bc5a 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243189542535 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.412 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 166,14 replyHeader:: 166,116,0 request:: org.apache.zookeeper.MultiOperationRecord@c208c8d response:: org.apache.zookeeper.MultiResponse@26630a97 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243189542535 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243501044091 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243846024044 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 
22:40:18 ] 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243846024044 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.412 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.412 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 167,14 replyHeader:: 167,117,0 request:: org.apache.zookeeper.MultiOperationRecord@3f1b7e2b response:: org.apache.zookeeper.MultiResponse@595dfc35 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243846024044 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241182832471 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245211337903 22:40:18 22:40:18.413 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 168,14 replyHeader:: 168,118,0 request:: org.apache.zookeeper.MultiOperationRecord@75ed030f response:: org.apache.zookeeper.MultiResponse@902f8119 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245211337903 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 
22:40:18 ]
22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
22:40:18 , 'ip,'127.0.0.1
22:40:18 ]
22:40:18 22:40:18.413 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xa9 zxid:0x77 txntype:14 reqpath:n/a
22:40:18 22:40:18.413 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null)
22:40:18 22:40:18.413 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
22:40:18 22:40:18.413 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=6) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
22:40:18 22:40:18.413 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 77, Digest in log and actual tree: 242925199063
22:40:18 22:40:18.413 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xa9 zxid:0x77 txntype:14 reqpath:n/a
22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245211337903
22:40:18 22:40:18.413 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xaa zxid:0x78 txntype:14 reqpath:n/a
22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247916464266
22:40:18 22:40:18.413 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249412178243
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
22:40:18 ]
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
22:40:18 , 'ip,'127.0.0.1
22:40:18 ]
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249412178243
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
22:40:18
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
22:40:18 ]
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
22:40:18 , 'ip,'127.0.0.1
22:40:18 ]
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249412178243
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246273774277
22:40:18 22:40:18.414 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 169,14 replyHeader:: 169,119,0 request:: org.apache.zookeeper.MultiOperationRecord@e276c4ed response:: org.apache.zookeeper.MultiResponse@fcb942f7
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249824466137
22:40:18 22:40:18.414 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000
22:40:18 22:40:18.414 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
22:40:18 22:40:18.414 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 78, Digest in log and actual tree: 243189542535
22:40:18 22:40:18.414 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xaa zxid:0x78 txntype:14 reqpath:n/a
22:40:18 22:40:18.414 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 170,14 replyHeader:: 170,120,0 request:: org.apache.zookeeper.MultiOperationRecord@dfb97991 response:: org.apache.zookeeper.MultiResponse@f9fbf79b
22:40:18 22:40:18.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xab zxid:0x79 txntype:14 reqpath:n/a
22:40:18 22:40:18.415 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
22:40:18 22:40:18.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 79, Digest in log and actual tree: 243846024044
22:40:18 22:40:18.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xab zxid:0x79 txntype:14 reqpath:n/a
22:40:18 22:40:18.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xac zxid:0x7a txntype:14 reqpath:n/a
22:40:18 22:40:18.415 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
22:40:18 22:40:18.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7a, Digest in log and actual tree: 245211337903
22:40:18 22:40:18.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xac zxid:0x7a txntype:14 reqpath:n/a
22:40:18 22:40:18.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xad zxid:0x7b txntype:14 reqpath:n/a
22:40:18 22:40:18.416 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
22:40:18 22:40:18.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7b, Digest in log and actual tree: 249412178243
22:40:18 22:40:18.415 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 171,14 replyHeader:: 171,121,0 request:: org.apache.zookeeper.MultiOperationRecord@38879f89 response:: org.apache.zookeeper.MultiResponse@52ca1d93
22:40:18 22:40:18.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xad zxid:0x7b txntype:14 reqpath:n/a
22:40:18 22:40:18.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xae zxid:0x7c txntype:14 reqpath:n/a
22:40:18 22:40:18.416 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 172,14 replyHeader:: 172,122,0 request:: org.apache.zookeeper.MultiOperationRecord@3eac7511 response:: org.apache.zookeeper.MultiResponse@58eef31b
22:40:18 22:40:18.416 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
22:40:18 22:40:18.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7c, Digest in log and actual tree: 249824466137
22:40:18 22:40:18.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xae zxid:0x7c txntype:14 reqpath:n/a
22:40:18 22:40:18.416 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 173,14 replyHeader:: 173,123,0 request:: org.apache.zookeeper.MultiOperationRecord@d9f79ca8 response:: org.apache.zookeeper.MultiResponse@f43a1ab2
22:40:18 22:40:18.417 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 174,14 replyHeader:: 174,124,0 request:: org.apache.zookeeper.MultiOperationRecord@12456215 response:: org.apache.zookeeper.MultiResponse@2c87e01f
22:40:18 22:40:18.417 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=6): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648)
22:40:18 22:40:18.417 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata
22:40:18 22:40:18.417 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 4 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)}
22:40:18 22:40:18.417 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":6,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":2.206,"requestQueueTimeMs":0.363,"localTimeMs":1.48,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.086,"sendTimeMs":0.276,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
22:40:18 22:40:18.417 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null)
22:40:18 22:40:18.417 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
22:40:18 22:40:18.417 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
22:40:18 ]
22:40:18 22:40:18.417 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
22:40:18 , 'ip,'127.0.0.1
22:40:18 ]
22:40:18 22:40:18.418 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=7) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group])
22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 249824466137
22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 249824466137 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246721713755 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249031696615 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249031696615 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249031696615 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247408571634 22:40:18 22:40:18.418 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 250849075396 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.419 [ProcessThread(sid:0 
cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 250849075396 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 250849075396 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 253512389889 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257109534130 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257109534130 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xaf zxid:0x7d txntype:14 reqpath:n/a 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.419 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.420 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.420 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7d, Digest in log and actual tree: 249031696615 22:40:18 22:40:18.420 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xaf zxid:0x7d txntype:14 reqpath:n/a 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257109534130 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258169737878 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258230202484 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258230202484 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258230202484 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259315377808 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259898250782 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259898250782 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.420 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259898250782 22:40:18 22:40:18.421 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261768193430 22:40:18 22:40:18.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb0 zxid:0x7e txntype:14 reqpath:n/a 22:40:18 22:40:18.421 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 262824089395 22:40:18 22:40:18.421 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 175,14 replyHeader:: 175,125,0 request:: org.apache.zookeeper.MultiOperationRecord@d73a514c response:: org.apache.zookeeper.MultiResponse@f17ccf56 22:40:18 22:40:18.421 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7e, Digest in log and actual tree: 250849075396 22:40:18 22:40:18.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb0 zxid:0x7e txntype:14 reqpath:n/a 22:40:18 22:40:18.421 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.421 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.421 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb1 zxid:0x7f txntype:14 reqpath:n/a 22:40:18 22:40:18.421 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.421 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie 
- brokers 22:40:18 22:40:18.421 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 176,14 replyHeader:: 176,126,0 request:: org.apache.zookeeper.MultiOperationRecord@6b829127 response:: org.apache.zookeeper.MultiResponse@85c50f31 22:40:18 22:40:18.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7f, Digest in log and actual tree: 257109534130 22:40:18 22:40:18.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb1 zxid:0x7f txntype:14 reqpath:n/a 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 262824089395 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb2 zxid:0x80 txntype:14 reqpath:n/a 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.422 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 80, Digest in log and actual tree: 258230202484 22:40:18 22:40:18.422 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb2 zxid:0x80 txntype:14 reqpath:n/a 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 262824089395 22:40:18 22:40:18.422 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 177,14 replyHeader:: 177,127,0 request:: org.apache.zookeeper.MultiOperationRecord@d4dffe8f response:: org.apache.zookeeper.MultiResponse@ef227c99 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264675559083 22:40:18 22:40:18.422 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266955551309 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.423 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 178,14 replyHeader:: 178,128,0 request:: org.apache.zookeeper.MultiOperationRecord@eddd7e9 response:: org.apache.zookeeper.MultiResponse@292055f3 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266955551309 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.423 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb3 zxid:0x81 txntype:14 reqpath:n/a 22:40:18 22:40:18.423 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.423 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 81, Digest in log and actual tree: 259898250782 22:40:18 22:40:18.423 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb3 zxid:0x81 txntype:14 reqpath:n/a 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266955551309 22:40:18 22:40:18.423 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb4 zxid:0x82 txntype:14 reqpath:n/a 22:40:18 22:40:18.423 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 263475876916 22:40:18 22:40:18.423 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 179,14 replyHeader:: 179,129,0 request:: org.apache.zookeeper.MultiOperationRecord@af7bd34f response:: org.apache.zookeeper.MultiResponse@c9be5159 22:40:18 22:40:18.424 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 82, Digest in log and actual tree: 262824089395 22:40:18 22:40:18.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb4 zxid:0x82 txntype:14 reqpath:n/a 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267575614932 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267575614932 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267575614932 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 265811409847 22:40:18 22:40:18.424 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268744018318 22:40:18 22:40:18.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb5 zxid:0x83 txntype:14 reqpath:n/a 22:40:18 22:40:18.444 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.424 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 180,14 replyHeader:: 180,130,0 request:: org.apache.zookeeper.MultiOperationRecord@6d6ddaca response:: org.apache.zookeeper.MultiResponse@87b058d4 22:40:18 22:40:18.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 83, Digest in log and actual tree: 266955551309 22:40:18 22:40:18.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb5 zxid:0x83 txntype:14 reqpath:n/a 22:40:18 22:40:18.444 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.445 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb6 zxid:0x84 txntype:14 reqpath:n/a 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.445 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.445 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 84, Digest in log and actual tree: 267575614932 22:40:18 22:40:18.445 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb6 zxid:0x84 txntype:14 reqpath:n/a 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268744018318 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268744018318 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270650014214 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 271018998140 22:40:18 22:40:18.445 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 181,14 replyHeader:: 181,131,0 request:: org.apache.zookeeper.MultiOperationRecord@43c4132a response:: org.apache.zookeeper.MultiResponse@5e069134 22:40:18 22:40:18.445 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 271018998140 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.446 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb7 zxid:0x85 txntype:14 reqpath:n/a 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.446 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.446 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 85, Digest in log and actual tree: 268744018318 22:40:18 22:40:18.446 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb7 zxid:0x85 txntype:14 reqpath:n/a 22:40:18 22:40:18.446 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 182,14 replyHeader:: 182,132,0 request:: org.apache.zookeeper.MultiOperationRecord@9c639d0 response:: org.apache.zookeeper.MultiResponse@2408b7da 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.446 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.447 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 271018998140 22:40:18 22:40:18.447 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268626487572 22:40:18 22:40:18.447 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 183,14 replyHeader:: 183,133,0 request:: org.apache.zookeeper.MultiOperationRecord@dd5f26d4 response:: org.apache.zookeeper.MultiResponse@f7a1a4de 22:40:18 22:40:18.447 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb8 zxid:0x86 txntype:14 reqpath:n/a 22:40:18 22:40:18.447 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270811516209 22:40:18 22:40:18.447 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.447 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.447 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 86, Digest in log and actual tree: 271018998140 22:40:18 22:40:18.447 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb8 zxid:0x86 txntype:14 reqpath:n/a 22:40:18 22:40:18.447 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.447 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.447 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.447 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270811516209 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270811516209 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270286783626 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 271283053822 22:40:18 22:40:18.448 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 184,14 replyHeader:: 184,134,0 request:: org.apache.zookeeper.MultiOperationRecord@a8e7f4ad response:: org.apache.zookeeper.MultiResponse@c32a72b7 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.448 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xb9 zxid:0x87 txntype:14 reqpath:n/a 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.448 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.448 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.448 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 87, Digest in log and actual tree: 270811516209 22:40:18 22:40:18.448 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xb9 zxid:0x87 txntype:14 reqpath:n/a 22:40:18 22:40:18.448 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xba zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.449 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xba zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.449 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 185,14 replyHeader:: 185,135,0 request:: org.apache.zookeeper.MultiOperationRecord@479aa670 response:: org.apache.zookeeper.MultiResponse@61dd247a 22:40:18 22:40:18.449 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 271283053822 22:40:18 22:40:18.449 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.449 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 186,3 replyHeader:: 186,135,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:18 22:40:18.449 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xbb zxid:0x88 txntype:14 reqpath:n/a 22:40:18 22:40:18.449 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.449 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 88, Digest in log and actual tree: 271283053822 22:40:18 22:40:18.450 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xbb zxid:0x88 txntype:14 reqpath:n/a 22:40:18 22:40:18.449 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.450 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 187,14 replyHeader:: 187,136,0 request:: org.apache.zookeeper.MultiOperationRecord@a6fcab0a response:: org.apache.zookeeper.MultiResponse@c13f2914 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 271283053822 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270691774023 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest 
got from outstandingChanges is: 272695369887 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272695369887 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.451 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.452 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272695369887 22:40:18 22:40:18.452 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270838796311 22:40:18 22:40:18.452 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 273519572880 22:40:18 22:40:18.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xbc zxid:0x89 txntype:14 reqpath:n/a 22:40:18 22:40:18.452 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 89, Digest in log and actual tree: 272695369887 22:40:18 22:40:18.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xbc zxid:0x89 txntype:14 reqpath:n/a 22:40:18 22:40:18.452 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 188,14 replyHeader:: 188,137,0 request:: org.apache.zookeeper.MultiOperationRecord@3a16448 response:: org.apache.zookeeper.MultiResponse@1de3e252 22:40:18 22:40:18.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xbd zxid:0x8a txntype:14 reqpath:n/a 22:40:18 22:40:18.453 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8a, Digest in log and actual tree: 273519572880 
22:40:18 22:40:18.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xbd zxid:0x8a txntype:14 reqpath:n/a 22:40:18 22:40:18.453 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 189,14 replyHeader:: 189,138,0 request:: org.apache.zookeeper.MultiOperationRecord@3d303488 response:: org.apache.zookeeper.MultiResponse@5772b292 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 273519572880 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 22:40:18 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 273519572880 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 275959031560 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 278819852263 22:40:18 22:40:18.454 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:multi cxid:0xbe zxid:0x8b txntype:14 reqpath:n/a 22:40:18 22:40:18.455 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:18 22:40:18.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8b, Digest in log and actual tree: 278819852263 22:40:18 22:40:18.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:multi cxid:0xbe zxid:0x8b txntype:14 reqpath:n/a 22:40:18 22:40:18.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xbf 
zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xbf zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.456 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 190,14 replyHeader:: 190,139,0 request:: org.apache.zookeeper.MultiOperationRecord@3b44eae5 response:: org.apache.zookeeper.MultiResponse@558768ef 22:40:18 22:40:18.456 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 191,3 replyHeader:: 191,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:18 22:40:18.456 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 22:40:18 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 22:40:18 22:40:18.457 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:18 22:40:18.459 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":7,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":39.453,"requestQueueTimeMs":0.104,"localTimeMs":38.703,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.197,"sendTimeMs":0.447,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.459 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=7): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:18 22:40:18.460 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, 
groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192018459, latencyMs=42, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=7), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:18 22:40:18.460 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:18 22:40:18.460 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:18 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.473 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.474 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 
[controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.475 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.476 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.476 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.476 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.476 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.476 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 22:40:18 22:40:18.476 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions 22:40:18 22:40:18.478 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='__consumer_offsets', topicId=EnexHr3fSA6ledZlOW2QZg, partitionStates=[LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=43439)]) 22:40:18 22:40:18.481 [controller-event-thread] 
INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions 22:40:18 22:40:18.482 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions 22:40:18 22:40:18.484 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 22:40:18 22:40:18.517 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.517 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=8) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:18 22:40:18.520 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=8): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:18 22:40:18.520 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:18 22:40:18.520 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":8,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":1.7,"requestQueueTimeMs":0.176,"localTimeMs":1.219,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.082,"sendTimeMs":0.221,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.520 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 5 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:18 22:40:18.521 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.521 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=9) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:18 22:40:18.524 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.524 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.524 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.525 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 192,3 replyHeader:: 192,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:18 22:40:18.526 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000001ca2b0000 22:40:18 22:40:18.526 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.526 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.526 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 193,3 replyHeader:: 193,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:18 22:40:18.526 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 22:40:18 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 22:40:18 22:40:18.527 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:18 22:40:18.527 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=9): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:18 22:40:18.527 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192018527, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=9), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:18 22:40:18.527 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:18 22:40:18.527 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":9,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":5.927,"requestQueueTimeMs":0.094,"localTimeMs":5.577,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.065,"sendTimeMs":0.19,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.527 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:18 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 22:40:18 22:40:18.528 [data-plane-kafka-request-handler-0] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) 22:40:18 22:40:18.528 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions 22:40:18 22:40:18.530 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.530 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.530 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.530 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.530 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.530 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.530 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 194,4 replyHeader:: 194,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.535 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-3/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.535 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-3/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.536 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-3/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.536 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-3/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.536 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.536 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.538 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.538 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-3 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.539 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 22:40:18 22:40:18.539 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 22:40:18 22:40:18.539 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-3 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
22:40:18 22:40:18.539 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-3] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.544 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.544 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 195,4 replyHeader:: 195,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.549 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-18/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.549 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-18/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.550 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-18/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.550 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-18/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.550 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.551 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 
22:40:18 22:40:18.552 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.553 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-18 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.553 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 22:40:18 22:40:18.553 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 22:40:18 22:40:18.554 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-18 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.554 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-18] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.557 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.557 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.557 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.557 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.557 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.558 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.558 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 196,4 replyHeader:: 196,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.563 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-41/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.563 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index 
/tmp/kafka-unit3067120233997490679/__consumer_offsets-41/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.563 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-41/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.563 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-41/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.564 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.564 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.567 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.567 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-41 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.568 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 22:40:18 22:40:18.568 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 22:40:18 22:40:18.568 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-41 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.569 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-41] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.573 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.573 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.573 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.573 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.573 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.573 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.573 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 197,4 replyHeader:: 197,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.576 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-10/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.576 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-10/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.576 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-10/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.576 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-10/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.576 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.577 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.578 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.579 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-10 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.579 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 22:40:18 22:40:18.579 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 22:40:18 22:40:18.579 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-10 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.579 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-10] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.583 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.586 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.586 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.586 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.586 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.586 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.586 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 198,4 replyHeader:: 198,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.588 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-33/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.589 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-33/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.589 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-33/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.589 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-33/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.589 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.589 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.590 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.591 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-33 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.591 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 22:40:18 22:40:18.591 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 22:40:18 22:40:18.591 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-33 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.591 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-33] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.595 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.595 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.595 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.595 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.595 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.595 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.595 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 199,4 replyHeader:: 199,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.597 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-48/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.597 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-48/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.597 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-48/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.597 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-48/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.598 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.598 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.599 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.599 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-48 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.599 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 22:40:18 22:40:18.599 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 22:40:18 22:40:18.599 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-48 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.599 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-48] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.602 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.603 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 200,4 replyHeader:: 200,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.605 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-19/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.605 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-19/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.606 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-19/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.606 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-19/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.606 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.612 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.613 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.614 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-19 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.614 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 22:40:18 22:40:18.614 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 22:40:18 22:40:18.614 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-19 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.614 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-19] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.618 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.618 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.618 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.618 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.618 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.618 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.618 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 201,4 replyHeader:: 201,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.620 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.620 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=10) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:18 22:40:18.621 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-34/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.621 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-34/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.622 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-34/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 
22:40:18.622 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-34/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.622 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.622 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.624 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=10): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:18 22:40:18.624 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:18 22:40:18.624 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 6 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:18 22:40:18.624 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.624 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":10,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":2.612,"requestQueueTimeMs":0.273,"localTimeMs":1.897,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.084,"sendTimeMs":0.355,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.624 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=11) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:18 22:40:18.624 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.625 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-34 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.625 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 22:40:18 22:40:18.625 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 22:40:18 22:40:18.626 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-34 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.626 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-34] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.627 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.627 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 202,3 replyHeader:: 202,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:18 22:40:18.628 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.628 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.628 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.628 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 203,3 replyHeader:: 203,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:18 22:40:18.629 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 22:40:18 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
22:40:18 22:40:18.629 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:18 22:40:18.630 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=11): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:18 22:40:18.630 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192018630, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=11), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:18 22:40:18.630 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:18 22:40:18.630 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:18 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
22:40:18 22:40:18.630 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":11,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":4.876,"requestQueueTimeMs":0.108,"localTimeMs":4.519,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.066,"sendTimeMs":0.182,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.630 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.631 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.631 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 204,4 replyHeader:: 204,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.633 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-4/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.633 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-4/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.634 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-4/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.634 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index 
/tmp/kafka-unit3067120233997490679/__consumer_offsets-4/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.634 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.634 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.635 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.636 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-4 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.636 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 22:40:18 22:40:18.636 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 22:40:18 22:40:18.636 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-4 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.636 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-4] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.640 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.640 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 205,4 replyHeader:: 205,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.643 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-11/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.643 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-11/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.643 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-11/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.643 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-11/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.643 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.643 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.645 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.645 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-11 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.645 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 22:40:18 22:40:18.645 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 22:40:18 22:40:18.645 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-11 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.645 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-11] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.649 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.650 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 206,4 replyHeader:: 206,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.651 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-26/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.651 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-26/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.652 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-26/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.652 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-26/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.653 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.653 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.654 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.655 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-26 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.655 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 22:40:18 22:40:18.655 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 22:40:18 22:40:18.656 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-26 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.656 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-26] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.662 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.662 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 207,4 replyHeader:: 207,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.666 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-49/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.666 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-49/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.666 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-49/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.666 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-49/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.666 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.666 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.668 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.668 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-49 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.668 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 22:40:18 22:40:18.668 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 22:40:18 22:40:18.669 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-49 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.669 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-49] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.674 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.675 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.675 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 208,4 replyHeader:: 208,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.679 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-39/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.679 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-39/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.679 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-39/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.681 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-39/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.682 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.683 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.686 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.687 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-39 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.687 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 22:40:18 22:40:18.687 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 22:40:18 22:40:18.688 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-39 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.688 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-39] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.693 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.693 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.693 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.693 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.693 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.693 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.693 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 209,4 replyHeader:: 209,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.696 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-9/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.696 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-9/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.696 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-9/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.696 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-9/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.696 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.697 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.698 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.698 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-9 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.698 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 22:40:18 22:40:18.698 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 22:40:18 22:40:18.698 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-9 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.698 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-9] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.703 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.703 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 210,4 replyHeader:: 210,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.705 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-24/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.705 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-24/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.705 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-24/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.706 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-24/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.706 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.706 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.707 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.708 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-24 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.709 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 22:40:18 22:40:18.709 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 22:40:18 22:40:18.710 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-24 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.710 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-24] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.714 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.714 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.714 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.714 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.714 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.714 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.715 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 211,4 replyHeader:: 211,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.718 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-31/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.718 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-31/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.719 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-31/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.719 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-31/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.719 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.721 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 
22:40:18 22:40:18.723 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.724 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=12) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:18 22:40:18.728 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.729 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=12): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:18 22:40:18.729 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:18 22:40:18.729 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 7 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:18 22:40:18.729 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.729 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=13) and timeout 30000 to node 1: 
FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:18 22:40:18.729 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":12,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":4.001,"requestQueueTimeMs":2.106,"localTimeMs":1.524,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.076,"sendTimeMs":0.294,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.729 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-31 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.730 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 22:40:18 22:40:18.733 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.735 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.735 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.735 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 212,3 replyHeader:: 212,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:18 22:40:18.736 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.736 
[main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 213,3 replyHeader:: 213,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:18 22:40:18.737 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 22:40:18 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 22:40:18 22:40:18.737 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:18 22:40:18.738 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=13): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:18 22:40:18.738 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192018738, latencyMs=9, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=13), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:18 22:40:18.738 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:18 22:40:18.738 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:18 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
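Note: the exchange above is the expected startup race in this embedded-broker test: FindCoordinator for group mso-group returns errorCode 15 (COORDINATOR_NOT_AVAILABLE) because the broker is still creating the __consumer_offsets partitions, so the client refreshes metadata and retries. A minimal, illustrative Java sketch of a consumer loop that tolerates this; the ephemeral port 43439, the SASL mechanism and the credentials are assumptions, only the topic, group id and security protocol appear in the log:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class CoordinatorRetryPollSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439"); // ephemeral port from this run
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN"); // assumption: the mechanism is not shown in the log
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                long deadline = System.currentTimeMillis() + 30_000;
                while (System.currentTimeMillis() < deadline) {
                    // While __consumer_offsets is still being created, coordinator lookup fails with
                    // COORDINATOR_NOT_AVAILABLE; the client retries internally, so the test just keeps polling.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    if (!records.isEmpty()) {
                        records.forEach(r -> System.out.println(r.value()));
                        break;
                    }
                }
            }
        }
    }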
22:40:18 22:40:18.738 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":13,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":8.3,"requestQueueTimeMs":1.883,"localTimeMs":6.155,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.1,"sendTimeMs":0.16,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.740 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 22:40:18 22:40:18.741 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-31 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.741 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-31] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.750 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.750 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 214,4 replyHeader:: 214,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.754 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file 
/tmp/kafka-unit3067120233997490679/__consumer_offsets-46/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.754 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-46/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.754 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-46/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.754 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-46/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.755 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.755 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.756 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.756 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-46 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.756 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 22:40:18 22:40:18.756 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 22:40:18 22:40:18.756 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-46 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.757 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-46] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.761 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.761 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.761 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.761 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.761 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.762 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.762 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 215,4 replyHeader:: 215,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.765 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-1/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.768 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-1/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.769 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-1/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.769 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-1/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.769 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.770 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.770 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.771 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-1 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.772 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 22:40:18 22:40:18.772 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 22:40:18 22:40:18.772 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-1 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.772 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-1] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.777 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.777 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 216,4 replyHeader:: 216,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.782 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-16/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.782 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-16/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.782 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-16/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.782 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-16/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.783 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.783 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.784 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.784 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-16 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.784 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 22:40:18 22:40:18.784 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 22:40:18 22:40:18.784 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-16 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.784 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-16] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.789 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.790 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 217,4 replyHeader:: 217,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.795 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-2/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.796 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-2/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.796 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-2/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.796 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-2/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.796 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.797 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.798 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.798 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-2 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.799 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 22:40:18 22:40:18.800 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 22:40:18 22:40:18.800 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-2 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.800 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-2] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.803 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.803 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.804 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 218,4 replyHeader:: 218,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.808 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-25/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.808 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-25/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.808 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-25/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.808 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-25/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.809 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.809 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.810 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.811 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-25 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.811 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 22:40:18 22:40:18.811 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 22:40:18 22:40:18.812 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-25 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.812 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-25] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
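The index-loading lines above are consistent with Kafka's fixed index entry sizes: a 10485760-byte offset index holds 10485760 / 8 = 1310720 entries, a time index holds 10485760 / 12 = 873813 entries, and the time-index file sits at the largest multiple of 12 (10485756 bytes), which is why it is reported as "not resized". A tiny arithmetic sketch reproducing those figures:

public class IndexSizing {
    public static void main(String[] args) {
        int maxIndexSize = 10 * 1024 * 1024;          // segment index size: 10485760 bytes
        System.out.println(maxIndexSize / 8);         // offset-index entries (8 bytes each)  -> 1310720
        System.out.println(maxIndexSize / 12);        // time-index entries (12 bytes each)   -> 873813
        System.out.println((maxIndexSize / 12) * 12); // largest multiple of 12                -> 10485756
    }
}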
22:40:18 22:40:18.816 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.816 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.816 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 219,4 replyHeader:: 219,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.818 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-40/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.818 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-40/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.819 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-40/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.819 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-40/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.819 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.819 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.827 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.827 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-40 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.828 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 22:40:18 22:40:18.828 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 22:40:18 22:40:18.828 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-40 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.828 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-40] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.829 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.829 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=14) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:18 22:40:18.832 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=14): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:18 22:40:18.832 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:18 22:40:18.832 [main] DEBUG org.apache.kafka.clients.Metadata - 
[Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 8 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:18 22:40:18.832 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":14,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":1.822,"requestQueueTimeMs":0.252,"localTimeMs":1.212,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.074,"sendTimeMs":0.281,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.832 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.832 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=15) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:18 22:40:18.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.838 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.839 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.839 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 220,4 replyHeader:: 220,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.839 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 221,3 replyHeader:: 221,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:18 22:40:18.840 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.840 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.840 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.841 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 222,3 replyHeader:: 222,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:18 22:40:18.841 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 22:40:18 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
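ZkAdminManager logs TopicExistsException at DEBUG here because auto-creation of __consumer_offsets raced with a creation that had already completed; the broker just clears its in-flight state, so the stack trace is benign. Client code that creates topics explicitly usually treats the same exception as "already there". A minimal sketch under that assumption (bootstrap address, topic name and partition counts are placeholders, not values from this build):

import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public class EnsureTopicExists {
    static void ensureTopic(Admin admin, String name) throws Exception {
        try {
            admin.createTopics(List.of(new NewTopic(name, 1, (short) 1))).all().get();
        } catch (ExecutionException e) {
            // Same condition the broker logs above; an existing topic is not an error here.
            if (!(e.getCause() instanceof TopicExistsException)) {
                throw e;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            ensureTopic(admin, "example-topic");
        }
    }
}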
22:40:18 22:40:18.841 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:18 22:40:18.842 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=15): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:18 22:40:18.842 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192018842, latencyMs=10, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=15), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:18 22:40:18.842 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:18 22:40:18.842 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:18 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
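The FindCoordinator responses above carry errorCode 15 (COORDINATOR_NOT_AVAILABLE): the group coordinator for mso-group cannot be resolved until the 50 __consumer_offsets partitions finish loading, so the consumer refreshes metadata and retries. For illustration, a minimal consumer sketch whose poll() call drives exactly this discovery loop; it assumes a PLAINTEXT listener, whereas the embedded broker in this run uses SASL_PLAINTEXT and would need additional security settings.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CoordinatorLookupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // poll() triggers metadata refresh and FindCoordinator retries until a
            // group coordinator becomes available, as seen in the client DEBUG lines above.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("fetched " + records.count() + " records");
        }
    }
}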
22:40:18 22:40:18.842 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":15,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":9.003,"requestQueueTimeMs":0.094,"localTimeMs":8.677,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.065,"sendTimeMs":0.166,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.844 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-47/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.844 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-47/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.845 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-47/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.845 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-47/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.845 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.846 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.847 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.847 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-47 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.848 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 22:40:18 22:40:18.848 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 22:40:18 22:40:18.848 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-47 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.848 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-47] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.852 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.853 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 223,4 replyHeader:: 223,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.855 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-17/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.856 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-17/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.856 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-17/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.856 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-17/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.856 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.857 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.857 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.859 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-17 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.859 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 22:40:18 22:40:18.859 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 22:40:18 22:40:18.859 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-17 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.859 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-17] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.862 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.862 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.862 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.862 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.862 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.862 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.863 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 224,4 replyHeader:: 224,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.864 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-32/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.864 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-32/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.865 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-32/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.865 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-32/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.865 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.865 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.866 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.866 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-32 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.866 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 22:40:18 22:40:18.866 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 22:40:18 22:40:18.866 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-32 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.866 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-32] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.871 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.871 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 225,4 replyHeader:: 225,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.874 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-37/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.876 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-37/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.876 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-37/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.876 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-37/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.876 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.876 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.877 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.877 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-37 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.877 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 22:40:18 22:40:18.878 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 22:40:18 22:40:18.878 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-37 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.878 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-37] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.883 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.884 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 226,4 replyHeader:: 226,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.892 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-7/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.892 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-7/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.892 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-7/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.892 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-7/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.893 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.894 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.895 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.896 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-7 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.896 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 22:40:18 22:40:18.896 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 22:40:18 22:40:18.896 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-7 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.896 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-7] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.901 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.902 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 227,4 replyHeader:: 227,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.905 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-22/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.905 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-22/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.906 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-22/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.906 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-22/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.906 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.907 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.907 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.908 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-22 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.908 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 22:40:18 22:40:18.909 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 22:40:18 22:40:18.909 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-22 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.909 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-22] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.913 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.913 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.913 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.913 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.913 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.913 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.913 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 228,4 replyHeader:: 228,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.924 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-29/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.924 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-29/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.924 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-29/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.924 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-29/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.931 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.931 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.931 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=16) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:18 22:40:18.933 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.934 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=16): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:18 22:40:18.934 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:18 22:40:18.934 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 9 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:18 22:40:18.935 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:18 22:40:18.935 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":16,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":1.767,"requestQueueTimeMs":0.323,"localTimeMs":1.118,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.092,"sendTimeMs":0.233,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.935 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=17) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:18 22:40:18.936 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.936 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-29 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.936 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 22:40:18 22:40:18.936 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 22:40:18 22:40:18.937 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.937 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-29 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
22:40:18 22:40:18.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:18 22:40:18.937 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-29] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.937 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 229,3 replyHeader:: 229,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:18 22:40:18.940 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:18 22:40:18.941 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 230,3 replyHeader:: 230,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:18 22:40:18.941 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 22:40:18 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
22:40:18 22:40:18.941 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:18 22:40:18.942 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=17): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:18 22:40:18.942 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192018942, latencyMs=7, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=17), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:18 22:40:18.942 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:18 22:40:18.942 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:18 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
22:40:18 22:40:18.942 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":17,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":6.954,"requestQueueTimeMs":0.1,"localTimeMs":6.574,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.081,"sendTimeMs":0.198,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:18 22:40:18.944 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.944 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 231,4 replyHeader:: 231,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.949 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-44/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.949 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-44/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.950 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-44/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.950 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index 
/tmp/kafka-unit3067120233997490679/__consumer_offsets-44/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.951 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.951 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.953 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.953 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-44 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.954 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 22:40:18 22:40:18.954 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 22:40:18 22:40:18.954 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-44 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.954 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-44] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.958 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.958 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.958 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.958 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.958 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.958 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.958 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 232,4 replyHeader:: 232,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.961 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-14/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.961 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-14/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.961 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-14/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.961 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-14/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.962 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.962 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.962 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.963 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-14 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.963 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 22:40:18 22:40:18.963 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 22:40:18 22:40:18.963 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-14 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.963 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-14] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.967 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.968 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 233,4 replyHeader:: 233,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.969 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-23/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.969 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-23/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.970 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-23/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.970 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-23/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.970 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.970 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.971 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:18 22:40:18.971 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-23 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.971 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 22:40:18 22:40:18.971 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 22:40:18 22:40:18.971 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-23 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.971 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-23] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:18 22:40:18.975 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.975 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.975 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.975 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.975 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.975 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.975 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 234,4 replyHeader:: 234,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.979 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-38/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.979 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-38/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.980 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-38/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.980 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-38/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:18 22:40:18.981 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:18 22:40:18.982 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:18 22:40:18.984 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:18 22:40:18.985 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-38 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:18 22:40:18.986 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 22:40:18 22:40:18.987 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 22:40:18 22:40:18.987 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-38 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:18 22:40:18.987 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-38] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:18 22:40:18.992 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:18 22:40:18.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:18 22:40:18.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:18 22:40:18.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:18 ] 22:40:18 22:40:18.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:18 , 'ip,'127.0.0.1 22:40:18 ] 22:40:18 22:40:18.992 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 235,4 replyHeader:: 235,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:18 22:40:18.996 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-8/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:18 22:40:18.996 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-8/00000000000000000000.index was not resized because it already has size 10485760 22:40:18 22:40:18.997 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-8/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:18 22:40:18.997 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-8/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:18.998 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:18.999 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.001 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.002 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-8 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.002 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 22:40:19 22:40:19.002 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 22:40:19 22:40:19.002 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-8 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.002 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-8] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.006 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.007 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 236,4 replyHeader:: 236,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.011 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-45/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.012 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-45/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.012 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-45/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.012 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-45/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.014 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.015 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.016 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:19 22:40:19.016 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-45 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.016 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 22:40:19 22:40:19.016 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 22:40:19 22:40:19.016 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-45 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.017 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-45] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:19 22:40:19.035 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:19 22:40:19.035 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=18) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:19 22:40:19.038 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=18): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:19 22:40:19.038 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:19 22:40:19.039 [main] DEBUG org.apache.kafka.clients.Metadata - 
[Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 10 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:19 22:40:19.039 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:19 22:40:19.039 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=19) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:19 22:40:19.039 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":18,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":2.157,"requestQueueTimeMs":0.381,"localTimeMs":1.3,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.103,"sendTimeMs":0.371,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:19 22:40:19.041 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:19 22:40:19.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:19 22:40:19.042 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 237,3 replyHeader:: 237,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:19 22:40:19.043 [ProcessThread(sid:0 cport:36225):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:19 22:40:19.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:19 22:40:19.043 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 238,3 replyHeader:: 238,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:19 22:40:19.044 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 22:40:19 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 22:40:19 22:40:19.044 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:19 22:40:19.045 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=19): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:19 22:40:19.045 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192019044, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=19), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:19 22:40:19.045 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:19 22:40:19.045 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":19,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":4.854,"requestQueueTimeMs":0.17,"localTimeMs":4.437,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.055,"sendTimeMs":0.19,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:19 22:40:19.045 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, refreshing metadata 22:40:19 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 22:40:19 22:40:19.056 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.056 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.056 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.056 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.056 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.056 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.057 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 239,4 replyHeader:: 239,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.060 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-15/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.060 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-15/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.060 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-15/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize 
= 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.060 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-15/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.061 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.061 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.061 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.062 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-15 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.062 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 22:40:19 22:40:19.062 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 22:40:19 22:40:19.062 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-15 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.062 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-15] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.066 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.066 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 240,4 replyHeader:: 240,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.070 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-30/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.070 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-30/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.070 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-30/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.070 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-30/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.070 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.071 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.075 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:19 22:40:19.075 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-30 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.076 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 22:40:19 22:40:19.076 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 22:40:19 22:40:19.076 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-30 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.076 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-30] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:19 22:40:19.080 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.080 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.080 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.080 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.080 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.080 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.080 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 241,4 replyHeader:: 241,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.083 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.083 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-0/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.084 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.084 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-0/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.084 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.084 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.085 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.085 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-0 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.085 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 22:40:19 22:40:19.085 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 22:40:19 22:40:19.085 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-0 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.086 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.090 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.091 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 242,4 replyHeader:: 242,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.094 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-35/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.095 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-35/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.095 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-35/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.095 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-35/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.095 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.095 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.097 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:19 22:40:19.097 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-35 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.098 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 22:40:19 22:40:19.098 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 22:40:19 22:40:19.098 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-35 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.098 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-35] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:19 22:40:19.102 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.104 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 243,4 replyHeader:: 243,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.106 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-5/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.106 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-5/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.106 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-5/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.106 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-5/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.106 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.107 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.107 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.107 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-5 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.108 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 22:40:19 22:40:19.108 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 22:40:19 22:40:19.108 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-5 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.108 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-5] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.112 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.112 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 244,4 replyHeader:: 244,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.114 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-20/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.114 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-20/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.114 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-20/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.114 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-20/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.115 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.115 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.115 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:19 22:40:19.116 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-20 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.116 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 22:40:19 22:40:19.116 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 22:40:19 22:40:19.116 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-20 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.116 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-20] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:19 22:40:19.122 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.122 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.122 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.122 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.122 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.122 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.122 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 245,4 replyHeader:: 245,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.124 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-27/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.125 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-27/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.125 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-27/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.125 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-27/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.125 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.125 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.127 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.128 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-27 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.138 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:19 22:40:19.139 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 22:40:19 22:40:19.139 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 22:40:19 22:40:19.139 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=20) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:19 22:40:19.139 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-27 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.139 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-27] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.142 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=20): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:19 22:40:19.142 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:19 22:40:19.142 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 11 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:19 22:40:19.142 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:19 22:40:19.143 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=21) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:19 22:40:19.143 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":20,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":1.936,"requestQueueTimeMs":0.226,"localTimeMs":1.231,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.066,"sendTimeMs":0.411,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:19 22:40:19.144 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.145 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:19 22:40:19.145 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 22:40:19 22:40:19.145 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 246,4 replyHeader:: 246,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.145 [main-SendThread(127.0.0.1:36225)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 247,3 replyHeader:: 247,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 22:40:19 22:40:19.147 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-42/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.147 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-42/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.147 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-42/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.147 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-42/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.147 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.147 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.147 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.148 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-42 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.148 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 22:40:19 22:40:19.148 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 22:40:19 22:40:19.148 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-42 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.148 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-42] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.148 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.148 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:exists cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:19 22:40:19.148 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:exists cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 22:40:19 22:40:19.149 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 248,3 replyHeader:: 248,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1731192018221,1731192018221,0,1,0,0,548,1,39} 22:40:19 22:40:19.149 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 22:40:19 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 22:40:19 22:40:19.149 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 22:40:19 22:40:19.150 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=21): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 22:40:19 22:40:19.150 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192019150, latencyMs=8, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=21), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 22:40:19 22:40:19.150 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator lookup failed: 22:40:19 22:40:19.150 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Coordinator discovery failed, 
refreshing metadata 22:40:19 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 22:40:19 22:40:19.150 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":21,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":6.572,"requestQueueTimeMs":0.105,"localTimeMs":6.252,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.053,"sendTimeMs":0.161,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:19 22:40:19.152 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.154 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.155 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 249,4 replyHeader:: 249,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.156 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-12/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.156 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-12/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.156 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-12/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 
22:40:19 22:40:19.156 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-12/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.156 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.157 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.157 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.157 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-12 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.157 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 22:40:19 22:40:19.157 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 22:40:19 22:40:19.157 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-12 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.157 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-12] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.162 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.162 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.162 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.162 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.162 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.162 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.162 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 250,4 replyHeader:: 250,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.164 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-21/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.164 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-21/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.164 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-21/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.164 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-21/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.164 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.165 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.165 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:19 22:40:19.165 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-21 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.165 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 22:40:19 22:40:19.165 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 22:40:19 22:40:19.165 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-21 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.165 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-21] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:19 22:40:19.171 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.172 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.172 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.172 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.172 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 251,4 replyHeader:: 251,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.174 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-36/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.174 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-36/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.174 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-36/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.174 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-36/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.174 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.174 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.174 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.175 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-36 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.175 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 22:40:19 22:40:19.175 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 22:40:19 22:40:19.175 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-36 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.175 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-36] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.178 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.179 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 252,4 replyHeader:: 252,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.180 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-6/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.180 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-6/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.180 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-6/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.180 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-6/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.180 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.180 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.182 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:19 22:40:19.183 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-6 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.183 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 22:40:19 22:40:19.183 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 22:40:19 22:40:19.183 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-6 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.183 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-6] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:19 22:40:19.186 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xfd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xfd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.186 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.186 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 253,4 replyHeader:: 253,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.188 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-43/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.188 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-43/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.189 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-43/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.189 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-43/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.189 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.189 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.190 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.190 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-43 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.190 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 22:40:19 22:40:19.190 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 22:40:19 22:40:19.190 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-43 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.190 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-43] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
22:40:19 22:40:19.194 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xfe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xfe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.194 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.195 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 254,4 replyHeader:: 254,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.196 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-13/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.196 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-13/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.196 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-13/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.196 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-13/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.196 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.196 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.197 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
22:40:19 22:40:19.197 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-13 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.197 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 22:40:19 22:40:19.197 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 22:40:19 22:40:19.197 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-13 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.197 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-13] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:19 22:40:19.201 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:19 22:40:19.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0xff zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0xff zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 22:40:19 22:40:19.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:19 22:40:19.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:19 ] 22:40:19 22:40:19.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:19 , 'ip,'127.0.0.1 22:40:19 ] 22:40:19 22:40:19.201 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 255,4 replyHeader:: 255,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1731192018214,1731192018214,0,0,0,0,109,0,37} 22:40:19 22:40:19.203 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-28/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 22:40:19 22:40:19.203 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-28/00000000000000000000.index was not resized because it already has size 10485760 22:40:19 22:40:19.203 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit3067120233997490679/__consumer_offsets-28/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 22:40:19 22:40:19.203 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit3067120233997490679/__consumer_offsets-28/00000000000000000000.timeindex was not resized because it already has size 10485756 22:40:19 22:40:19.203 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-unit3067120233997490679] Loading producer state till offset 0 with message format version 2 22:40:19 22:40:19.203 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.203 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 22:40:19 22:40:19.204 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-28 in /tmp/kafka-unit3067120233997490679/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 22:40:19 22:40:19.204 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 22:40:19 22:40:19.204 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 22:40:19 22:40:19.204 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-28 with topic id Some(EnexHr3fSA6ledZlOW2QZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 22:40:19 22:40:19.204 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-28] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 22:40:19 22:40:19.210 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 22:40:19 22:40:19.211 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 22:40:19 22:40:19.214 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.215 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 22:40:19 22:40:19.216 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 22:40:19 22:40:19.217 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.217 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 22:40:19 22:40:19.218 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-3 for epoch 0 22:40:19 22:40:19.222 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 11 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.223 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-18 for epoch 0 22:40:19 22:40:19.223 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.223 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-41 for epoch 0 22:40:19 22:40:19.223 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.223 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-10 for epoch 0 22:40:19 22:40:19.223 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
22:40:19 22:40:19.223 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-33 for epoch 0 22:40:19 22:40:19.223 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.223 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-48 for epoch 0 22:40:19 22:40:19.223 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.223 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-19 for epoch 0 22:40:19 22:40:19.224 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.224 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-34 for epoch 0 22:40:19 22:40:19.224 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 22:40:19 22:40:19.224 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 22:40:19 22:40:19.225 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 22:40:19 22:40:19.226 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 22:40:19 22:40:19.227 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 22:40:19 22:40:19.228 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. 
22:40:19 22:40:19.229 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 747ms correlationId 3 from controller 1 for 50 partitions 22:40:19 22:40:19.230 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-4 for epoch 0 22:40:19 22:40:19.230 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.230 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-11 for epoch 0 22:40:19 22:40:19.230 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.230 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-26 for epoch 0 22:40:19 22:40:19.230 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.230 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-49 for epoch 0 22:40:19 22:40:19.230 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.231 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-39 for epoch 0 22:40:19 22:40:19.231 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.231 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-9 for epoch 0 22:40:19 22:40:19.231 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
22:40:19 22:40:19.231 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-24 for epoch 0 22:40:19 22:40:19.231 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.231 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-31 for epoch 0 22:40:19 22:40:19.231 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.231 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-46 for epoch 0 22:40:19 22:40:19.232 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 22:40:19 22:40:19.232 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-1 for epoch 0 22:40:19 22:40:19.232 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.232 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-16 for epoch 0 22:40:19 22:40:19.232 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.232 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-2 for epoch 0 22:40:19 22:40:19.232 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.232 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-25 for epoch 0 22:40:19 22:40:19.232 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
22:40:19 22:40:19.232 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-40 for epoch 0 22:40:19 22:40:19.232 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.232 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-47 for epoch 0 22:40:19 22:40:19.233 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.233 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-17 for epoch 0 22:40:19 22:40:19.233 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.233 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-32 for epoch 0 22:40:19 22:40:19.233 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.233 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-37 for epoch 0 22:40:19 22:40:19.233 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.233 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-7 for epoch 0 22:40:19 22:40:19.233 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.233 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-22 for epoch 0 22:40:19 22:40:19.233 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
22:40:19 22:40:19.233 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-29 for epoch 0 22:40:19 22:40:19.234 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.234 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-44 for epoch 0 22:40:19 22:40:19.234 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.234 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-14 for epoch 0 22:40:19 22:40:19.234 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.234 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-23 for epoch 0 22:40:19 22:40:19.234 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.234 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-38 for epoch 0 22:40:19 22:40:19.234 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.234 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-8 for epoch 0 22:40:19 22:40:19.234 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.234 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-45 for epoch 0 22:40:19 22:40:19.234 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
22:40:19 22:40:19.235 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-15 for epoch 0 22:40:19 22:40:19.235 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 22:40:19 22:40:19.235 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-30 for epoch 0 22:40:19 22:40:19.235 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.235 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-0 for epoch 0 22:40:19 22:40:19.235 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.235 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-35 for epoch 0 22:40:19 22:40:19.235 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.235 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-5 for epoch 0 22:40:19 22:40:19.235 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.235 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-20 for epoch 0 22:40:19 22:40:19.235 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.236 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-27 for epoch 0 22:40:19 22:40:19.236 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
22:40:19 22:40:19.236 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-42 for epoch 0 22:40:19 22:40:19.236 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.236 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-12 for epoch 0 22:40:19 22:40:19.236 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.236 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-21 for epoch 0 22:40:19 22:40:19.236 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.236 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-36 for epoch 0 22:40:19 22:40:19.236 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.236 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-6 for epoch 0 22:40:19 22:40:19.236 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 22:40:19 22:40:19.237 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-43 for epoch 0 22:40:19 22:40:19.237 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 22:40:19 22:40:19.237 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-13 for epoch 0 22:40:19 22:40:19.237 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
22:40:19 22:40:19.237 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-28 for epoch 0 22:40:19 22:40:19.237 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 22:40:19 22:40:19.238 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=EnexHr3fSA6ledZlOW2QZg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) 22:40:19 22:40:19.239 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":3,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"__consumer_offsets","topicId":"EnexHr3fSA6ledZlOW2QZg","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":1,"controllerEpoch":1,"leader":1
,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[]
,"isNew":true,"leaderRecoveryState":0},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoc
h":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":43439}]},"response":{"errorCode":0,"topics":[{"topicId":"EnexHr3fSA6ledZlOW2QZg","partitionErrors":[{"partitionIndex":13,"errorCode":0},{"partitionIndex":46,"errorCode":0},{"partitionIndex":9,"errorCode":0},{"partitionIndex":42,"errorCode":0},{"partitionIndex":21,"errorCode":0},{"partitionIndex":17,"errorCode":0},{"partitionIndex":30,"errorCode":0},{"partitionIndex":26,"errorCode":0},{"partitionIndex":5,"errorCode":0},{"partitionIndex":38,"errorCode":0},{"partitionIndex":1,"errorCode":0},{"partitionIndex":34,"errorCode":0},{"partitionIndex":16,"errorCode":0},{"partitionIndex":45,"errorCode":0},{"partitionIndex":12,"errorCode":0},{"partitionIndex":41,"errorCode":0},{"partitionIndex":24,"errorCode":0},{"partitionIndex":20,"errorCode":0},{"partitionIndex":49,"errorCode":0},{"partitionIndex":0,"errorCode":0},{"partitionIndex":29,"errorCode":0},{"partitionIndex":25,"errorCode":0},{"partitionIndex":8,"errorCode":0},{"partitionIndex":37,"errorCode":0},{"partitionIndex":4,"errorCode":0},{"partitionIndex":33,"errorCode":0},{"partitionIndex":15,"errorCode":0},{"partitionIndex":48,"errorCode":0},{"partitionIndex":11,"errorCode":0},{"partitionIndex":44,"errorCode":0},{"partitionIndex":23,"errorCode":0},{"partitionIndex":19,"errorCode":0},{"partitionIndex":32,"errorCode":0},{"partitionIndex":28,"errorCode":0},{"partitionIndex":7,"errorCode":0},{"partitionIndex":40,"errorCode":0},{"partitionIndex":3,"errorCode":0},{"partitionIndex":36,"errorCode":0},{"partitionIndex":47,"errorCode":0},{"partitionIndex":14,"errorCode":0},{"partitionIndex":43,"errorCode":0},{"partitionIndex":10,"errorCode":0},{"partitionIndex":22,"errorCode":0},{"partitionIndex":18,"errorCode":0},{"partitionIndex":31,"errorCode":0},{"partitionIndex":27,"errorCode":0},{"partitionIndex":39,"errorCode":0},{"partitionIndex":6,"errorCode":0},{"partitionIndex":35,"errorCode":0},{"partitionIndex":2,"errorCode":0}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47524-0","totalTimeMs":756.88,"requestQueueTimeMs":1.293,"localTimeMs":747.783,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":7.464,"sendTimeMs":0.339,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:19 22:40:19.240 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='__consumer_offsets', topicId=EnexHr3fSA6ledZlOW2QZg, partitionStates=[UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=43439, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 22:40:19 22:40:19.243 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:19 22:40:19.243 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 22:40:19 22:40:19.243 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=22) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:19 22:40:19.243 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header 
RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4): UpdateMetadataResponseData(errorCode=0) 22:40:19 22:40:19.245 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":4,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"__consumer_offsets","topicId":"EnexHr3fSA6ledZlOW2QZg","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"
partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":43439,"host":"localhost","listen
er":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:43439-127.0.0.1:47524-0","totalTimeMs":2.857,"requestQueueTimeMs":0.619,"localTimeMs":1.812,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.255,"sendTimeMs":0.169,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:19 22:40:19.252 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=22): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:19 22:40:19.252 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":22,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":8.727,"requestQueueTimeMs":6.72,"localTimeMs":1.771,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.069,"sendTimeMs":0.165,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:19 22:40:19.252 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 22:40:19 22:40:19.253 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Updated cluster metadata updateVersion 12 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:19 22:40:19.253 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FindCoordinator request to broker localhost:43439 (id: 1 rack: null) 22:40:19 22:40:19.253 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=23) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 22:40:19 22:40:19.257 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=23): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=43439, errorCode=0, errorMessage='')]) 22:40:19 22:40:19.257 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1731192019256, latencyMs=3, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=23), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=43439, errorCode=0, errorMessage='')])) 22:40:19 22:40:19.257 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Discovered group coordinator localhost:43439 (id: 2147483646 rack: null) 22:40:19 22:40:19.257 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":23,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":1,"host":"localhost","port":43439,"errorCode":0,"errorMessage":""}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":3.436,"requestQueueTimeMs":0.089,"localTimeMs":3.006,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.209,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:19 22:40:19.257 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:19 22:40:19.257 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 2147483646 rack: null) using address localhost/127.0.0.1 22:40:19 22:40:19.257 [main] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:19 22:40:19.257 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:19 22:40:19.257 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:47584 on /127.0.0.1:43439 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:19 22:40:19.258 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:47584 22:40:19 22:40:19.267 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Executing onJoinPrepare with generation -1 and memberId 22:40:19 22:40:19.267 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Marking assigned partitions pending for revocation: [] 22:40:19 22:40:19.269 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending asynchronous auto-commit of offsets {} 22:40:19 22:40:19.271 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483646 22:40:19 22:40:19.271 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:19 22:40:19.271 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Completed connection to node 2147483646. Fetching API versions. 
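The group coordinator discovered above is reported as node id 2147483646 rather than broker id 1. The consumer derives that synthetic id as Integer.MAX_VALUE minus the broker id so coordinator traffic gets its own connection; a purely illustrative check of the arithmetic (class and variable names are mine, not from the log):

    public class CoordinatorIdNote {
        public static void main(String[] args) {
            int brokerId = 1;                                     // "brokerId=1" throughout this log
            int coordinatorNodeId = Integer.MAX_VALUE - brokerId; // 2147483647 - 1
            System.out.println(coordinatorNodeId);                // prints 2147483646, the id seen above
        }
    }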
22:40:19 22:40:19.271 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:19 22:40:19.271 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:19 22:40:19.271 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] (Re-)joining group 22:40:19 22:40:19.272 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:19 22:40:19.272 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Joining group with current subscription: [my-test-topic] 22:40:19 22:40:19.275 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Heartbeat thread started 22:40:19 22:40:19.278 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='')) to coordinator localhost:43439 (id: 2147483646 rack: null) 22:40:19 22:40:19.287 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:19 22:40:19.289 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:19 22:40:19.289 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:19 22:40:19.290 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:19 22:40:19.293 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:19 22:40:19.301 [main] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to INITIAL 22:40:19 22:40:19.302 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to INTERMEDIATE 22:40:19 22:40:19.302 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:19 22:40:19.302 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:19 22:40:19.302 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:19 22:40:19.302 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Completed asynchronous auto-commit of offsets {} 22:40:19 22:40:19.303 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to COMPLETE 22:40:19 22:40:19.303 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 22:40:19 22:40:19.303 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 22:40:19 22:40:19.303 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating API versions fetch from node 2147483646. 
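The exchange just traced (API_VERSIONS, SASL_HANDSHAKE selecting PLAIN, AUTHENTICATE, then COMPLETE on both client and server) is the client-side SASL/PLAIN login against the SASL_PLAINTEXT listener on port 43439. A minimal sketch of the client properties that would drive such an exchange; the port, protocol, mechanism and the admin principal are taken from the log, while the password and class name are placeholders of mine:

    import java.util.Properties;

    public class SaslClientPropsSketch {
        // Security settings matching the handshake above; credentials are placeholders.
        static Properties saslProps() {
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:43439");
            p.put("security.protocol", "SASL_PLAINTEXT");
            p.put("sasl.mechanism", "PLAIN");
            p.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"<placeholder>\";");
            return p;
        }
    }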
22:40:19 22:40:19.303 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=25) and timeout 30000 to node 2147483646: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:19 22:40:19.311 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":25,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleT
imeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":6.335,"requestQueueTimeMs":0.33,"localTimeMs":5.73,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.184,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:19 22:40:19.315 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received API_VERSIONS response from node 2147483646 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=25): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, 
minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:19 22:40:19.316 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 2147483646 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
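Up to this point the log shows the mso-123456-consumer client and the embedded broker agreeing on API versions over the SASL_PLAINTEXT listener; only after that does group coordination start. For orientation, a minimal sketch of a consumer configured the way this log implies is shown below. The bootstrap address localhost:43439, group id mso-group, topic my-test-topic and the SASL/PLAIN admin principal are taken from the log; the class name, password, client id prefix and deserializers are illustrative assumptions, not the test's actual source.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address and group id as they appear in the log above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer"); // illustrative prefix
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // The listener in the log is SASL_PLAINTEXT with principal User:admin;
            // the password below is a placeholder.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"admin\" password=\"admin-secret\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // The first poll() after subscribe() drives the JOIN_GROUP / SYNC_GROUP
                // sequence that the following log lines trace.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
            }
        }
    }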
22:40:19 22:40:19.317 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=24) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='') 22:40:19 22:40:19.329 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Dynamic member with unknown member id joins group mso-group in Empty state. Created a new member id mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f and request the member to rejoin with this id. 22:40:19 22:40:19.335 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=24): JoinGroupResponseData(throttleTimeMs=0, errorCode=79, generationId=-1, protocolType=null, protocolName=null, leader='', skipAssignment=false, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', members=[]) 22:40:19 22:40:19.335 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] JoinGroup failed due to non-fatal error: MEMBER_ID_REQUIRED. Will set the member id as mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f and then rejoin. Sent generation was Generation{generationId=-1, memberId='', protocol='null'} 22:40:19 22:40:19.335 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Request joining group due to: need to re-join with the given member-id: mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f 22:40:19 22:40:19.335 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 22:40:19 22:40:19.335 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] (Re-)joining group 22:40:19 22:40:19.335 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Joining group with current subscription: [my-test-topic] 22:40:19 22:40:19.336 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException')) to coordinator localhost:43439 (id: 2147483646 rack: null) 22:40:19 22:40:19.336 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":24,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":""},"response":{"throttleTimeMs":0,"errorCode":79,"generationId":-1,"protocolType":null,"protocolName":null,"leader":"","skipAssignment":false,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","members":[]},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":17.084,"requestQueueTimeMs":2.157,"localTimeMs":14.58,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.103,"sendTimeMs":0.242,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:19 22:40:19.336 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=26) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, 
-1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException') 22:40:19 22:40:19.340 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Pending dynamic member with id mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f joins group mso-group in Empty state. Adding to the group now. 22:40:19 22:40:19.345 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:19 22:40:19.349 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Preparing to rebalance group mso-group in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) 22:40:22 22:40:22.356 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Processing automatic preferred replica leader election 22:40:22 22:40:22.366 [executor-Rebalance] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Stabilized group mso-group generation 1 (__consumer_offsets-37) with 1 members 22:40:22 22:40:22.371 [executor-Rebalance] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:22 22:40:22.374 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Topics not in preferred replica for broker 1 HashMap() 22:40:22 22:40:22.374 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":26,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":"rebalance failed due to 
MemberIdRequiredException"},"response":{"throttleTimeMs":0,"errorCode":0,"generationId":1,"protocolType":"consumer","protocolName":"range","leader":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","skipAssignment":false,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","members":[{"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null,"metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="}]},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":3037.226,"requestQueueTimeMs":0.172,"localTimeMs":13.806,"remoteTimeMs":3020.076,"throttleTimeMs":0,"responseQueueTimeMs":2.789,"sendTimeMs":0.381,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:22 22:40:22.375 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=26): JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', skipAssignment=false, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 22:40:22 22:40:22.375 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received successful JoinGroup response: JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', skipAssignment=false, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 22:40:22 22:40:22.375 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Enabling heartbeat thread 22:40:22 22:40:22.375 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Successfully joined group with generation Generation{generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', protocol='range'} 22:40:22 22:40:22.376 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Performing assignment using strategy range with subscriptions {mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f=Subscription(topics=[my-test-topic], ownedPartitions=[], groupInstanceId=null)} 22:40:22 22:40:22.378 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 300000 ms and period -1000 ms. 22:40:22 22:40:22.382 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Finished assignment for group at generation 1: {mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f=Assignment(partitions=[my-test-topic-0])} 22:40:22 22:40:22.385 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending leader SyncGroup to coordinator localhost:43439 (id: 2147483646 rack: null): SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 22:40:22 22:40:22.387 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending SYNC_GROUP request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=27) and timeout 30000 to node 2147483646: SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 22:40:22 22:40:22.395 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key GroupSyncKey(mso-group) unblocked 1 Rebalance operations 22:40:22 22:40:22.396 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Assignment received from leader mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f for group mso-group for generation 1. The group has 1 members, 0 of which are static. 
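The lines above trace the standard two-step join: the first JOIN_GROUP carries an empty memberId, the coordinator replies with errorCode 79 (MEMBER_ID_REQUIRED) and a newly minted member id, and the rejoin with that id lets the group stabilize at generation 1, with the single member elected leader and handed the range assignment for my-test-topic-0. The protocols and timeouts advertised in the request map onto ordinary consumer settings; a hedged sketch follows, with the values read off the log (they also match the 3.3.x client defaults for the assignor list), while the helper class itself is purely illustrative.

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
    import org.apache.kafka.clients.consumer.RangeAssignor;

    final class GroupProtocolSettings {
        static Properties apply(Properties props) {
            // Both assignors are offered as JoinGroupRequestProtocol entries; the
            // coordinator picks "range", as the JOIN_GROUP response above shows.
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                    RangeAssignor.class.getName() + "," + CooperativeStickyAssignor.class.getName());
            // sessionTimeoutMs=50000 in the request corresponds to session.timeout.ms.
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "50000");
            // rebalanceTimeoutMs=600000 is taken from the consumer's max.poll.interval.ms.
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
            return props;
        }
    }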
22:40:22 22:40:22.447 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 1 (exclusive)with recovery point 1, last flushed: 1731192018876, current time: 1731192022447,unflushed: 1 22:40:22 22:40:22.484 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=1 segment=[0:458]) 22:40:22 22:40:22.488 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 67 ms 22:40:22 22:40:22.501 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:22 22:40:22.502 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received SYNC_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=27): SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 22:40:22 22:40:22.502 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received successful SyncGroup response: SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 22:40:22 22:40:22.502 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Successfully synced group in generation Generation{generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', protocol='range'} 22:40:22 22:40:22.502 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Executing onJoinComplete with generation 1 and memberId mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f 22:40:22 22:40:22.502 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Notifying assignor about the new Assignment(partitions=[my-test-topic-0]) 22:40:22 22:40:22.502 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":14,"requestApiVersion":5,"correlationId":27,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"SYNC_GROUP"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null,"protocolType":"consumer","protocolName":"range","assignments":[{"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"}]},"response":{"throttleTimeMs":0,"errorCode":0,"protocolType":"consumer","protocolName":"range","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":113.076,"requestQueueTimeMs":2.543,"localTimeMs":109.579,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.576,"sendTimeMs":0.375,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:22 22:40:22.505 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Adding newly assigned partitions: my-test-topic-0 22:40:22 22:40:22.508 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Fetching committed offsets for partitions: [my-test-topic-0] 22:40:22 22:40:22.510 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending OFFSET_FETCH request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=28) and timeout 30000 to node 2147483646: OffsetFetchRequestData(groupId='', topics=[], groups=[OffsetFetchRequestGroup(groupId='mso-group', topics=[OffsetFetchRequestTopics(name='my-test-topic', partitionIndexes=[0])])], requireStable=true) 22:40:22 22:40:22.527 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received OFFSET_FETCH response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=28): OffsetFetchResponseData(throttleTimeMs=0, topics=[], errorCode=0, groups=[OffsetFetchResponseGroup(groupId='mso-group', topics=[OffsetFetchResponseTopics(name='my-test-topic', partitions=[OffsetFetchResponsePartitions(partitionIndex=0, committedOffset=-1, committedLeaderEpoch=-1, metadata='', errorCode=0)])], errorCode=0)]) 22:40:22 22:40:22.527 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Found no committed offset for partition my-test-topic-0 22:40:22 22:40:22.527 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":9,"requestApiVersion":8,"correlationId":28,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"OFFSET_FETCH"},"request":{"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitionIndexes":[0]}]}],"requireStable":true},"response":{"throttleTimeMs":0,"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":-1,"committedLeaderEpoch":-1,"metadata":"","errorCode":0}]}],"errorCode":0}]},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":15.415,"requestQueueTimeMs":2.995,"localTimeMs":11.965,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.323,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:22 22:40:22.532 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending ListOffsetRequest ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) to broker localhost:43439 (id: 1 rack: null) 22:40:22 22:40:22.535 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending LIST_OFFSETS request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=29) and timeout 30000 to node 1: ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) 22:40:22 22:40:22.551 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received LIST_OFFSETS response from node 1 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=29): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='my-test-topic', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=0, leaderEpoch=0)])]) 22:40:22 22:40:22.552 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":2,"requestApiVersion":7,"correlationId":29,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"LIST_OFFSETS"},"request":{"replicaId":-1,"isolationLevel":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"currentLeaderEpoch":0,"timestamp":-1}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0,"timestamp":-1,"offset":0,"leaderEpoch":0}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":15.301,"requestQueueTimeMs":2.227,"localTimeMs":12.705,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.086,"sendTimeMs":0.281,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:22 22:40:22.552 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Handling ListOffsetResponse response for my-test-topic-0. Fetched offset 0, timestamp -1 22:40:22 22:40:22.554 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Not replacing existing epoch 0 with new epoch 0 for partition my-test-topic-0 22:40:22 22:40:22.555 [main] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Resetting offset for partition my-test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}}. 22:40:22 22:40:22.561 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:22 22:40:22.561 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 1 with 1 partition(s). 
22:40:22 22:40:22.562 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED FullFetchRequest(toSend=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:22 22:40:22.564 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=30) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=0, sessionEpoch=0, topics=[FetchTopic(topic='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=0, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 22:40:22 22:40:22.572 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new full FetchContext with 1 partition(s). 22:40:23 22:40:23.104 [executor-Fetch] DEBUG kafka.server.FetchSessionCache - Created fetch session FetchSession(id=1356245021, privileged=false, partitionMap.size=1, usesTopicIds=true, creationMs=1731192023100, lastUsedMs=1731192023100, epoch=1) 22:40:23 22:40:23.109 [executor-Fetch] DEBUG kafka.server.FullFetchContext - Full fetch context with session id 1356245021 returning 1 partition(s) 22:40:23 22:40:23.120 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=30): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[FetchableTopicResponse(topic='', topicId=PRLD570ERdK36hsbCawlJA, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3]))])]) 22:40:23 22:40:23.123 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent a full fetch response that created a new incremental fetch session 1356245021 with 1 response partition(s) 22:40:23 22:40:23.124 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":30,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":0,"sessionEpoch":0,"topics":[{"topicId":"PRLD570ERdK36hsbCawlJA","partitions":[{"partition":0,"currentLeaderEpoch":0,"fetchOffset":0,"lastFetchedEpoch":-1,"logStartOffset":-1,"partitionMaxBytes":1048576}]}],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[{"topicId":"PRLD570ERdK36hsbCawlJA","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":0,"lastStableOffset":0,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":0}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":554.195,"requestQueueTimeMs":3.39,"localTimeMs":22.31,"remoteTimeMs":527.581,"throttleTimeMs":0,"responseQueueTimeMs":0.183,"sendTimeMs":0.728,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:23 22:40:23.125 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3])) 22:40:23 22:40:23.129 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:23 22:40:23.131 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=1) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:23 22:40:23.131 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:23 22:40:23.131 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=31) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=1, topics=[], forgottenTopicsData=[], rackId='') 22:40:23 22:40:23.136 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 2: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:23 22:40:23.644 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:23 22:40:23.646 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=31): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:23 22:40:23.647 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:23 22:40:23.647 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:23 22:40:23.647 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=2) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:23 22:40:23.647 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":31,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":1,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":512.151,"requestQueueTimeMs":0.326,"localTimeMs":5.17,"remoteTimeMs":506.016,"throttleTimeMs":0,"responseQueueTimeMs":0.206,"sendTimeMs":0.43,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:23 22:40:23.647 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:23 22:40:23.648 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=32) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=2, topics=[], forgottenTopicsData=[], rackId='') 22:40:23 22:40:23.649 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 3: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:24 22:40:24.178 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:24 22:40:24.184 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=32): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:24 22:40:24.184 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:24 22:40:24.185 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:24 22:40:24.186 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=3) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:24 22:40:24.186 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:24 22:40:24.186 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=33) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=3, topics=[], forgottenTopicsData=[], rackId='') 22:40:24 22:40:24.186 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":32,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":2,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":536.43,"requestQueueTimeMs":0.226,"localTimeMs":1.437,"remoteTimeMs":531.833,"throttleTimeMs":0,"responseQueueTimeMs":0.602,"sendTimeMs":2.33,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:24 22:40:24.187 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 4: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:24 22:40:24.690 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:24 22:40:24.691 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=33): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:24 22:40:24.692 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":33,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":3,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.447,"requestQueueTimeMs":0.243,"localTimeMs":1.107,"remoteTimeMs":502.564,"throttleTimeMs":0,"responseQueueTimeMs":0.144,"sendTimeMs":0.387,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:24 22:40:24.692 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:24 22:40:24.693 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:24 22:40:24.695 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=4) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:24 22:40:24.695 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:24 22:40:24.696 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=34) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=4, topics=[], forgottenTopicsData=[], rackId='') 22:40:24 22:40:24.697 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 5: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:25 22:40:25.202 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:25 22:40:25.204 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=34): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:25 22:40:25.204 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":34,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":4,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":506.528,"requestQueueTimeMs":0.253,"localTimeMs":1.505,"remoteTimeMs":504.115,"throttleTimeMs":0,"responseQueueTimeMs":0.217,"sendTimeMs":0.437,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:25 22:40:25.205 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:25 22:40:25.207 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:25 22:40:25.208 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=5) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:25 22:40:25.209 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:25 22:40:25.209 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=35) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=5, topics=[], forgottenTopicsData=[], rackId='') 22:40:25 22:40:25.211 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 6: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:25 22:40:25.379 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f to coordinator localhost:43439 (id: 2147483646 rack: null) 22:40:25 22:40:25.382 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=36) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null) 22:40:25 22:40:25.389 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:25 22:40:25.394 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":36,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":10.428,"requestQueueTimeMs":2.686,"localTimeMs":7.327,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.106,"sendTimeMs":0.307,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:25 22:40:25.395 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=36): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 22:40:25 22:40:25.396 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received successful Heartbeat response 22:40:25 22:40:25.718 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:25 22:40:25.721 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=35): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:25 22:40:25.722 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:25 22:40:25.722 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":35,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":5,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":510.907,"requestQueueTimeMs":0.209,"localTimeMs":1.799,"remoteTimeMs":508.232,"throttleTimeMs":0,"responseQueueTimeMs":0.196,"sendTimeMs":0.469,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:25 22:40:25.723 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:25 22:40:25.724 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=6) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:25 22:40:25.724 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:25 22:40:25.724 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=37) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=6, topics=[], forgottenTopicsData=[], rackId='') 22:40:25 22:40:25.725 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 7: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:26 22:40:26.227 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:26 22:40:26.228 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=37): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:26 22:40:26.228 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:26 22:40:26.228 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":37,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":6,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.343,"requestQueueTimeMs":0.257,"localTimeMs":1.269,"remoteTimeMs":501.33,"throttleTimeMs":0,"responseQueueTimeMs":0.163,"sendTimeMs":0.322,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:26 22:40:26.229 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:26 22:40:26.229 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=7) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:26 22:40:26.229 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:26 22:40:26.229 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=38) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=7, topics=[], forgottenTopicsData=[], rackId='') 22:40:26 22:40:26.230 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 8: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:26 22:40:26.732 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:26 22:40:26.733 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=38): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:26 22:40:26.733 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:26 22:40:26.734 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":38,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":7,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.202,"requestQueueTimeMs":0.222,"localTimeMs":1.009,"remoteTimeMs":501.453,"throttleTimeMs":0,"responseQueueTimeMs":0.172,"sendTimeMs":0.344,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:26 22:40:26.734 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:26 22:40:26.734 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=8) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:26 22:40:26.735 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:26 22:40:26.735 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=39) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=8, topics=[], forgottenTopicsData=[], rackId='') 22:40:26 22:40:26.736 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 9: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:27 22:40:27.239 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:27 22:40:27.241 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=39): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:27 22:40:27.241 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:27 22:40:27.242 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:27 22:40:27.242 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=9) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:27 22:40:27.242 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:27 22:40:27.243 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=40) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=9, topics=[], forgottenTopicsData=[], rackId='') 22:40:27 22:40:27.244 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":39,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":8,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":505.292,"requestQueueTimeMs":0.213,"localTimeMs":1.388,"remoteTimeMs":502.767,"throttleTimeMs":0,"responseQueueTimeMs":0.315,"sendTimeMs":0.607,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:27 22:40:27.246 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 10: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:27 22:40:27.504 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 22:40:27 22:40:27.506 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=41) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 22:40:27 22:40:27.518 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key 
MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:27 22:40:27.523 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 2 (exclusive)with recovery point 2, last flushed: 1731192022484, current time: 1731192027523,unflushed: 1 22:40:27 22:40:27.560 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=1 segment=[0:458]) to (offset=2 segment=[0:582]) 22:40:27 22:40:27.560 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 38 ms 22:40:27 22:40:27.572 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=41): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 22:40:27 22:40:27.573 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 22:40:27 22:40:27.573 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 22:40:27 22:40:27.573 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":41,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":65.176,"requestQueueTimeMs":5.677,"localTimeMs":58.826,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.11,"sendTimeMs":0.563,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:27 22:40:27.749 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:27 22:40:27.750 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, 
apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=40): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:27 22:40:27.750 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:27 22:40:27.751 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":40,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":9,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":505.07,"requestQueueTimeMs":1.174,"localTimeMs":1.368,"remoteTimeMs":501.82,"throttleTimeMs":0,"responseQueueTimeMs":0.199,"sendTimeMs":0.507,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:27 22:40:27.751 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:27 22:40:27.752 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=10) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:27 22:40:27.752 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:27 22:40:27.752 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=42) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=10, topics=[], forgottenTopicsData=[], rackId='') 22:40:27 22:40:27.756 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 11: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:28 22:40:28.259 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:28 22:40:28.261 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=42): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:28 22:40:28.261 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:28 22:40:28.261 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":42,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":10,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.632,"requestQueueTimeMs":0.351,"localTimeMs":1.365,"remoteTimeMs":502.174,"throttleTimeMs":0,"responseQueueTimeMs":0.164,"sendTimeMs":0.575,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:28 22:40:28.262 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:28 22:40:28.262 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=11) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:28 22:40:28.262 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:28 22:40:28.262 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=43) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=11, topics=[], forgottenTopicsData=[], rackId='') 22:40:28 22:40:28.263 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 12: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:28 22:40:28.380 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f to coordinator localhost:43439 (id: 2147483646 rack: null) 22:40:28 22:40:28.381 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=44) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null) 22:40:28 22:40:28.382 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:28 22:40:28.384 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=44): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 22:40:28 22:40:28.384 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received successful Heartbeat response 22:40:28 22:40:28.384 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":44,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":2.074,"requestQueueTimeMs":0.284,"localTimeMs":1.323,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.134,"sendTimeMs":0.331,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:28 22:40:28.766 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:28 22:40:28.767 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=43): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:28 22:40:28.768 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:28 22:40:28.768 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:28 22:40:28.768 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=12) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:28 22:40:28.768 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:28 22:40:28.769 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=45) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=12, topics=[], forgottenTopicsData=[], rackId='') 22:40:28 22:40:28.769 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":43,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":11,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.548,"requestQueueTimeMs":0.326,"localTimeMs":1.464,"remoteTimeMs":501.991,"throttleTimeMs":0,"responseQueueTimeMs":0.16,"sendTimeMs":0.606,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:28 22:40:28.770 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 13: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:29 22:40:29.209 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:29 22:40:29.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 22:40:29 22:40:29.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 22:40:29 22:40:29.210 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000001ca2b0000 after 1ms. 
22:40:29 22:40:29.272 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:29 22:40:29.273 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=45): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:29 22:40:29.274 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:29 22:40:29.274 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:29 22:40:29.274 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=13) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:29 22:40:29.274 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:29 22:40:29.275 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":45,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":12,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.113,"requestQueueTimeMs":0.304,"localTimeMs":1.462,"remoteTimeMs":501.638,"throttleTimeMs":0,"responseQueueTimeMs":0.208,"sendTimeMs":0.499,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:29 22:40:29.275 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=46) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, 
maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=13, topics=[], forgottenTopicsData=[], rackId='') 22:40:29 22:40:29.276 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 14: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:29 22:40:29.779 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:29 22:40:29.780 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=46): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:29 22:40:29.780 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:29 22:40:29.781 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:29 22:40:29.781 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=14) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:29 22:40:29.781 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:29 22:40:29.781 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=47) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=14, topics=[], forgottenTopicsData=[], rackId='') 22:40:29 22:40:29.782 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":46,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":13,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":505.295,"requestQueueTimeMs":0.277,"localTimeMs":1.968,"remoteTimeMs":501.655,"throttleTimeMs":0,"responseQueueTimeMs":0.131,"sendTimeMs":1.263,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:29 22:40:29.783 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 15: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:30 22:40:30.286 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:30 22:40:30.287 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=47): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:30 22:40:30.287 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:30 22:40:30.288 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:30 22:40:30.288 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=15) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:30 22:40:30.288 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":47,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":14,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.467,"requestQueueTimeMs":0.225,"localTimeMs":1.778,"remoteTimeMs":501.768,"throttleTimeMs":0,"responseQueueTimeMs":0.171,"sendTimeMs":0.522,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:30 22:40:30.288 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:30 22:40:30.289 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=48) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=15, topics=[], forgottenTopicsData=[], rackId='') 22:40:30 22:40:30.291 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 16: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:30 22:40:30.793 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:30 22:40:30.795 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=48): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:30 22:40:30.795 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response 
partition(s), 1 implied partition(s) 22:40:30 22:40:30.795 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":48,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":15,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.427,"requestQueueTimeMs":0.249,"localTimeMs":1.595,"remoteTimeMs":501.909,"throttleTimeMs":0,"responseQueueTimeMs":0.184,"sendTimeMs":0.488,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:30 22:40:30.796 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:30 22:40:30.796 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=16) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:30 22:40:30.796 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:30 22:40:30.796 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=49) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=16, topics=[], forgottenTopicsData=[], rackId='') 22:40:30 22:40:30.798 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 17: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:30 22:40:30.937 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.947 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46. 
Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.947 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.947 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.948 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.948 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.948 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.948 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.948 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.948 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.948 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.949 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.949 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.949 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.949 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12. 
Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.949 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.949 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.949 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.949 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.950 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.950 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.950 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.950 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.950 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.950 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.950 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.951 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.951 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48. 
Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.951 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.951 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.951 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.952 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.952 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.952 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.952 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.952 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.952 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43. 
Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.953 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.954 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.954 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.954 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35. Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:30 22:40:30.954 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2. 
Last clean offset=None now=1731192030929 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 22:40:31 22:40:31.300 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:31 22:40:31.301 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=49): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:31 22:40:31.302 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:31 22:40:31.302 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":49,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":16,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.591,"requestQueueTimeMs":0.258,"localTimeMs":1.242,"remoteTimeMs":501.369,"throttleTimeMs":0,"responseQueueTimeMs":0.205,"sendTimeMs":0.515,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:31 22:40:31.302 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:31 22:40:31.302 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=17) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:31 22:40:31.303 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:31 22:40:31.303 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=50) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=17, topics=[], forgottenTopicsData=[], rackId='') 22:40:31 22:40:31.305 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 18: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:31 22:40:31.380 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f to coordinator localhost:43439 (id: 2147483646 rack: null) 22:40:31 22:40:31.381 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=51) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null) 22:40:31 22:40:31.382 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:31 22:40:31.384 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":51,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":1.745,"requestQueueTimeMs":0.161,"localTimeMs":1.21,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.152,"sendTimeMs":0.22,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 
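The FETCH/HEARTBEAT traffic above is the consumer's idle loop: every FETCH carries maxWaitMs=500, minBytes=1 and maxBytes=52428800, the broker parks the empty request for roughly 500 ms (remoteTimeMs ≈ 501, totalTimeMs ≈ 503), and group mso-group is heartbeated about every 3 seconds. A minimal Java consumer configuration that would generate this traffic could look like the sketch below; the bootstrap address, group id, topic and fetch settings are taken from the log (they also happen to be the client defaults), while the PLAIN SASL mechanism, the password placeholder and the String deserializers are assumptions.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values visible in the log above (the four fetch settings are also the client defaults)
        props.put("bootstrap.servers", "localhost:43439");   // broker node 1 in this embedded cluster
        props.put("group.id", "mso-group");
        props.put("client.id", "mso-123456-consumer");       // the log shows this id with a UUID suffix
        props.put("fetch.max.wait.ms", "500");                // maxWaitMs=500 in every FETCH request
        props.put("fetch.min.bytes", "1");                    // minBytes=1
        props.put("fetch.max.bytes", "52428800");             // maxBytes=52428800
        props.put("heartbeat.interval.ms", "3000");           // HEARTBEAT roughly every 3 s above
        props.put("enable.auto.commit", "true");              // asynchronous auto-commit seen at 22:40:32
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // SASL_PLAINTEXT and the admin principal appear in the log; mechanism and password are assumed
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"<placeholder>\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            // Each poll drives one round of the incremental FETCH session recorded above
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}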
22:40:31 22:40:31.384 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=51): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 22:40:31 22:40:31.384 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received successful Heartbeat response 22:40:31 22:40:31.808 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:31 22:40:31.809 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=50): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:31 22:40:31.809 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":50,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":17,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.503,"requestQueueTimeMs":0.55,"localTimeMs":1.229,"remoteTimeMs":502.178,"throttleTimeMs":0,"responseQueueTimeMs":0.119,"sendTimeMs":0.425,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:31 22:40:31.809 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:31 22:40:31.810 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:31 22:40:31.810 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=18) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:31 22:40:31.810 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:31 22:40:31.810 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=52) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=18, topics=[], forgottenTopicsData=[], rackId='') 22:40:31 22:40:31.812 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 19: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:32 22:40:32.314 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:32 22:40:32.315 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=52): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:32 22:40:32.315 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:32 22:40:32.315 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":52,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":18,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.437,"requestQueueTimeMs":0.206,"localTimeMs":1.132,"remoteTimeMs":501.652,"throttleTimeMs":0,"responseQueueTimeMs":0.123,"sendTimeMs":0.322,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:32 22:40:32.316 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:32 22:40:32.316 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=19) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:32 22:40:32.316 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:32 22:40:32.316 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=53) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=19, topics=[], forgottenTopicsData=[], rackId='') 22:40:32 22:40:32.318 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 20: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:32 22:40:32.503 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 22:40:32 22:40:32.503 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=54) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 22:40:32 22:40:32.505 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:32 22:40:32.508 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1731192027560, current time: 1731192032508,unflushed: 1 22:40:32 22:40:32.521 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=2 segment=[0:582]) to (offset=3 
segment=[0:706]) 22:40:32 22:40:32.521 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 14 ms 22:40:32 22:40:32.523 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=54): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 22:40:32 22:40:32.524 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 22:40:32 22:40:32.524 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":54,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":19.459,"requestQueueTimeMs":0.241,"localTimeMs":18.846,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.091,"sendTimeMs":0.28,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:32 22:40:32.524 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 22:40:32 22:40:32.820 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:32 22:40:32.821 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=53): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:32 22:40:32.821 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":53,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":19,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.842,"requestQueueTimeMs":0.235,"localTimeMs":1.451,"remoteTimeMs":501.853,"throttleTimeMs":0,"responseQueueTimeMs":0.102,"sendTimeMs":0.2,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:32 22:40:32.822 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:32 22:40:32.822 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:32 22:40:32.822 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=20) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:32 22:40:32.822 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:32 22:40:32.823 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=55) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=20, topics=[], forgottenTopicsData=[], rackId='') 22:40:32 22:40:32.824 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 21: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:33 22:40:33.326 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:33 22:40:33.327 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=55): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:33 22:40:33.328 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:33 22:40:33.328 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":55,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":20,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.853,"requestQueueTimeMs":0.261,"localTimeMs":1.504,"remoteTimeMs":501.583,"throttleTimeMs":0,"responseQueueTimeMs":0.127,"sendTimeMs":0.377,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:33 22:40:33.334 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:33 22:40:33.334 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=21) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:33 22:40:33.334 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:33 22:40:33.334 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=56) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=21, topics=[], forgottenTopicsData=[], rackId='') 22:40:33 22:40:33.335 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 22: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:33 22:40:33.838 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:33 22:40:33.839 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=56): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:33 22:40:33.839 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":56,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":21,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.72,"requestQueueTimeMs":0.194,"localTimeMs":1.157,"remoteTimeMs":501.847,"throttleTimeMs":0,"responseQueueTimeMs":0.203,"sendTimeMs":0.318,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:33 22:40:33.839 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response 
partition(s), 1 implied partition(s) 22:40:33 22:40:33.840 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:33 22:40:33.840 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=22) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:33 22:40:33.840 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:33 22:40:33.841 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=57) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=22, topics=[], forgottenTopicsData=[], rackId='') 22:40:33 22:40:33.842 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 23: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:34 22:40:34.345 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:34 22:40:34.347 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":57,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":22,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.702,"requestQueueTimeMs":0.211,"localTimeMs":1.626,"remoteTimeMs":502.468,"throttleTimeMs":0,"responseQueueTimeMs":0.121,"sendTimeMs":0.273,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:34 22:40:34.347 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=57): 
FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:34 22:40:34.347 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:34 22:40:34.348 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:34 22:40:34.348 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=23) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:34 22:40:34.348 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:34 22:40:34.349 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=58) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=23, topics=[], forgottenTopicsData=[], rackId='') 22:40:34 22:40:34.350 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 24: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:34 22:40:34.381 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f to coordinator localhost:43439 (id: 2147483646 rack: null) 22:40:34 22:40:34.381 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=59) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null) 22:40:34 22:40:34.382 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request 
key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:34 22:40:34.383 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=59): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 22:40:34 22:40:34.383 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":59,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":1.75,"requestQueueTimeMs":0.193,"localTimeMs":1.208,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.092,"sendTimeMs":0.256,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:34 22:40:34.384 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received successful Heartbeat response 22:40:34 22:40:34.853 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:34 22:40:34.854 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=58): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:34 22:40:34.855 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":58,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":23,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.581,"requestQueueTimeMs":0.22,"localTimeMs":1.611,"remoteTimeMs":502.305,"throttleTimeMs":0,"responseQueueTimeMs":0.123,"sendTimeMs":0.319,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:34 22:40:34.855 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] 
Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:34 22:40:34.855 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:34 22:40:34.856 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=24) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:34 22:40:34.856 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:34 22:40:34.856 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=60) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=24, topics=[], forgottenTopicsData=[], rackId='') 22:40:34 22:40:34.858 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 25: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:35 22:40:35.360 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:35 22:40:35.361 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":60,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":24,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.361,"requestQueueTimeMs":0.288,"localTimeMs":1.402,"remoteTimeMs":501.256,"throttleTimeMs":0,"responseQueueTimeMs":0.176,"sendTimeMs":0.237,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:35 22:40:35.361 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=60): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:35 22:40:35.361 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:35 22:40:35.362 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:35 22:40:35.363 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=25) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:35 22:40:35.363 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:35 22:40:35.363 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=61) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') 22:40:35 22:40:35.366 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 26: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:35 22:40:35.868 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:35 22:40:35.869 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=61): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:35 22:40:35.870 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":61,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":25,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.692,"requestQueueTimeMs":1.07,"localTimeMs":1.766,"remoteTimeMs":501.294,"throttleTimeMs":0,"responseQueueTimeMs":0.226,"sendTimeMs":0.334,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:35 22:40:35.870 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:35 22:40:35.871 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:35 22:40:35.871 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=26) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:35 22:40:35.872 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:35 22:40:35.872 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=62) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=26, topics=[], forgottenTopicsData=[], rackId='') 22:40:35 22:40:35.873 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 27: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:36 22:40:36.375 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:36 22:40:36.377 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=62): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:36 22:40:36.377 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:36 22:40:36.377 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":62,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":26,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":503.559,"requestQueueTimeMs":0.217,"localTimeMs":1.607,"remoteTimeMs":501.241,"throttleTimeMs":0,"responseQueueTimeMs":0.197,"sendTimeMs":0.294,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:36 22:40:36.378 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:36 22:40:36.378 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=27) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:36 22:40:36.378 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:36 22:40:36.378 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=63) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=27, topics=[], forgottenTopicsData=[], rackId='') 22:40:36 22:40:36.379 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 28: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:36 22:40:36.882 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:36 22:40:36.883 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=63): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:36 22:40:36.884 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":63,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":27,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":504.4,"requestQueueTimeMs":0.262,"localTimeMs":1.241,"remoteTimeMs":502.471,"throttleTimeMs":0,"responseQueueTimeMs":0.108,"sendTimeMs":0.316,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:36 22:40:36.884 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response 
partition(s), 1 implied partition(s) 22:40:36 22:40:36.884 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:36 22:40:36.885 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=28) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:36 22:40:36.885 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:36 22:40:36.885 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=64) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=28, topics=[], forgottenTopicsData=[], rackId='') 22:40:36 22:40:36.887 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 29: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:37 22:40:37.384 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f to coordinator localhost:43439 (id: 2147483646 rack: null) 22:40:37 22:40:37.384 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=65) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null) 22:40:37 22:40:37.387 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:37 22:40:37.392 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:37 22:40:37.393 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=65): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 22:40:37 22:40:37.393 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received successful Heartbeat response 22:40:37 22:40:37.393 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":65,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":7.828,"requestQueueTimeMs":0.309,"localTimeMs":6.562,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.648,"sendTimeMs":0.306,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:37 22:40:37.394 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=64): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:37 22:40:37.394 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:37 22:40:37.394 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:37 22:40:37.394 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=29) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:37 22:40:37.394 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:37 22:40:37.394 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":64,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":28,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":507.675,"requestQueueTimeMs":0.194,"localTimeMs":1.536,"remoteTimeMs":504.741,"throttleTimeMs":0,"responseQueueTimeMs":1.046,"sendTimeMs":0.157,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:37 22:40:37.394 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=66) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=29, topics=[], forgottenTopicsData=[], rackId='') 22:40:37 22:40:37.395 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 30: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:37 22:40:37.504 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 22:40:37 22:40:37.504 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=67) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 22:40:37 22:40:37.505 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key 
MemberKey(mso-group,mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f) unblocked 1 Heartbeat operations 22:40:37 22:40:37.507 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 4 (exclusive)with recovery point 4, last flushed: 1731192032521, current time: 1731192037507,unflushed: 1 22:40:37 22:40:37.548 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=3 segment=[0:706]) to (offset=4 segment=[0:830]) 22:40:37 22:40:37.548 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 42 ms 22:40:37 22:40:37.549 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=67): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 22:40:37 22:40:37.549 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":67,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415-a2f2c937-3d5e-4b15-b8e4-838bf2b3bb3f","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47584-3","totalTimeMs":44.206,"requestQueueTimeMs":0.225,"localTimeMs":43.64,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.14,"sendTimeMs":0.2,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:37 22:40:37.549 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 22:40:37 22:40:37.550 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 22:40:37 22:40:37.896 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 0 partition(s) 22:40:37 22:40:37.897 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=66): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[]) 22:40:37 22:40:37.898 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":66,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":29,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":502.382,"requestQueueTimeMs":0.168,"localTimeMs":0.909,"remoteTimeMs":500.884,"throttleTimeMs":0,"responseQueueTimeMs":0.1,"sendTimeMs":0.319,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:37 22:40:37.898 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 0 response partition(s), 1 implied partition(s) 22:40:37 22:40:37.898 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:37 22:40:37.898 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=30) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:37 22:40:37.898 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:37 22:40:37.899 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=68) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=30, topics=[], forgottenTopicsData=[], rackId='') 22:40:37 22:40:37.900 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 31: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 22:40:38 22:40:38.183 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 22:40:38 acks = -1 22:40:38 batch.size = 16384 22:40:38 bootstrap.servers = [SASL_PLAINTEXT://localhost:43439] 22:40:38 buffer.memory = 33554432 22:40:38 client.dns.lookup = use_all_dns_ips 22:40:38 client.id = mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb 22:40:38 compression.type = none 22:40:38 connections.max.idle.ms = 540000 22:40:38 delivery.timeout.ms = 120000 22:40:38 enable.idempotence = true 22:40:38 interceptor.classes = [] 22:40:38 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 22:40:38 linger.ms = 0 22:40:38 max.block.ms = 60000 22:40:38 max.in.flight.requests.per.connection = 5 22:40:38 max.request.size = 1048576 22:40:38 metadata.max.age.ms = 300000 22:40:38 metadata.max.idle.ms = 300000 22:40:38 metric.reporters = [] 22:40:38 metrics.num.samples = 2 22:40:38 metrics.recording.level = INFO 22:40:38 metrics.sample.window.ms = 30000 22:40:38 partitioner.adaptive.partitioning.enable = true 22:40:38 partitioner.availability.timeout.ms = 0 22:40:38 partitioner.class = null 22:40:38 partitioner.ignore.keys = false 22:40:38 receive.buffer.bytes = 32768 22:40:38 reconnect.backoff.max.ms = 1000 22:40:38 reconnect.backoff.ms = 50 22:40:38 request.timeout.ms = 30000 22:40:38 retries = 2147483647 22:40:38 retry.backoff.ms = 100 22:40:38 sasl.client.callback.handler.class = null 22:40:38 sasl.jaas.config = [hidden] 22:40:38 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:38 sasl.kerberos.min.time.before.relogin = 60000 22:40:38 sasl.kerberos.service.name = null 22:40:38 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:38 sasl.kerberos.ticket.renew.window.factor = 0.8 22:40:38 sasl.login.callback.handler.class = null 22:40:38 sasl.login.class = null 22:40:38 sasl.login.connect.timeout.ms = null 22:40:38 sasl.login.read.timeout.ms = null 22:40:38 sasl.login.refresh.buffer.seconds = 300 22:40:38 sasl.login.refresh.min.period.seconds = 60 22:40:38 sasl.login.refresh.window.factor = 0.8 22:40:38 sasl.login.refresh.window.jitter = 0.05 22:40:38 sasl.login.retry.backoff.max.ms = 10000 22:40:38 sasl.login.retry.backoff.ms = 100 22:40:38 sasl.mechanism = PLAIN 22:40:38 
sasl.oauthbearer.clock.skew.seconds = 30 22:40:38 sasl.oauthbearer.expected.audience = null 22:40:38 sasl.oauthbearer.expected.issuer = null 22:40:38 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:38 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:38 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:38 sasl.oauthbearer.jwks.endpoint.url = null 22:40:38 sasl.oauthbearer.scope.claim.name = scope 22:40:38 sasl.oauthbearer.sub.claim.name = sub 22:40:38 sasl.oauthbearer.token.endpoint.url = null 22:40:38 security.protocol = SASL_PLAINTEXT 22:40:38 security.providers = null 22:40:38 send.buffer.bytes = 131072 22:40:38 socket.connection.setup.timeout.max.ms = 30000 22:40:38 socket.connection.setup.timeout.ms = 10000 22:40:38 ssl.cipher.suites = null 22:40:38 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 22:40:38 ssl.endpoint.identification.algorithm = https 22:40:38 ssl.engine.factory.class = null 22:40:38 ssl.key.password = null 22:40:38 ssl.keymanager.algorithm = SunX509 22:40:38 ssl.keystore.certificate.chain = null 22:40:38 ssl.keystore.key = null 22:40:38 ssl.keystore.location = null 22:40:38 ssl.keystore.password = null 22:40:38 ssl.keystore.type = JKS 22:40:38 ssl.protocol = TLSv1.3 22:40:38 ssl.provider = null 22:40:38 ssl.secure.random.implementation = null 22:40:38 ssl.trustmanager.algorithm = PKIX 22:40:38 ssl.truststore.certificates = null 22:40:38 ssl.truststore.location = null 22:40:38 ssl.truststore.password = null 22:40:38 ssl.truststore.type = JKS 22:40:38 transaction.timeout.ms = 60000 22:40:38 transactional.id = null 22:40:38 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 22:40:38 22:40:38 22:40:38.196 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Instantiated an idempotent producer. 22:40:38 22:40:38.213 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 22:40:38 22:40:38.213 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 22:40:38 22:40:38.213 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1731192038213 22:40:38 22:40:38.213 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Kafka producer started 22:40:38 22:40:38.215 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Starting Kafka producer I/O thread. 
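
(Illustrative sketch, not part of the console output.) The ProducerConfig dump above describes an idempotent producer pointed at SASL_PLAINTEXT://localhost:43439, using the PLAIN mechanism and String serializers. Assuming those values, a minimal Java construction of such a producer could look roughly like the following; the class name is invented for illustration, and the JAAS credentials are placeholders because the log masks sasl.jaas.config as [hidden].

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class IdempotentProducerSketch {
        public static KafkaProducer<String, String> build() {
            Properties props = new Properties();
            // Values mirror the ProducerConfig dump printed above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:43439");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // implies acks=all and retries=Integer.MAX_VALUE
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // sasl.jaas.config is shown as [hidden] in the log; the credentials below are placeholders.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");
            return new KafkaProducer<>(props);
        }
    }
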
22:40:38 22:40:38.217 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Transition from state UNINITIALIZED to INITIALIZING 22:40:38 22:40:38.221 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:38 22:40:38.221 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: -1 rack: null) for sending metadata request 22:40:38 22:40:38.222 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:38 22:40:38.222 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:38 22:40:38.222 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.222 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.223 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:38250 on /127.0.0.1:43439 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:38 22:40:38.223 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:38250 22:40:38 22:40:38.227 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 22:40:38 22:40:38.227 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:38 22:40:38.227 [kafka-producer-network-thread | 
mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Completed connection to node -1. Fetching API versions. 22:40:38 22:40:38.227 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:38 22:40:38.227 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:38 22:40:38.228 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:38 22:40:38.228 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:38 22:40:38.229 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:38 22:40:38.229 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:38 22:40:38.229 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:38 22:40:38.229 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to INITIAL 22:40:38 22:40:38.229 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to INTERMEDIATE 22:40:38 22:40:38.229 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:38 22:40:38.229 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:38 22:40:38.230 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:38 22:40:38.230 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:38 22:40:38.231 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to COMPLETE 22:40:38 22:40:38.231 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Finished authentication with no session expiration and no session re-authentication 22:40:38 22:40:38.231 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Successfully authenticated with localhost/127.0.0.1 22:40:38 22:40:38.231 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating API versions fetch from node -1. 22:40:38 22:40:38.231 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=0) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:38 22:40:38.233 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:38250-4","totalTimeMs":1.059,"requestQueueTimeMs":0.212,"localTimeMs":0.669,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.049,"sendTimeMs":0.127,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:38 22:40:38.234 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:38 22:40:38.235 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
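
(Illustrative sketch, not part of the console output.) The entries that follow show the new producer requesting METADATA for my-test-topic and then performing an InitProducerId exchange; this is the bootstrap work an idempotent producer does before its first record is acknowledged. A hedged sketch of the application-side call that drives this traffic is below; the key and value are illustrative, since the actual payload is not visible in this excerpt, and the class and method names are hypothetical.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerSendSketch {
        // Key and value are placeholders; only the topic name comes from the log.
        static void sendOne(KafkaProducer<String, String> producer) {
            producer.send(new ProducerRecord<>("my-test-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("acked %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
            producer.flush(); // block until the record above has been acknowledged
        }
    }
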
22:40:38 22:40:38.235 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: -1 rack: null) 22:40:38 22:40:38.235 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=1) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:38 22:40:38.235 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) to node localhost:43439 (id: -1 rack: null) with correlation ID 2 22:40:38 22:40:38.235 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending INIT_PRODUCER_ID request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=2) and timeout 30000 to node -1: InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:38 22:40:38.237 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":43439,"rack":null}],"clusterId":"OB576afJSxuIrhF9nQXaaA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"PRLD570ERdK36hsbCawlJA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:43439-127.0.0.1:38250-4","totalTimeMs":1.522,"requestQueueTimeMs":0.139,"localTimeMs":1.196,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.082,"sendTimeMs":0.103,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:38 22:40:38.239 
[kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=43439, rack=null)], clusterId='OB576afJSxuIrhF9nQXaaA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 22:40:38 22:40:38.239 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to PRLD570ERdK36hsbCawlJA 22:40:38 22:40:38.240 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Cluster ID: OB576afJSxuIrhF9nQXaaA 22:40:38 22:40:38.240 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='OB576afJSxuIrhF9nQXaaA', nodes={1=localhost:43439 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:43439 (id: 1 rack: null)} 22:40:38 22:40:38.240 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Requesting next Producer ID block 22:40:38 22:40:38.244 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:38 22:40:38.244 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:38 22:40:38.244 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.244 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.245 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - 
Accepted connection from /127.0.0.1:38252 on /127.0.0.1:43439 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:38 22:40:38.245 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:38252 22:40:38 22:40:38.248 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 22:40:38 22:40:38.248 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:38 22:40:38.248 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:38 22:40:38.248 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:38 22:40:38.248 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:38 22:40:38.248 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Completed connection to node 1. Fetching API versions. 
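
(Illustrative sketch, not part of the console output.) For completeness: the fetch, heartbeat, and asynchronous auto-commit cycle that dominates the earlier part of this excerpt comes from the consumer in group mso-group polling my-test-topic. A rough Java sketch of a consumer configured to generate that traffic is below; the group id, topic, bootstrap address, and auto-commit behaviour are taken from the log, while the deserializers, credentials, and class name are assumptions.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ConsumerSketch {
        public static void pollOnce() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:43439");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true); // matches the asynchronous auto-commits seen above
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder credentials; the real JAAS configuration is not shown in this log.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // poll() issues the incremental FETCH requests traced above; heartbeats run on a
                // background thread and offsets are auto-committed periodically.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        }
    }
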
22:40:38 22:40:38.249 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:38 22:40:38.249 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:38 22:40:38.249 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:38 22:40:38.249 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:38 22:40:38.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:38 22:40:38.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INITIAL 22:40:38 22:40:38.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INTERMEDIATE 22:40:38 22:40:38.250 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:38 22:40:38.250 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:38 22:40:38.250 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:38 22:40:38.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to COMPLETE 22:40:38 22:40:38.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Finished authentication with no session expiration and no session re-authentication 22:40:38 22:40:38.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Successfully authenticated with localhost/127.0.0.1 22:40:38 22:40:38.250 [TestBroker:1:BrokerToControllerChannelManager broker=1 
name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating API versions fetch from node 1. 22:40:38 22:40:38.251 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:38 22:40:38.252 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"1","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersio
n":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:38252-4","totalTimeMs":0.846,"requestQueueTimeMs":0.197,"localTimeMs":0.51,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.045,"sendTimeMs":0.092,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:38 22:40:38.253 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, 
maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:38 22:40:38.253 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
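
[Editor's note, not part of the captured console output: the SASL PLAIN handshake and API_VERSIONS negotiation traced above are what the standard Kafka client security settings produce on the wire. A minimal client-side configuration sketch, assuming the embedded test broker's SASL_PLAINTEXT listener on localhost:43439; the helper class and credentials are illustrative only, not taken from this build.]

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;

    public class SaslPlainClientConfig {
        // Returns client properties that would drive the SASL PLAIN exchange seen above.
        public static Properties baseProps(String bootstrap, String user, String password) {
            Properties props = new Properties();
            // Broker address; the embedded test broker in this log listens on localhost:43439.
            props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            // SASL over an unencrypted connection, matching the SASL_PLAINTEXT listener above.
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            // PLAIN mechanism, matching "Using SASL mechanism 'PLAIN' provided by client".
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"" + user + "\" password=\"" + password + "\";");
            return props;
        }
    }
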
22:40:38 22:40:38.260 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending ALLOCATE_PRODUCER_IDS request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0) and timeout 30000 to node 1: AllocateProducerIdsRequestData(brokerId=1, brokerEpoch=25) 22:40:38 22:40:38.266 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:38 22:40:38.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:getData cxid:0x100 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 22:40:38 22:40:38.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:getData cxid:0x100 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 22:40:38 22:40:38.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 22:40:38 22:40:38.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:38 ] 22:40:38 22:40:38.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:38 , 'ip,'127.0.0.1 22:40:38 ] 22:40:38 22:40:38.267 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:38 22:40:38.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 22:40:38 22:40:38.267 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 22:40:38 22:40:38.267 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 256,4 replyHeader:: 256,139,0 request:: '/latest_producer_id_block,F response:: ,s{15,15,1731192015217,1731192015217,0,0,0,0,0,0,15} 22:40:38 22:40:38.268 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000001ca2b0000 after 1ms. 
22:40:38 22:40:38.268 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 22:40:38 22:40:38.269 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000001ca2b0000 22:40:38 22:40:38.269 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 22:40:38 22:40:38.269 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 22:40:38 ] 22:40:38 22:40:38.269 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 22:40:38 , 'ip,'127.0.0.1 22:40:38 ] 22:40:38 22:40:38.270 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 278819852263 22:40:38 22:40:38.272 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:setData cxid:0x101 zxid:0x8c txntype:5 reqpath:n/a 22:40:38 22:40:38.272 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 22:40:38 22:40:38.272 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8c, Digest in log and actual tree: 280621823082 22:40:38 22:40:38.273 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:setData cxid:0x101 zxid:0x8c txntype:5 reqpath:n/a 22:40:38 22:40:38.273 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 257,5 replyHeader:: 257,140,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,0 response:: s{15,140,1731192015217,1731192038269,1,0,0,0,60,0,15} 22:40:38 22:40:38.274 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":1,"block_start":"0","block_end":"999"} and expected version 0 succeeded, returning the new version: 1 22:40:38 22:40:38.274 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 22:40:38 22:40:38.277 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received ALLOCATE_PRODUCER_IDS response from node 1 for request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0): AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 22:40:38 22:40:38.277 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":67,"requestApiVersion":0,"correlationId":0,"clientId":"1","requestApiKeyName":"ALLOCATE_PRODUCER_IDS"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"throttleTimeMs":0,"errorCode":0,"producerIdStart":0,"producerIdLen":1000},"connection":"127.0.0.1:43439-127.0.0.1:38252-4","totalTimeMs":15.766,"requestQueueTimeMs":1.199,"localTimeMs":1.071,"remoteTimeMs":13.106,"throttleTimeMs":0,"responseQueueTimeMs":0.181,"sendTimeMs":0.208,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:38 22:40:38.278 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Got next producer ID block from controller AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 22:40:38 22:40:38.280 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Received INIT_PRODUCER_ID response from node -1 for request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=2): InitProducerIdResponseData(throttleTimeMs=0, errorCode=0, producerId=0, producerEpoch=0) 22:40:38 22:40:38.280 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":22,"requestApiVersion":4,"correlationId":2,"clientId":"mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb","requestApiKeyName":"INIT_PRODUCER_ID"},"request":{"transactionalId":null,"transactionTimeoutMs":2147483647,"producerId":-1,"producerEpoch":-1},"response":{"throttleTimeMs":0,"errorCode":0,"producerId":0,"producerEpoch":0},"connection":"127.0.0.1:43439-127.0.0.1:38250-4","totalTimeMs":43.125,"requestQueueTimeMs":1.112,"localTimeMs":41.846,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.045,"sendTimeMs":0.12,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:38 22:40:38.280 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] ProducerId set to 0 with epoch 0 22:40:38 22:40:38.281 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Transition from state INITIALIZING to READY 22:40:38 22:40:38.282 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:38 22:40:38.282 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using 
address localhost/127.0.0.1 22:40:38 22:40:38.282 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.282 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.282 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:38258 22:40:38 22:40:38.282 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:38258 on /127.0.0.1:43439 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:38 22:40:38.284 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 22:40:38 22:40:38.284 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:38 22:40:38.284 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:38 22:40:38.284 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:38 22:40:38.284 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Completed connection to node 1. Fetching API versions. 
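
[Editor's note, not part of the captured console output: the INIT_PRODUCER_ID round trip, the transition to READY, and the later acks=-1 PRODUCE to my-test-topic logged around here are the normal trace of an idempotent producer. A minimal sketch under those assumptions; the class name, key, and value are illustrative, and the SASL properties from the earlier sketch would still be required to reach this broker.]

    import java.util.Properties;
    import java.util.concurrent.Future;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class IdempotentProducerSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439"); // embedded test broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Idempotence forces an InitProducerId request and acks=all, which is what
            // produces the INIT_PRODUCER_ID and acks=-1 PRODUCE entries in this log.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            // SASL settings omitted here; see the PLAIN configuration sketch earlier.

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                Future<RecordMetadata> ack =
                        producer.send(new ProducerRecord<>("my-test-topic", "key", "value"));
                System.out.println("offset=" + ack.get().offset());
            }
        }
    }
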
22:40:38 22:40:38.285 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:38 22:40:38.285 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:38 22:40:38.285 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:38 22:40:38.285 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:38 22:40:38.285 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:38 22:40:38.285 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to INITIAL 22:40:38 22:40:38.285 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to INTERMEDIATE 22:40:38 22:40:38.286 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:38 22:40:38.286 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:38 22:40:38.286 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:38 22:40:38.286 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:38 22:40:38.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to COMPLETE 22:40:38 22:40:38.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer 
clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Finished authentication with no session expiration and no session re-authentication 22:40:38 22:40:38.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Successfully authenticated with localhost/127.0.0.1 22:40:38 22:40:38.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating API versions fetch from node 1. 22:40:38 22:40:38.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 22:40:38 22:40:38.291 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, 
maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 22:40:38 22:40:38.292 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 
[usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 22:40:38 22:40:38.292 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0
.0.1:43439-127.0.0.1:38258-5","totalTimeMs":4.754,"requestQueueTimeMs":0.158,"localTimeMs":0.502,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":3.894,"sendTimeMs":0.198,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:38 22:40:38.294 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] ProducerId of partition my-test-topic-0 set to 0 with epoch 0. Reinitialize sequence at beginning. 22:40:38 22:40:38.294 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.producer.internals.RecordAccumulator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Assigned producerId 0 and producerEpoch 0 to batch with base sequence 0 being sent to partition my-test-topic-0 22:40:38 22:40:38.297 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=4) and timeout 30000 to node 1: {acks=-1,timeout=30000,partitionSizes=[my-test-topic-0=106]} 22:40:38 22:40:38.326 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1731192017997, current time: 1731192038326,unflushed: 3 22:40:38 22:40:38.329 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=3 segment=[0:106]) 22:40:38 22:40:38.330 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 27 ms 22:40:38 22:40:38.341 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Received PRODUCE response from node 1 for request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=4): ProduceResponseData(responses=[TopicProduceResponse(name='my-test-topic', partitionResponses=[PartitionProduceResponse(index=0, errorCode=0, baseOffset=0, logAppendTimeMs=-1, logStartOffset=0, recordErrors=[], errorMessage=null)])], throttleTimeMs=0) 22:40:38 22:40:38.342 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":0,"requestApiVersion":9,"correlationId":4,"clientId":"mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb","requestApiKeyName":"PRODUCE"},"request":{"transactionalId":null,"acks":-1,"timeoutMs":30000,"topicData":[{"name":"my-test-topic","partitionData":[{"index":0,"recordsSizeInBytes":106}]}]},"response":{"responses":[{"name":"my-test-topic","partitionResponses":[{"index":0,"errorCode":0,"baseOffset":0,"logAppendTimeMs":-1,"logStartOffset":0,"recordErrors":[],"errorMessage":null}]}],"throttleTimeMs":0},"connection":"127.0.0.1:43439-127.0.0.1:38258-5","totalTimeMs":43.821,"requestQueueTimeMs":2.704,"localTimeMs":40.552,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.199,"sendTimeMs":0.364,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:38 22:40:38.346 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] ProducerId: 0; Set last ack'd sequence number for topic-partition my-test-topic-0 to 2 22:40:38 22:40:38.347 [data-plane-kafka-request-handler-0] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1356245021 returning 1 partition(s) 22:40:38 22:40:38.351 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicPartitionOperationKey(my-test-topic,0) unblocked 1 Fetch operations 22:40:38 22:40:38.355 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=68): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1356245021, responses=[FetchableTopicResponse(topic='', topicId=PRLD570ERdK36hsbCawlJA, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109]))])]) 22:40:38 22:40:38.355 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":68,"clientId":"mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1356245021,"sessionEpoch":30,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1356245021,"responses":[{"topicId":"PRLD570ERdK36hsbCawlJA","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":3,"lastStableOffset":3,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":106}]}]},"connection":"127.0.0.1:43439-127.0.0.1:47582-3","totalTimeMs":455.26,"requestQueueTimeMs":0.2,"localTimeMs":1.413,"remoteTimeMs":450.3,"throttleTimeMs":0,"responseQueueTimeMs":0.073,"sendTimeMs":3.272,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 22:40:38 22:40:38.355 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1356245021 with 1 response partition(s) 22:40:38 22:40:38.355 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109])) 22:40:38 22:40:38.357 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=3, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[localhost:43439 (id: 1 rack: null)], epoch=0}} to node localhost:43439 (id: 1 rack: null) 22:40:38 22:40:38.357 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Built incremental fetch (sessionId=1356245021, epoch=31) for node 1. 
Added 0 partition(s), altered 1 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 22:40:38 22:40:38.357 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(my-test-topic-0), toForget=(), toReplace=(), implied=(), canUseTopicIds=True) to broker localhost:43439 (id: 1 rack: null) 22:40:38 22:40:38.357 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=69) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=31, topics=[FetchTopic(topic='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 22:40:38 22:40:38.358 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1356245021, epoch 32: added 0 partition(s), updated 1 partition(s), removed 0 partition(s) 22:40:38 22:40:38.375 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 22:40:38 22:40:38.375 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Starting controlled shutdown 22:40:38 22:40:38.377 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:38 22:40:38.377 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:38 22:40:38.377 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.378 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.378 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:38264 on /127.0.0.1:43439 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 22:40:38 22:40:38.378 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 22:40:38 22:40:38.378 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 22:40:38 22:40:38.378 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Completed connection to node 1. Ready. 
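
[Editor's note, not part of the captured console output: the READ_UNCOMMITTED fetches and incremental fetch session (sessionId=1356245021) traced above are generated by an ordinary consumer poll loop on the test side, running against the same broker that is now beginning its controlled shutdown. A minimal sketch assuming the same localhost broker, topic, and the mso-group group id seen in the log; SASL properties are again omitted.]

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class FetchLoopSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439"); // embedded test broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");                // group id seen in the log
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            // SASL settings omitted; see the PLAIN configuration sketch earlier.

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // Each poll() reuses the broker-side incremental fetch session seen in the log.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
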
22:40:38 22:40:38.379 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:38264 22:40:38 22:40:38.379 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 22:40:38 22:40:38.379 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 22:40:38 22:40:38.380 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 22:40:38 22:40:38.383 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 22:40:38 22:40:38.384 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 22:40:38 22:40:38.386 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 22:40:38 22:40:38.386 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 22:40:38 22:40:38.386 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INITIAL 22:40:38 22:40:38.387 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 22:40:38 22:40:38.387 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INTERMEDIATE 22:40:38 22:40:38.387 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 22:40:38 22:40:38.387 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 22:40:38 22:40:38.387 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 22:40:38 22:40:38.387 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to COMPLETE 22:40:38 22:40:38.387 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Finished authentication with no session expiration and no session re-authentication 22:40:38 22:40:38.387 [main] 
DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with localhost/127.0.0.1 22:40:38 22:40:38.388 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Sending CONTROLLED_SHUTDOWN request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0) and timeout 30000 to node 1: ControlledShutdownRequestData(brokerId=1, brokerEpoch=25) 22:40:38 22:40:38.392 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Shutting down broker 1 22:40:38 22:40:38.392 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] All shutting down brokers: 1 22:40:38 22:40:38.393 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Live brokers: 22:40:38 22:40:38.396 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 22:40:38 22:40:38.400 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Received CONTROLLED_SHUTDOWN response from node 1 for request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0): ControlledShutdownResponseData(errorCode=0, remainingPartitions=[]) 22:40:38 22:40:38.401 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":7,"requestApiVersion":3,"correlationId":0,"clientId":"1","requestApiKeyName":"CONTROLLED_SHUTDOWN"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"errorCode":0,"remainingPartitions":[]},"connection":"127.0.0.1:43439-127.0.0.1:38264-5","totalTimeMs":11.902,"requestQueueTimeMs":1.318,"localTimeMs":1.512,"remoteTimeMs":8.783,"throttleTimeMs":0,"responseQueueTimeMs":0.078,"sendTimeMs":0.209,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 22:40:38 22:40:38.401 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Controlled shutdown request returned successfully after 12ms 22:40:38 22:40:38.401 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:43439-127.0.0.1:38264-5) disconnected 22:40:38 java.io.EOFException: null 22:40:38 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:38 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at kafka.network.Processor.poll(SocketServer.scala:1055) 22:40:38 at kafka.network.Processor.run(SocketServer.scala:959) 22:40:38 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:38 22:40:38.403 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down 22:40:38 22:40:38.404 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - 
[/config/changes-event-process-thread]: Stopped 22:40:38 22:40:38.405 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown completed 22:40:38 22:40:38.406 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopping socket server request processors 22:40:38 22:40:38.407 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-43439] DEBUG kafka.network.DataPlaneAcceptor - Closing server socket, selector, and any throttled sockets. 22:40:38 22:40:38.408 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector - processor 0 22:40:38 22:40:38.408 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector - processor 1 22:40:38 22:40:38.409 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:43439-127.0.0.1:38250-4 22:40:38 22:40:38.410 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:43439-127.0.0.1:47566-2 22:40:38 22:40:38.410 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:38 java.io.EOFException: null 22:40:38 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:38 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:38 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:38 22:40:38.410 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:43439-127.0.0.1:47524-0 22:40:38 22:40:38.410 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node -1 disconnected. 
22:40:38 22:40:38.410 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:43439-127.0.0.1:38252-4 22:40:38 22:40:38.410 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:43439-127.0.0.1:47582-3 22:40:38 22:40:38.410 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:43439-127.0.0.1:38258-5 22:40:38 22:40:38.410 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:43439-127.0.0.1:47584-3 22:40:38 22:40:38.412 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:43439 (id: 1 rack: null) 22:40:38 22:40:38.412 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb, correlationId=5) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:38 22:40:38.412 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:38 java.io.EOFException: null 22:40:38 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:38 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:38 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:38 22:40:38.412 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 
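[editorial sketch, not part of the captured log] The entries above show the test's producer (client-id prefix mso-123456-producer-…) losing its connection to the embedded broker on localhost:43439 and immediately retrying a METADATA request for my-test-topic over SASL_PLAINTEXT. The actual test code is not part of this log; the following is only a minimal sketch of a producer configured the same way. The port, client-id prefix and topic come from the log; the credentials and serializers are assumptions.

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class TestProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and client-id prefix as they appear in the log; everything else is assumed.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // SASL_PLAINTEXT with the PLAIN mechanism, matching the listener in the log; credentials are placeholders.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The callback surfaces the retriable disconnect errors that otherwise only appear in DEBUG logs.
            producer.send(new ProducerRecord<>("my-test-topic", "key", "value"), (metadata, exception) -> {
                if (exception != null) {
                    System.err.println("send failed: " + exception);
                } else {
                    System.out.println("wrote to " + metadata.topic() + "-" + metadata.partition()
                            + " @ offset " + metadata.offset());
                }
            });
            producer.flush();
        }
    }
}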
22:40:38 22:40:38.412 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Cancelled in-flight METADATA request with correlation id 5 due to node 1 being disconnected (elapsed time since creation: 1ms, elapsed time since send: 1ms, request timeout: 30000ms): MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 22:40:38 22:40:38.413 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:38 java.io.EOFException: null 22:40:38 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:38 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74) 22:40:38 at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368) 22:40:38 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) 22:40:38 22:40:38.413 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 disconnected. 22:40:38 22:40:38.416 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopped socket server request processors 22:40:38 22:40:38.418 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shutting down 22:40:38 22:40:38.418 [data-plane-kafka-request-handler-1] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 received shut down command 22:40:38 22:40:38.418 [data-plane-kafka-request-handler-0] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 received shut down command 22:40:38 22:40:38.421 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shut down completely 22:40:38 22:40:38.422 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 22:40:38 22:40:38.425 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutting down 22:40:38 22:40:38.426 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutdown completed 22:40:38 22:40:38.426 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Stopped 22:40:38 22:40:38.428 [main] INFO kafka.server.KafkaApis - [KafkaApi-1] Shutdown complete. 
22:40:38 22:40:38.429 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutting down 22:40:38 22:40:38.429 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Stopped 22:40:38 22:40:38.429 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutdown completed 22:40:38 22:40:38.432 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutting down. 22:40:38 22:40:38.433 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 22:40:38 22:40:38.434 [main] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Shutdown complete 22:40:38 22:40:38.434 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutting down 22:40:38 22:40:38.435 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Stopped 22:40:38 22:40:38.435 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutdown completed 22:40:38 22:40:38.437 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutdown complete. 22:40:38 22:40:38.438 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutting down. 22:40:38 22:40:38.438 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 22:40:38 22:40:38.439 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down 22:40:38 22:40:38.440 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Stopped 22:40:38 22:40:38.440 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutdown completed 22:40:38 22:40:38.441 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutting down 22:40:38 22:40:38.441 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutdown completed 22:40:38 22:40:38.441 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Stopped 22:40:38 22:40:38.443 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutdown complete. 
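[editorial sketch, not part of the captured log] The components being stopped here (coordinators, purgatories and, further down, fifty __consumer_offsets partitions under a /tmp/kafka-unit… directory) belong to a single-node test broker with a SASL_PLAINTEXT listener. The harness itself is not shown in this log, so the Properties below are only a sketch of the kind of configuration such an embedded broker typically gets: the keys are standard Kafka broker settings, and apart from the broker id, listener type and port visible in the log, every value is a placeholder.

import java.nio.file.Files;
import java.util.Properties;

public class TestBrokerConfigSketch {
    public static Properties brokerProps() throws Exception {
        Properties props = new Properties();
        props.put("broker.id", "1");                                   // matches "KafkaServer id=1" in the log
        props.put("listeners", "SASL_PLAINTEXT://localhost:43439");    // listener and port seen in the log
        props.put("sasl.enabled.mechanisms", "PLAIN");
        props.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
        props.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
        props.put("zookeeper.connect", "localhost:2181");              // placeholder; the log shows a ZK_BROKER listener type
        props.put("log.dirs", Files.createTempDirectory("kafka-unit").toString()); // temp log dir, like /tmp/kafka-unit…
        props.put("offsets.topic.replication.factor", "1");            // single-node test broker
        props.put("transaction.state.log.replication.factor", "1");
        props.put("controlled.shutdown.enable", "true");               // enables the CONTROLLED_SHUTDOWN request seen above
        return props;
    }
}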
22:40:38 22:40:38.444 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shutting down 22:40:38 22:40:38.444 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down 22:40:38 22:40:38.445 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped 22:40:38 22:40:38.445 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed 22:40:38 22:40:38.446 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutting down 22:40:38 22:40:38.447 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutdown completed 22:40:38 22:40:38.448 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutting down 22:40:38 22:40:38.448 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutdown completed 22:40:38 22:40:38.448 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutting down 22:40:38 22:40:38.449 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Stopped 22:40:38 22:40:38.449 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutdown completed 22:40:38 22:40:38.450 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutting down 22:40:38 22:40:38.451 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Stopped 22:40:38 22:40:38.452 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutdown completed 22:40:38 22:40:38.452 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutting down 22:40:38 22:40:38.456 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Stopped 22:40:38 22:40:38.456 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutdown completed 22:40:38 22:40:38.456 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutting down 22:40:38 22:40:38.457 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Stopped 22:40:38 22:40:38.457 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutdown completed 22:40:38 22:40:38.466 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shut down completely 22:40:38 22:40:38.466 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutting down 22:40:38 22:40:38.467 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Stopped 22:40:38 22:40:38.467 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutdown completed 22:40:38 22:40:38.468 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=2147483646) disconnected 22:40:38 java.io.EOFException: null 22:40:38 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:38 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:38 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:38 22:40:38.469 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:38 java.io.EOFException: null 22:40:38 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:38 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:38 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:38 22:40:38.469 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:38 java.io.EOFException: null 22:40:38 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 22:40:38 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:38 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:38 22:40:38.469 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:38 22:40:38.469 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for alterPartition shutdown 22:40:38 22:40:38.469 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Cancelled in-flight FETCH request with correlation id 69 due to node 1 being disconnected (elapsed time since creation: 112ms, elapsed time since send: 112ms, request timeout: 30000ms): FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1356245021, sessionEpoch=31, topics=[FetchTopic(topic='my-test-topic', topicId=PRLD570ERdK36hsbCawlJA, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 22:40:38 22:40:38.469 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node -1 disconnected. 22:40:38 22:40:38.469 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutting down 22:40:38 22:40:38.469 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 2147483646 disconnected. 
22:40:38 22:40:38.470 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Stopped 22:40:38 22:40:38.470 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutdown completed 22:40:38 22:40:38.470 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, correlationId=69) due to node 1 being disconnected 22:40:38 22:40:38.470 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for forwarding shutdown 22:40:38 22:40:38.471 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Error sending fetch request (sessionId=1356245021, epoch=31) to node 1: 22:40:38 org.apache.kafka.common.errors.DisconnectException: null 22:40:38 22:40:38.471 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Group coordinator localhost:43439 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 22:40:38 22:40:38.471 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:38 22:40:38.471 [main] INFO kafka.log.LogManager - Shutting down. 22:40:38 22:40:38.472 [main] INFO kafka.log.LogCleaner - Shutting down the log cleaner. 
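[editorial sketch, not part of the captured log] On the client side, the consumer in group mso-group reacts to the same broker shutdown: its heartbeat thread sees EOF on the coordinator and data connections, the in-flight FETCH is cancelled, and the group coordinator is marked unavailable pending rediscovery. None of that needs special handling in application code; a plain poll loop like the sketch below (group id and topic taken from the log, everything else assumed) keeps polling while the client retries FindCoordinator internally once a broker is reachable again.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TestConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");        // group id from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            for (int i = 0; i < 10; i++) {
                // Disconnects and coordinator rediscovery are handled inside poll();
                // retriable errors are not thrown to the caller.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}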
22:40:38 22:40:38.473 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down 22:40:38 22:40:38.473 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped 22:40:38 22:40:38.473 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed 22:40:38 22:40:38.476 [main] DEBUG kafka.log.LogManager - Flushing and closing logs at /tmp/kafka-unit3067120233997490679 22:40:38 22:40:38.479 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018931, current time: 1731192038479,unflushed: 0 22:40:38 22:40:38.480 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.481 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-29/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.484 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-29/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.485 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019189, current time: 1731192038485,unflushed: 0 22:40:38 22:40:38.487 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.487 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-43/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.487 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-43/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.487 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019084, current time: 1731192038487,unflushed: 0 22:40:38 22:40:38.489 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.489 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-0/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.489 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-0/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.490 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, 
dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019180, current time: 1731192038490,unflushed: 0 22:40:38 22:40:38.491 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.491 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-6/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.491 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-6/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.492 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019095, current time: 1731192038492,unflushed: 0 22:40:38 22:40:38.493 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.493 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-35/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.493 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-35/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.494 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019070, current time: 1731192038494,unflushed: 0 22:40:38 22:40:38.496 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.496 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-30/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.496 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-30/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.496 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019196, current time: 1731192038496,unflushed: 0 22:40:38 22:40:38.498 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.498 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-13/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.498 
[log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-13/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.499 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018653, current time: 1731192038499,unflushed: 0 22:40:38 22:40:38.500 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.500 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-26/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.500 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-26/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.501 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019165, current time: 1731192038501,unflushed: 0 22:40:38 22:40:38.503 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.503 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-21/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.503 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-21/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.503 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018606, current time: 1731192038503,unflushed: 0 22:40:38 22:40:38.505 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.505 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-19/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.505 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-19/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.505 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018809, current time: 1731192038505,unflushed: 0 22:40:38 22:40:38.506 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.507 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-25/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.507 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-25/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.507 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018589, current time: 1731192038507,unflushed: 0 22:40:38 22:40:38.509 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.509 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-33/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.509 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-33/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.510 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018564, current time: 1731192038510,unflushed: 0 22:40:38 22:40:38.511 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.512 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-41/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.512 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-41/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.512 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 4 (inclusive)with recovery point 4, last flushed: 1731192037547, current time: 1731192038512,unflushed: 0 22:40:38 22:40:38.512 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.513 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:38 22:40:38.513 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved 
host localhost as 127.0.0.1 22:40:38 22:40:38.513 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:38 22:40:38.513 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.513 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.514 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:38 java.net.ConnectException: Connection refused 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:38 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:38 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:38 22:40:38.514 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:38 22:40:38.515 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:38 22:40:38.516 [log-closing-/tmp/kafka-unit3067120233997490679] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=__consumer_offsets-37] Wrote producer snapshot at offset 4 with 0 producer ids in 3 ms. 
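[editorial sketch, not part of the captured log] The ConnectException / "Broker may not be available" warnings here come from the producer's Sender thread reconnecting to a port the broker has already released. In a test this is harmless noise, and it can usually be avoided by closing the clients before stopping the broker. The sketch below only illustrates that ordering; stopEmbeddedBroker is a hypothetical stand-in for whatever the test actually uses, while the producer and consumer calls are standard Kafka client API.

import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.producer.Producer;

public class OrderlyShutdownSketch {
    public static void shutDownInOrder(Producer<String, String> producer,
                                       Consumer<String, String> consumer,
                                       Runnable stopEmbeddedBroker) {
        producer.flush();                        // drain buffered records while the broker is still up
        producer.close(Duration.ofSeconds(5));   // stops the Sender thread, so no reconnect attempts remain
        consumer.close(Duration.ofSeconds(5));   // commits, leaves the group and closes coordinator connections
        stopEmbeddedBroker.run();                // only then trigger the broker's controlled shutdown
    }
}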
22:40:38 22:40:38.517 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-37/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.517 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-37/00000000000000000000.timeindex to 12, position is 12 and limit is 12 22:40:38 22:40:38.518 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018999, current time: 1731192038518,unflushed: 0 22:40:38 22:40:38.519 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.520 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-8/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.520 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-8/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.520 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018706, current time: 1731192038520,unflushed: 0 22:40:38 22:40:38.521 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.521 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-24/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.521 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-24/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.522 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018666, current time: 1731192038522,unflushed: 0 22:40:38 22:40:38.523 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.523 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-49/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.524 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-49/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.524 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, 
dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 3 (inclusive)with recovery point 3, last flushed: 1731192038328, current time: 1731192038524,unflushed: 0 22:40:38 22:40:38.525 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.529 [log-closing-/tmp/kafka-unit3067120233997490679] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=my-test-topic-0] Wrote producer snapshot at offset 3 with 1 producer ids in 4 ms. 22:40:38 22:40:38.529 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/my-test-topic-0/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.529 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/my-test-topic-0/00000000000000000000.timeindex to 12, position is 12 and limit is 12 22:40:38 22:40:38.529 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018536, current time: 1731192038529,unflushed: 0 22:40:38 22:40:38.531 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.531 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-3/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.531 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-3/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.531 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018819, current time: 1731192038531,unflushed: 0 22:40:38 22:40:38.533 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.533 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-40/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.533 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-40/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.534 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019125, current time: 1731192038534,unflushed: 0 22:40:38 22:40:38.535 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.535 
[log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-27/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.535 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-27/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.535 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018857, current time: 1731192038535,unflushed: 0 22:40:38 22:40:38.537 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.537 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-17/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.537 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-17/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.537 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018865, current time: 1731192038537,unflushed: 0 22:40:38 22:40:38.539 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.539 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-32/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.539 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-32/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.539 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018682, current time: 1731192038539,unflushed: 0 22:40:38 22:40:38.540 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.541 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-39/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.541 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-39/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.541 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, 
dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018796, current time: 1731192038541,unflushed: 0 22:40:38 22:40:38.542 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.542 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-2/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.543 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-2/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.543 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018951, current time: 1731192038543,unflushed: 0 22:40:38 22:40:38.544 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.544 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-44/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.544 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-44/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.545 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019156, current time: 1731192038545,unflushed: 0 22:40:38 22:40:38.546 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.546 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-12/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.546 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-12/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.546 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019174, current time: 1731192038546,unflushed: 0 22:40:38 22:40:38.548 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.548 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-36/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.548 
[log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-36/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.548 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019014, current time: 1731192038548,unflushed: 0 22:40:38 22:40:38.555 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.555 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-45/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.556 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-45/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.557 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018783, current time: 1731192038557,unflushed: 0 22:40:38 22:40:38.559 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.559 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-16/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.559 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-16/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.560 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018576, current time: 1731192038560,unflushed: 0 22:40:38 22:40:38.563 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.563 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-10/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.563 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-10/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.564 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018643, current time: 1731192038564,unflushed: 0 22:40:38 22:40:38.566 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.566 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-11/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.566 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-11/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.567 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019115, current time: 1731192038567,unflushed: 0 22:40:38 22:40:38.571 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:38 22:40:38.572 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:38 22:40:38.572 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:38 22:40:38.572 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.572 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.573 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.573 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-20/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.573 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-20/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.573 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:38 java.net.ConnectException: Connection refused 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:38 at 
org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:38 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:38 22:40:38.573 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:38 22:40:38.574 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018846, current time: 1731192038573,unflushed: 0 22:40:38 22:40:38.574 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:38 22:40:38.574 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:38 22:40:38.575 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.575 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-47/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.575 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-47/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.575 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018551, current time: 1731192038575,unflushed: 0 22:40:38 22:40:38.580 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.580 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-18/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.580 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit3067120233997490679/__consumer_offsets-18/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.581 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018893, current time: 1731192038581,unflushed: 0 22:40:38 22:40:38.582 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.582 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-7/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.582 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-7/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.583 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018598, current time: 1731192038583,unflushed: 0 22:40:38 22:40:38.584 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.584 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-48/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.584 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-48/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.584 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018907, current time: 1731192038584,unflushed: 0 22:40:38 22:40:38.586 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.586 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-22/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.586 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-22/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.586 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018755, current time: 1731192038586,unflushed: 0 22:40:38 22:40:38.587 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit3067120233997490679] 
Closing log 22:40:38 22:40:38.588 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-46/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.588 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-46/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.588 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018970, current time: 1731192038588,unflushed: 0 22:40:38 22:40:38.589 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.590 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-23/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.590 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-23/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.590 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019147, current time: 1731192038590,unflushed: 0 22:40:38 22:40:38.591 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.591 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-42/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.591 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-42/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.592 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019203, current time: 1731192038592,unflushed: 0 22:40:38 22:40:38.593 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.593 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-28/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.593 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-28/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.594 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, 
dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018634, current time: 1731192038594,unflushed: 0 22:40:38 22:40:38.595 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.595 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-4/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.595 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-4/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.596 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018720, current time: 1731192038596,unflushed: 0 22:40:38 22:40:38.597 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.597 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-31/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.597 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-31/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.597 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019107, current time: 1731192038597,unflushed: 0 22:40:38 22:40:38.599 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.599 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-5/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.599 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-5/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.599 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018770, current time: 1731192038599,unflushed: 0 22:40:38 22:40:38.600 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.601 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-1/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.601 
[log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-1/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.601 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192019061, current time: 1731192038601,unflushed: 0 22:40:38 22:40:38.602 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.602 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-15/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.602 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-15/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.603 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018981, current time: 1731192038603,unflushed: 0 22:40:38 22:40:38.604 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.604 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-38/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.604 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-38/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.604 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018622, current time: 1731192038604,unflushed: 0 22:40:38 22:40:38.605 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.606 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-34/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.606 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-34/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.606 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018696, current time: 1731192038606,unflushed: 0 22:40:38 22:40:38.607 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.607 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-9/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.607 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-9/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.608 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit3067120233997490679] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1731192018962, current time: 1731192038608,unflushed: 0 22:40:38 22:40:38.609 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit3067120233997490679] Closing log 22:40:38 22:40:38.609 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-14/00000000000000000000.index to 0, position is 0 and limit is 0 22:40:38 22:40:38.609 [log-closing-/tmp/kafka-unit3067120233997490679] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit3067120233997490679/__consumer_offsets-14/00000000000000000000.timeindex to 0, position is 0 and limit is 0 22:40:38 22:40:38.610 [main] DEBUG kafka.log.LogManager - Updating recovery points at /tmp/kafka-unit3067120233997490679 22:40:38 22:40:38.614 [main] DEBUG kafka.log.LogManager - Updating log start offsets at /tmp/kafka-unit3067120233997490679 22:40:38 22:40:38.614 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:38 22:40:38.614 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:38 22:40:38.614 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:38 22:40:38.615 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.615 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.616 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer 
clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:38 java.net.ConnectException: Connection refused 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:38 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:38 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:38 22:40:38.616 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:38 22:40:38.616 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:38 22:40:38.620 [main] DEBUG kafka.log.LogManager - Writing clean shutdown marker at /tmp/kafka-unit3067120233997490679 22:40:38 22:40:38.622 [main] INFO kafka.log.LogManager - Shutdown complete. 22:40:38 22:40:38.622 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutting down 22:40:38 22:40:38.622 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Stopped 22:40:38 22:40:38.622 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutdown completed 22:40:38 22:40:38.623 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Resigning 22:40:38 22:40:38.623 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Unregister BrokerModifications handler for Set(1) 22:40:38 22:40:38.625 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
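The repeated "Set SASL client state to SEND_APIVERSIONS_REQUEST" and "Creating SaslClient: ... mechs=[PLAIN]" entries above are the client-side trace of Kafka clients configured for SASL_PLAINTEXT with the PLAIN mechanism probing a broker (localhost:43439) that is already shutting down. A minimal sketch of the kind of client configuration that produces this trace; the broker address comes from the log, while the credentials, serializers and ids below are placeholder assumptions, not values from this build:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class SaslPlainClientSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:43439");           // port seen in the log
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");   // placeholder credentials
        props.put("client.id", "mso-123456-producer");                // matches the clientId prefix above
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Constructing the producer starts the network thread that logs the SASL state
        // transitions and the metadata/connection retries seen in the entries above.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) would follow in a real client
        }
    }
}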
22:40:38 22:40:38.627 [main] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Stopped partition state machine 22:40:38 22:40:38.628 [main] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Stopped replica state machine 22:40:38 22:40:38.629 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutting down 22:40:38 22:40:38.629 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Stopped 22:40:38 22:40:38.629 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutdown completed 22:40:38 22:40:38.634 [main] INFO kafka.controller.KafkaController - [Controller id=1] Resigned 22:40:38 22:40:38.634 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutting down 22:40:38 22:40:38.634 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutdown completed 22:40:38 22:40:38.634 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Stopped 22:40:38 22:40:38.635 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing. 22:40:38 22:40:38.635 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 22:40:38 22:40:38.635 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x1000001ca2b0000 22:40:38 22:40:38.635 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x1000001ca2b0000 22:40:38 22:40:38.637 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 280621823082 22:40:38 22:40:38.637 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 281759241650 22:40:38 22:40:38.637 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 277909266472 22:40:38 22:40:38.637 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 280780471504 22:40:38 22:40:38.638 [ProcessThread(sid:0 cport:36225):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 279145264885 22:40:38 22:40:38.639 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000001ca2b0000 type:closeSession cxid:0x102 zxid:0x8d txntype:-11 reqpath:n/a 22:40:38 22:40:38.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Removing session 0x1000001ca2b0000 22:40:38 22:40:38.640 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 22:40:38 22:40:38.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /controller for session 0x1000001ca2b0000 22:40:38 22:40:38.640 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 22:40:38 22:40:38.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /brokers/ids/1 for session 0x1000001ca2b0000 22:40:38 22:40:38.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8d, Digest in log and actual tree: 279145264885 
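The "Deleting ephemeral node /controller" and "Deleting ephemeral node /brokers/ids/1" entries above show ZooKeeper removing the broker's ephemeral registration znodes as a side effect of closing session 0x1000001ca2b0000. A small illustration of that mechanism with the plain ZooKeeper client API; this is not the broker's actual registration code, and the port is reused from the log only for context:

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralNodeDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // 127.0.0.1:36225 is the embedded test ZooKeeper seen in the log; any ZK would do.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:36225", 30_000, event -> {
            if (event.getState() == KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        zk.create("/demo-ephemeral", "{}".getBytes(StandardCharsets.UTF_8),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        // Closing the session makes the server delete the ephemeral node and fire
        // NodeDeleted watches -- the same sequence recorded above for /controller
        // and /brokers/ids/1 when the broker's session closed.
        zk.close();
    }
}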
22:40:38 22:40:38.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000001ca2b0000 type:closeSession cxid:0x102 zxid:0x8d txntype:-11 reqpath:n/a 22:40:38 22:40:38.640 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001ca2b0000 22:40:38 22:40:38.640 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/controller for session id 0x1000001ca2b0000 22:40:38 22:40:38.640 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001ca2b0000 22:40:38 22:40:38.640 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 for session id 0x1000001ca2b0000 22:40:38 22:40:38.640 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/controller 22:40:38 22:40:38.641 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000001ca2b0000 22:40:38 22:40:38.642 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for session id 0x1000001ca2b0000 22:40:38 22:40:38.642 [main-SendThread(127.0.0.1:36225)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000001ca2b0000, packet:: clientPath:null serverPath:null finished:false header:: 258,-11 replyHeader:: 258,141,0 request:: null response:: null 22:40:38 22:40:38.641 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:40122 which had sessionid 0x1000001ca2b0000 22:40:38 22:40:38.642 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x1000001ca2b0000 22:40:38 22:40:38.643 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 22:40:38 22:40:38.643 [main-SendThread(127.0.0.1:36225)] WARN org.apache.zookeeper.ClientCnxn - An exception was thrown while closing send thread for session 0x1000001ca2b0000. 
22:40:38 org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x1000001ca2b0000, likely server has closed socket 22:40:38 at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) 22:40:38 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) 22:40:38 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290) 22:40:38 22:40:38.643 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids 22:40:38 22:40:38.674 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:38 22:40:38.674 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:38 22:40:38.717 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:38 22:40:38.743 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:Closed type:None path:null 22:40:38 22:40:38.745 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000001ca2b0000 closed 22:40:38 22:40:38.745 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000001ca2b0000 22:40:38 22:40:38.747 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed. 
22:40:38 22:40:38.747 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutting down 22:40:38 22:40:38.751 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Stopped 22:40:38 22:40:38.751 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutdown completed 22:40:38 22:40:38.751 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutting down 22:40:38 22:40:38.751 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutdown completed 22:40:38 22:40:38.751 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Stopped 22:40:38 22:40:38.751 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutting down 22:40:38 22:40:38.752 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Stopped 22:40:38 22:40:38.752 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutdown completed 22:40:38 22:40:38.752 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutting down 22:40:38 22:40:38.752 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutdown completed 22:40:38 22:40:38.752 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Stopped 22:40:38 22:40:38.753 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutting down socket server 22:40:38 22:40:38.782 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:38 22:40:38.782 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:38 22:40:38.783 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:38 22:40:38.783 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:38 22:40:38.783 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.783 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.784 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:38 java.net.ConnectException: Connection refused 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:38 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:38 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:38 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:38 22:40:38.784 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:38 22:40:38.784 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
22:40:38 22:40:38.785 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:38 22:40:38.813 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutdown completed 22:40:38 22:40:38.813 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 22:40:38 22:40:38.813 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 22:40:38 22:40:38.813 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 22:40:38 22:40:38.815 [main] INFO kafka.server.BrokerTopicStats - Broker and topic stats closed 22:40:38 22:40:38.815 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.server for 1 unregistered 22:40:38 22:40:38.815 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shut down completed 22:40:38 22:40:38.815 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server 22:40:38 22:40:38.817 [ConnnectionExpirer] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - ConnnectionExpirerThread interrupted 22:40:38 22:40:38.819 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:36225] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - accept thread exitted run method 22:40:38 22:40:38.820 [NIOServerCxnFactory.SelectorThread-0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method 22:40:38 22:40:38.821 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - shutting down 22:40:38 22:40:38.821 [main] INFO org.apache.zookeeper.server.RequestThrottler - Shutting down 22:40:38 22:40:38.821 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - Draining request throttler queue 22:40:38 22:40:38.821 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - RequestThrottler shutdown. Dropped 0 requests 22:40:38 22:40:38.822 [main] INFO org.apache.zookeeper.server.SessionTrackerImpl - Shutting down 22:40:38 22:40:38.822 [main] INFO org.apache.zookeeper.server.PrepRequestProcessor - Shutting down 22:40:38 22:40:38.822 [main] INFO org.apache.zookeeper.server.SyncRequestProcessor - Shutting down 22:40:38 22:40:38.822 [SyncThread:0] INFO org.apache.zookeeper.server.SyncRequestProcessor - SyncRequestProcessor exited! 22:40:38 22:40:38.822 [ProcessThread(sid:0 cport:36225):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor exited loop! 
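The "Shutting down zookeeper test server" pass and the RequestThrottler / SyncRequestProcessor / PrepRequestProcessor shutdown entries above come from com.salesforce.kafka.test.ZookeeperTestServer, which wraps Curator's TestingServer (visible in the stack trace just below). A rough sketch of that embedded-ZooKeeper lifecycle, assuming a hypothetical data directory rather than this build's temporary paths:

import java.io.File;
import org.apache.curator.test.TestingServer;

public class ZkTestServerLifecycleSketch {
    public static void main(String[] args) throws Exception {
        // TestingServer runs an in-process ZooKeeper; the port matches the log, the directory is a placeholder.
        TestingServer zkServer = new TestingServer(36225, new File("/tmp/zk-test-data"));
        System.out.println("connect string: " + zkServer.getConnectString());
        // Stopping it drains the request processors, producing shutdown entries like the ones above.
        zkServer.stop();
        // close() releases the remaining test-server resources.
        zkServer.close();
    }
}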
22:40:38 22:40:38.822 [main] INFO org.apache.zookeeper.server.FinalRequestProcessor - shutdown of request processor complete 22:40:38 22:40:38.823 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input stream: /tmp/kafka-unit4159603097563790/version-2/log.1 22:40:38 22:40:38.823 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input archive: /tmp/kafka-unit4159603097563790/version-2/log.1 22:40:38 22:40:38.832 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:38 22:40:38.833 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:38 22:40:38.833 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:38 22:40:38.833 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:38 22:40:38.833 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:38 22:40:38.832 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - EOF exception 22:40:38 java.io.EOFException: Failed to read /tmp/kafka-unit4159603097563790/version-2/log.1 22:40:38 at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:771) 22:40:38 at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.(FileTxnLog.java:650) 22:40:38 at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:462) 22:40:38 at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:449) 22:40:38 at org.apache.zookeeper.server.persistence.FileTxnSnapLog.fastForwardFromEdits(FileTxnSnapLog.java:321) 22:40:38 at org.apache.zookeeper.server.ZKDatabase.fastForwardDataBase(ZKDatabase.java:300) 22:40:38 at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:848) 22:40:38 at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:796) 22:40:38 at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:922) 22:40:38 at org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:219) 22:40:38 at org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:144) 22:40:38 at org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110) 22:40:38 at org.apache.curator.test.TestingServer.stop(TestingServer.java:161) 22:40:38 at com.salesforce.kafka.test.ZookeeperTestServer.stop(ZookeeperTestServer.java:129) 22:40:38 at 
com.salesforce.kafka.test.KafkaTestCluster.stop(KafkaTestCluster.java:303) 22:40:38 at com.salesforce.kafka.test.KafkaTestCluster.close(KafkaTestCluster.java:312) 22:40:38 at org.onap.sdc.utils.SdcKafkaTest.after(SdcKafkaTest.java:65) 22:40:38 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 22:40:38 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 22:40:38 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 22:40:38 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 22:40:38 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 22:40:38 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 22:40:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 22:40:38 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 22:40:38 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126) 22:40:38 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116) 22:40:38 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 22:40:38 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 22:40:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 22:40:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 22:40:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 22:40:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 22:40:38 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 22:40:38 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 22:40:38 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$11(ClassBasedTestDescriptor.java:412) 22:40:38 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:38 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$12(ClassBasedTestDescriptor.java:410) 22:40:38 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:38 at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085) 22:40:38 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:410) 22:40:38 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:212) 22:40:38 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:78) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:149) 22:40:38 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:149) 22:40:38 at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:38 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:38 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:38 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:38 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:38 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:38 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:38 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:38 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 22:40:38 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 22:40:38 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 22:40:38 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 22:40:38 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 22:40:38 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 22:40:38 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 22:40:38 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 22:40:38 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 22:40:38 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 22:40:38 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 22:40:38 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 22:40:38 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 22:40:38 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 22:40:38 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 22:40:38 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 22:40:38 22:40:38.834 [Thread-2] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
ZooKeeper server is not running, so not proceeding to shutdown! 22:40:38 22:40:38.835 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 22:40:38 22:40:38.834 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:38 java.net.ConnectException: Connection refused 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:38 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:38 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:38 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:38 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:38 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:38 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:38 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:38 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:38 22:40:38.836 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server 22:40:38 22:40:38.836 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:38 22:40:38.836 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
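The teardown chain in the stack trace above (SdcKafkaTest.after -> KafkaTestCluster.close -> ZookeeperTestServer.stop) is the JUnit 5 @AfterAll hook of the test closing the embedded kafka-junit cluster; the EOFException from FileTxnLog is ZooKeeper replaying a truncated transaction log during shutdown and does not fail the test (the summary just below reports 0 failures and 0 errors). A hedged sketch of what a test with this teardown shape typically looks like; apart from the KafkaTestCluster API and the after() frame seen in the trace, everything here is a placeholder, not the actual org.onap.sdc.utils.SdcKafkaTest source:

import com.salesforce.kafka.test.KafkaTestCluster;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class EmbeddedKafkaTestSketch {

    private static KafkaTestCluster kafkaTestCluster;

    @BeforeAll
    static void before() throws Exception {
        kafkaTestCluster = new KafkaTestCluster(1);   // single broker, like nodeId=1 above
        kafkaTestCluster.start();
    }

    @Test
    void sendAndReceive() {
        String bootstrapServers = kafkaTestCluster.getKafkaConnectString();
        // ... produce and consume against the embedded broker ...
    }

    @AfterAll
    static void after() throws Exception {
        // Corresponds to the SdcKafkaTest.after(SdcKafkaTest.java:65) frame above:
        // closing the cluster stops the broker and then the ZooKeeper test server.
        kafkaTestCluster.close();
    }
}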
22:40:38 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.155 s - in org.onap.sdc.utils.SdcKafkaTest 22:40:38 [INFO] Running org.onap.sdc.utils.NotificationSenderTest 22:40:39 22:40:39.021 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.023 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:39 22:40:39.024 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:39 22:40:39.024 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:39 22:40:39.025 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:39 22:40:39.025 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:39 22:40:39.028 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:39 java.net.ConnectException: Connection refused 22:40:39 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:39 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:39 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:39 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:39 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:39 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:39 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:39 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:39 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:39 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:39 22:40:39.028 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 
22:40:39 22:40:39.028 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:39 22:40:39.028 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.233 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.233 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:39 22:40:39.236 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.278 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 22:40:39 22:40:39.279 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 22:40:39 22:40:39.279 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status 22:40:39 to topic null 22:40:39 22:40:39.285 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:39 22:40:39.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:39 22:40:39.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:39 22:40:39.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:39 22:40:39.286 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:39 22:40:39.288 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer 
clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:39 java.net.ConnectException: Connection refused 22:40:39 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:39 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:39 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:39 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:39 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:39 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:39 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:39 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:39 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:39 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:39 22:40:39.288 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:39 22:40:39.288 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:39 22:40:39.337 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:39 22:40:39.337 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.389 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.437 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:39 22:40:39.437 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:39 22:40:39.438 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:39 22:40:39.438 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to 
SEND_APIVERSIONS_REQUEST 22:40:39 22:40:39.438 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:39 22:40:39.439 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.439 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:39 java.net.ConnectException: Connection refused 22:40:39 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:39 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:39 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:39 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:39 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:39 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:39 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:39 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:39 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:39 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:39 22:40:39.439 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:39 22:40:39.440 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
22:40:39 22:40:39.440 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.489 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.540 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.540 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:39 22:40:39.540 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.590 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.640 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.641 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:39 22:40:39.641 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.691 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.741 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.741 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:39 22:40:39.742 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.783 [SessionTracker] INFO org.apache.zookeeper.server.SessionTrackerImpl - SessionTrackerImpl exited loop! 22:40:39 22:40:39.792 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.842 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:39 22:40:39.842 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.842 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.892 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.942 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:39 22:40:39.942 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:39 22:40:39.943 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:39 22:40:39.993 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.043 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.043 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.043 
[kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.093 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.143 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.143 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.144 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:40 22:40:40.144 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:40 22:40:40.144 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:40 22:40:40.144 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:40 22:40:40.145 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:40 22:40:40.145 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:40 java.net.ConnectException: Connection refused 22:40:40 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:40 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:40 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:40 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:40 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:40 at 
org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:40 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:40 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:40 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:40 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:40 22:40:40.146 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:40 22:40:40.146 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:40 22:40:40.244 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.244 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.246 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.293 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 22:40:40 22:40:40.297 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.293 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 22:40:40 22:40:40.297 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status 22:40:40 to topic null 22:40:40 22:40:40.344 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:40 22:40:40.344 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:40 22:40:40.344 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:40 22:40:40.345 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:40 
22:40:40.345 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:40 22:40:40.346 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:40 java.net.ConnectException: Connection refused 22:40:40 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:40 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:40 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:40 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:40 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:40 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:40 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:40 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:40 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:40 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:40 22:40:40.346 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:40 22:40:40.346 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
22:40:40 22:40:40.346 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.347 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.397 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.447 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.447 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.448 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.498 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.547 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.547 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.548 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.599 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.648 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.648 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.649 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.699 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.748 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.748 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.750 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.800 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.848 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.849 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.850 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.901 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:40 22:40:40.949 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:40 22:40:40.949 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:40 22:40:40.951 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.001 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.049 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:41 22:40:41.050 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:41 22:40:41.052 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.102 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.150 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:41 22:40:41.150 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:41 22:40:41.152 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.203 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.250 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:41 22:40:41.251 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:41 22:40:41.251 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:41 22:40:41.251 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:41 22:40:41.251 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:41 22:40:41.253 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:41 java.net.ConnectException: Connection refused 22:40:41 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:41 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:41 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:41 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:41 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:41 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:41 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:41 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:41 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:41 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:41 22:40:41.253 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:41 22:40:41.253 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
22:40:41 22:40:41.253 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:41 22:40:41.253 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:41 22:40:41.253 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:41 22:40:41.253 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:41 22:40:41.254 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:41 22:40:41.254 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:41 22:40:41.255 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:41 java.net.ConnectException: Connection refused 22:40:41 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:41 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:41 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:41 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:41 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:41 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:41 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:41 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:41 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:41 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:41 22:40:41.255 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 
22:40:41 22:40:41.255 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available.
22:40:41 22:40:41.298 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendDownloadStatus. Failed to send messages and close publisher.
22:40:41 org.apache.kafka.common.KafkaException: null
22:40:41 22:40:41.324 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus
22:40:41 22:40:41.325 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null
22:40:41 22:40:41.325 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status
22:40:41 to topic null
22:40:41 22:40:41.325 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus. Failed to send status
22:40:41 org.apache.kafka.common.KafkaException: null
22:40:41 at org.onap.sdc.utils.kafka.SdcKafkaProducer.send(SdcKafkaProducer.java:65)
22:40:41 at org.onap.sdc.utils.NotificationSender.send(NotificationSender.java:47)
22:40:41 at org.onap.sdc.utils.NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus(NotificationSenderTest.java:83)
22:40:41 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
22:40:41 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
22:40:41 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
22:40:41 at java.base/java.lang.reflect.Method.invoke(Method.java:566)
22:40:41 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
22:40:41 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
22:40:41 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
22:40:41 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
22:40:41 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
22:40:41 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
22:40:41 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
22:40:41 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
22:40:41 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
22:40:41 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
22:40:41 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
22:40:41 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
22:40:41 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
22:40:41 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
22:40:41 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
22:40:41 at
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:41 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 22:40:41 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 22:40:41 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 22:40:41 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:41 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:41 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:41 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:41 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:41 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:41 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:41 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:41 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:41 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:41 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:41 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:41 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:41 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:41 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
22:40:41 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
22:40:41 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
22:40:41 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
22:40:41 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
22:40:41 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
22:40:41 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
22:40:41 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
22:40:41 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
22:40:41 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
22:40:41 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
22:40:41 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
22:40:41 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127)
22:40:41 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
22:40:41 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
22:40:41 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
22:40:41 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
22:40:41 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.488 s - in org.onap.sdc.utils.NotificationSenderTest
22:40:41 [INFO] Running org.onap.sdc.utils.KafkaCommonConfigTest
22:40:41 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 s - in org.onap.sdc.utils.KafkaCommonConfigTest
22:40:41 [INFO] Running org.onap.sdc.utils.GeneralUtilsTest
22:40:41 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in org.onap.sdc.utils.GeneralUtilsTest
22:40:41 [INFO] Running org.onap.sdc.impl.NotificationConsumerTest
22:40:41 22:40:41.353 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available
22:40:41 22:40:41.354 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request
22:40:41 22:40:41.356 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available
22:40:41 22:40:41.526 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer
clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.525 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:41 22:40:41.527 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:41 22:40:41.714 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:41 22:40:41.715 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.716 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:41 22:40:41.766 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.970 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:41 22:40:41.971 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:41 22:40:41.972 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:41 22:40:41.992 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:41 22:40:41.992 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:41 22:40:41.998 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.022 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.072 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, 
groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.072 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.073 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.094 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.123 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.172 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.173 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.173 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.194 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.224 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:42 22:40:42.224 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:42 22:40:42.224 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:42 22:40:42.224 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:42 22:40:42.225 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:42 22:40:42.227 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:42 java.net.ConnectException: Connection refused 22:40:42 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:42 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:42 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:42 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:42 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:42 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:42 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:42 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:42 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:42 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:42 22:40:42.231 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:42 22:40:42.231 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
22:40:42 22:40:42.273 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.273 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.294 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.332 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.373 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.373 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.382 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.394 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.433 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.474 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:42 22:40:42.474 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:42 22:40:42.474 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:42 22:40:42.474 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:42 22:40:42.474 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:42 22:40:42.475 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:42 java.net.ConnectException: Connection refused 22:40:42 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:42 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:42 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:42 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:42 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:42 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:42 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:42 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:42 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:42 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:42 22:40:42.476 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:42 22:40:42.476 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
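
[editor's note] On the consumer side, the heartbeat thread cannot reach a broker either, so the FindCoordinator request is never sent and the NotificationConsumer keeps logging "Polling for messages from topic: null" (the topic name apparently never resolved because initialization failed earlier). A rough Java poll-loop sketch of that pattern, under the assumption that a plain KafkaConsumer sits behind those log lines; the topic name and security settings are placeholders and are omitted or simplified here:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical poll loop; "SDC-DISTR-NOTIF-TOPIC" is a placeholder name.
public class NotificationPollSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList("SDC-DISTR-NOTIF-TOPIC"));
      for (int i = 0; i < 10; i++) {
        // With no reachable broker the group coordinator lookup keeps failing
        // and poll() returns no records, matching the DEBUG lines above.
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> r : records) {
          System.out.println("received notification from broker: " + r.value());
        }
      }
    }
  }
}
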
22:40:42 22:40:42.476 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.484 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.494 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.534 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.576 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.576 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.584 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.594 [pool-8-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.635 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.676 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.677 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.685 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.694 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.735 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since 
no node is available 22:40:42 22:40:42.777 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.777 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.786 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.794 [pool-8-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.836 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.877 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.877 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.886 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.894 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:42 22:40:42.936 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.977 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:42 22:40:42.978 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:42 22:40:42.987 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:42 22:40:42.994 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for 
messages from topic: null 22:40:43 22:40:43.005 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:43 22:40:43.005 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:43 22:40:43.006 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.037 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.078 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.078 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.087 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.105 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.138 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.178 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.178 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.188 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:43 22:40:43.188 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:43 22:40:43.188 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:43 22:40:43.189 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:43 22:40:43.189 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:43 22:40:43.190 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:43 java.net.ConnectException: Connection refused 22:40:43 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:43 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:43 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:43 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:43 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:43 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:43 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:43 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:43 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:43 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:43 22:40:43.190 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:43 22:40:43.190 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
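
[editor's note] A few records earlier the test calls DistributionClientImpl.sendNotificationStatus and immediately logs "client was not initialized", i.e. the status is dropped before it ever reaches the producer. A purely hypothetical sketch of that guard pattern (the class, method and result names below are illustrative only, not the actual DistributionClientImpl code):

// Hypothetical guard sketch; the real client API and return type may differ.
public class StatusSenderSketch {
  private volatile boolean initialized = false;

  public enum Result { SUCCESS, NOT_INITIALIZED }

  public Result sendNotificationStatus(String statusJson) {
    if (!initialized) {
      // Corresponds to the "client was not initialized" DEBUG line above:
      // nothing is handed to the Kafka producer.
      return Result.NOT_INITIALIZED;
    }
    // ... otherwise serialize and publish statusJson via the producer ...
    return Result.SUCCESS;
  }

  public void markInitialized() {
    initialized = true;
  }
}
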
22:40:43 22:40:43.205 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.206 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 22:40:43 22:40:43.206 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:43 "serviceName" : "Testnotificationser1", 22:40:43 "serviceVersion" : "1.0", 22:40:43 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:43 "serviceDescription" : "TestNotificationVF1", 22:40:43 "bugabuga" : "xyz", 22:40:43 "resources" : [{ 22:40:43 "resourceInstanceName" : "testnotificationvf11", 22:40:43 "resourceName" : "TestNotificationVF1", 22:40:43 "resourceVersion" : "1.0", 22:40:43 "resoucreType" : "VF", 22:40:43 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:43 "artifacts" : [{ 22:40:43 "artifactName" : "heat.yaml", 22:40:43 "artifactType" : "HEAT", 22:40:43 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:43 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:43 "artifactDescription" : "heat", 22:40:43 "artifactTimeout" : 60, 22:40:43 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:43 "artifactBuga" : "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:43 "artifactVersion" : "1" 22:40:43 }, { 22:40:43 "artifactName" : "buga.bug", 22:40:43 "artifactType" : "BUGA_BUGA", 22:40:43 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:43 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:43 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 22:40:43 "artifactTimeout" : 0, 22:40:43 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:43 "artifactVersion" : "1", 22:40:43 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:43 } 22:40:43 ] 22:40:43 } 22:40:43 ]} 22:40:43 22:40:43.221 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 22:40:43 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:43 "serviceName": "Testnotificationser1", 22:40:43 "serviceVersion": "1.0", 22:40:43 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:43 "serviceDescription": "TestNotificationVF1", 22:40:43 "resources": [ 22:40:43 { 22:40:43 "resourceInstanceName": "testnotificationvf11", 22:40:43 "resourceName": "TestNotificationVF1", 22:40:43 "resourceVersion": "1.0", 22:40:43 "resoucreType": "VF", 22:40:43 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:43 "artifacts": [ 22:40:43 { 22:40:43 "artifactName": "heat.yaml", 22:40:43 "artifactType": "HEAT", 22:40:43 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:43 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:43 "artifactDescription": "heat", 22:40:43 "artifactTimeout": 60, 22:40:43 "artifactVersion": "1", 22:40:43 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:43 "relatedArtifactsInfo": [] 22:40:43 } 22:40:43 ] 22:40:43 } 22:40:43 ], 22:40:43 "serviceArtifacts": [] 22:40:43 } 22:40:43 22:40:43.279 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.279 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.291 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.305 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.341 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.379 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.379 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.391 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.406 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.442 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.479 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.480 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.492 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.506 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.542 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.580 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:43 22:40:43.580 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:43 22:40:43.580 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:43 22:40:43.580 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:43 22:40:43.581 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:43 22:40:43.582 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:43 java.net.ConnectException: Connection refused 22:40:43 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:43 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:43 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:43 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:43 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:43 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:43 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:43 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:43 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:43 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:43 22:40:43.582 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:43 22:40:43.582 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
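
[editor's note] The notification payload logged above (distributionID, serviceName, resources[].artifacts[], unknown extras such as "bugabuga") is what the consumer parses before "sending notification to client". The \u003d escaping suggests Gson, though any JSON mapper would do; a small sketch of mapping a couple of those fields, where the classes below only echo field names visible in the logged JSON and are not the client's actual model:

import com.google.gson.Gson;
import java.util.List;

// Minimal mapping sketch: unknown fields in the payload are simply ignored.
public class NotificationParseSketch {
  static class Artifact { String artifactName; String artifactType; String artifactURL; }
  static class Resource { String resourceInstanceName; List<Artifact> artifacts; }
  static class Notification { String distributionID; String serviceName; List<Resource> resources; }

  public static void main(String[] args) {
    String json = "{\"distributionID\":\"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416\","
        + "\"serviceName\":\"Testnotificationser1\",\"resources\":[]}";
    Notification n = new Gson().fromJson(json, Notification.class);
    System.out.println(n.serviceName + " / " + n.distributionID);
  }
}
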
22:40:43 22:40:43.582 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.593 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.606 [pool-9-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.643 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.682 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.683 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.693 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.705 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.743 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.783 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.783 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.794 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.806 [pool-9-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.844 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since 
no node is available 22:40:43 22:40:43.883 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.883 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.894 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.905 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:43 22:40:43.945 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:43 22:40:43.984 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:43 22:40:43.984 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:43 22:40:43.995 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.005 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.014 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:44 22:40:44.014 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:44 22:40:44.021 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:44 22:40:44.021 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:44 22:40:44.023 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.045 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.084 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.084 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.095 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.121 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.146 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.185 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.185 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.196 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:44 22:40:44.196 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:44 22:40:44.196 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:44 22:40:44.196 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:44 22:40:44.197 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:44 22:40:44.197 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:44 java.net.ConnectException: Connection refused 22:40:44 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:44 at 
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:44 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:44 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:44 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:44 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:44 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:44 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:44 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:44 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:44 22:40:44.198 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:44 22:40:44.198 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:44 22:40:44.222 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.223 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 22:40:44 22:40:44.223 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:44 "serviceName" : "Testnotificationser1", 22:40:44 "serviceVersion" : "1.0", 22:40:44 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:44 "serviceDescription" : "TestNotificationVF1", 22:40:44 "resources" : [{ 22:40:44 "resourceInstanceName" : "testnotificationvf11", 22:40:44 "resourceName" : "TestNotificationVF1", 22:40:44 "resourceVersion" : "1.0", 22:40:44 "resoucreType" : "VF", 22:40:44 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:44 "artifacts" : [{ 22:40:44 "artifactName" : "sample-xml-alldata-1-1.xml", 22:40:44 "artifactType" : "YANG_XML", 22:40:44 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 22:40:44 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 22:40:44 "artifactDescription" : "MyYang", 22:40:44 "artifactTimeout" : 0, 22:40:44 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:44 "artifactVersion" : "1", 22:40:44 "relatedArtifacts" : [ 22:40:44 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 22:40:44 ] }, { 22:40:44 "artifactName" : "heat.yaml", 22:40:44 "artifactType" : "HEAT", 22:40:44 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:44 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:44 "artifactDescription" : "heat", 22:40:44 "artifactTimeout" : 60, 22:40:44 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:44 "artifactVersion" : "1", 22:40:44 "relatedArtifacts" : [ 22:40:44 "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:44 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 22:40:44 ] }, { 22:40:44 "artifactName" : "heat.env", 22:40:44 
"artifactType" : "HEAT_ENV", 22:40:44 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:44 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:44 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 22:40:44 "artifactTimeout" : 0, 22:40:44 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:44 "artifactVersion" : "1", 22:40:44 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:44 } 22:40:44 ] 22:40:44 } 22:40:44 ]} 22:40:44 22:40:44.232 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 22:40:44 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:44 "serviceName": "Testnotificationser1", 22:40:44 "serviceVersion": "1.0", 22:40:44 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:44 "serviceDescription": "TestNotificationVF1", 22:40:44 "resources": [ 22:40:44 { 22:40:44 "resourceInstanceName": "testnotificationvf11", 22:40:44 "resourceName": "TestNotificationVF1", 22:40:44 "resourceVersion": "1.0", 22:40:44 "resoucreType": "VF", 22:40:44 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:44 "artifacts": [ 22:40:44 { 22:40:44 "artifactName": "sample-xml-alldata-1-1.xml", 22:40:44 "artifactType": "YANG_XML", 22:40:44 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 22:40:44 "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 22:40:44 "artifactDescription": "MyYang", 22:40:44 "artifactTimeout": 0, 22:40:44 "artifactVersion": "1", 22:40:44 "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:44 "relatedArtifacts": [ 22:40:44 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 22:40:44 ], 22:40:44 "relatedArtifactsInfo": [ 22:40:44 { 22:40:44 "artifactName": "heat.env", 22:40:44 "artifactType": "HEAT_ENV", 22:40:44 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:44 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:44 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:44 "artifactTimeout": 0, 22:40:44 "artifactVersion": "1", 22:40:44 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:44 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:44 } 22:40:44 ] 22:40:44 }, 22:40:44 { 22:40:44 "artifactName": "heat.yaml", 22:40:44 "artifactType": "HEAT", 22:40:44 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:44 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:44 "artifactDescription": "heat", 22:40:44 "artifactTimeout": 60, 22:40:44 "artifactVersion": "1", 22:40:44 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:44 "generatedArtifact": { 22:40:44 "artifactName": "heat.env", 22:40:44 "artifactType": "HEAT_ENV", 22:40:44 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:44 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:44 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:44 "artifactTimeout": 0, 22:40:44 "artifactVersion": "1", 22:40:44 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:44 
"generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:44 }, 22:40:44 "relatedArtifacts": [ 22:40:44 "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:44 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 22:40:44 ], 22:40:44 "relatedArtifactsInfo": [ 22:40:44 { 22:40:44 "artifactName": "sample-xml-alldata-1-1.xml", 22:40:44 "artifactType": "YANG_XML", 22:40:44 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 22:40:44 "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 22:40:44 "artifactDescription": "MyYang", 22:40:44 "artifactTimeout": 0, 22:40:44 "artifactVersion": "1", 22:40:44 "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:44 "relatedArtifacts": [ 22:40:44 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 22:40:44 ], 22:40:44 "relatedArtifactsInfo": [ 22:40:44 { 22:40:44 "artifactName": "heat.env", 22:40:44 "artifactType": "HEAT_ENV", 22:40:44 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:44 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:44 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:44 "artifactTimeout": 0, 22:40:44 "artifactVersion": "1", 22:40:44 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:44 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:44 } 22:40:44 ] 22:40:44 }, 22:40:44 { 22:40:44 "artifactName": "heat.env", 22:40:44 "artifactType": "HEAT_ENV", 22:40:44 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:44 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:44 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:44 "artifactTimeout": 0, 22:40:44 "artifactVersion": "1", 22:40:44 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:44 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:44 } 22:40:44 ] 22:40:44 }, 22:40:44 { 22:40:44 "artifactName": "heat.env", 22:40:44 "artifactType": "HEAT_ENV", 22:40:44 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:44 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:44 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:44 "artifactTimeout": 0, 22:40:44 "artifactVersion": "1", 22:40:44 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:44 "relatedArtifactsInfo": [] 22:40:44 } 22:40:44 ] 22:40:44 } 22:40:44 ], 22:40:44 "serviceArtifacts": [] 22:40:44 } 22:40:44 22:40:44.285 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.285 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.298 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.321 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.349 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.386 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.386 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.399 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.422 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.449 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.487 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.487 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.499 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.522 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.550 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.587 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.587 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.600 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.621 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.650 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.687 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:44 22:40:44.688 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:44 22:40:44.688 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:44 22:40:44.688 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:44 22:40:44.688 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:44 22:40:44.689 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:44 java.net.ConnectException: Connection refused 22:40:44 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:44 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:44 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:44 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:44 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:44 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:44 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:44 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:44 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:44 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:44 22:40:44.689 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:44 22:40:44.689 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:44 22:40:44.689 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.700 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.721 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.751 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.790 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.790 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.801 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.822 [pool-10-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.851 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.890 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.891 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:44 22:40:44.902 
[kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.922 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:44 22:40:44.952 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:44 22:40:44.991 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:44 22:40:44.991 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.002 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.021 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.032 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:45 22:40:45.032 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:45 22:40:45.038 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:45 22:40:45.038 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:45 22:40:45.051 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.052 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:45 22:40:45.053 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:45 22:40:45.053 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:45 22:40:45.053 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:45 22:40:45.053 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] 
DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:45 22:40:45.053 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:45 java.net.ConnectException: Connection refused 22:40:45 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:45 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:45 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:45 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:45 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:45 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:45 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:45 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:45 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:45 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:45 22:40:45.053 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:45 22:40:45.053 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
22:40:45 22:40:45.091 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.091 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.139 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.154 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.191 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.192 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.204 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.239 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.240 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 22:40:45 22:40:45.240 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:45 "serviceName" : "Testnotificationser1", 22:40:45 "serviceVersion" : "1.0", 22:40:45 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:45 "serviceDescription" : "TestNotificationVF1", 22:40:45 "resources" : [{ 22:40:45 "resourceInstanceName" : "testnotificationvf11", 22:40:45 "resourceName" : "TestNotificationVF1", 22:40:45 "resourceVersion" : "1.0", 22:40:45 "resoucreType" : "VF", 22:40:45 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:45 "artifacts" : [{ 22:40:45 "artifactName" : "sample-xml-alldata-1-1.xml", 22:40:45 "artifactType" : "YANG_XML", 22:40:45 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 22:40:45 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 22:40:45 "artifactDescription" : "MyYang", 22:40:45 "artifactTimeout" : 0, 22:40:45 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:45 "artifactVersion" : "1" 22:40:45 }, { 22:40:45 "artifactName" : "heat.yaml", 22:40:45 "artifactType" : "HEAT", 22:40:45 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:45 "artifactChecksum" : 
"ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:45 "artifactDescription" : "heat", 22:40:45 "artifactTimeout" : 60, 22:40:45 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:45 "artifactVersion" : "1" 22:40:45 }, { 22:40:45 "artifactName" : "heat.env", 22:40:45 "artifactType" : "HEAT_ENV", 22:40:45 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:45 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:45 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 22:40:45 "artifactTimeout" : 0, 22:40:45 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:45 "artifactVersion" : "1", 22:40:45 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:45 } 22:40:45 ] 22:40:45 } 22:40:45 ]} 22:40:45 22:40:45.245 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 22:40:45 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:45 "serviceName": "Testnotificationser1", 22:40:45 "serviceVersion": "1.0", 22:40:45 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:45 "serviceDescription": "TestNotificationVF1", 22:40:45 "resources": [ 22:40:45 { 22:40:45 "resourceInstanceName": "testnotificationvf11", 22:40:45 "resourceName": "TestNotificationVF1", 22:40:45 "resourceVersion": "1.0", 22:40:45 "resoucreType": "VF", 22:40:45 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:45 "artifacts": [ 22:40:45 { 22:40:45 "artifactName": "heat.yaml", 22:40:45 "artifactType": "HEAT", 22:40:45 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:45 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:45 "artifactDescription": "heat", 22:40:45 "artifactTimeout": 60, 22:40:45 "artifactVersion": "1", 22:40:45 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:45 "generatedArtifact": { 22:40:45 "artifactName": "heat.env", 22:40:45 "artifactType": "HEAT_ENV", 22:40:45 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:45 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:45 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:45 "artifactTimeout": 0, 22:40:45 "artifactVersion": "1", 22:40:45 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:45 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:45 }, 22:40:45 "relatedArtifactsInfo": [] 22:40:45 } 22:40:45 ] 22:40:45 } 22:40:45 ], 22:40:45 "serviceArtifacts": [] 22:40:45 } 22:40:45 22:40:45.254 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.292 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.292 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.305 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.339 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.355 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.392 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.393 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.405 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.439 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.455 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.493 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.493 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.506 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.539 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.556 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.593 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.593 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.606 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.639 [pool-11-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.657 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.694 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:45 22:40:45.694 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:45 22:40:45.694 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:45 22:40:45.694 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:45 22:40:45.694 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:45 22:40:45.695 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:45 java.net.ConnectException: Connection refused 22:40:45 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:45 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:45 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:45 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:45 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:45 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:45 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:45 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:45 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:45 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:45 22:40:45.695 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:45 22:40:45.695 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:45 22:40:45.695 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.707 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.739 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.757 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.796 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.796 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.808 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.839 [pool-11-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.858 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.896 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.896 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:45 22:40:45.909 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.939 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:45 22:40:45.959 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:45 22:40:45.997 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:45 22:40:45.997 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.009 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.039 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.048 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:46 22:40:46.048 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:46 22:40:46.051 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.059 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.097 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:46 22:40:46.097 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.110 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.149 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 
22:40:46 22:40:46.160 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.197 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:46 22:40:46.197 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.210 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:46 22:40:46.210 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:46 22:40:46.210 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:46 22:40:46.211 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:46 22:40:46.211 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:46 22:40:46.212 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:46 java.net.ConnectException: Connection refused 22:40:46 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:46 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:46 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:46 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:46 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:46 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:46 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:46 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:46 at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:46 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:46 22:40:46.212 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:46 22:40:46.212 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:46 22:40:46.250 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.250 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 22:40:46 22:40:46.250 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "5v1234d8-5b6d-42c4-7t54-47v95n58qb7", "serviceName" : "srv1", "serviceVersion": "2.0", "serviceUUID" : "4e0697d8-5b6d-42c4-8c74-46c33d46624c", "serviceArtifacts":[ { "artifactName" : "ddd.yml", "artifactType" : "DG_XML", "artifactTimeout" : "65", "artifactDescription" : "description", "artifactURL" : "/sdc/v1/catalog/services/srv1/2.0/resources/ddd/3.0/artifacts/ddd.xml" , "resourceUUID" : "4e5874d8-5b6d-42c4-8c74-46c33d90drw" , "checksum" : "15e389rnrp58hsw==" } ]} 22:40:46 22:40:46.254 [pool-12-thread-2] ERROR org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 22:40:46 22:40:46.254 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 22:40:46 java.lang.NullPointerException: null 22:40:46 at org.onap.sdc.impl.NotificationConsumer.buildResourceInstancesLogic(NotificationConsumer.java:98) 22:40:46 at org.onap.sdc.impl.NotificationConsumer.buildCallbackNotificationLogic(NotificationConsumer.java:87) 22:40:46 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:65) 22:40:46 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 22:40:46 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 22:40:46 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 22:40:46 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 22:40:46 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 22:40:46 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:46 22:40:46.298 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:46 22:40:46.298 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.312 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.349 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.362 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.398 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:46 22:40:46.398 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.413 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.450 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.463 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.498 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:46 22:40:46.498 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.513 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.549 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.564 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.598 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:46 22:40:46.599 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.614 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.649 [pool-12-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.664 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.699 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:46 22:40:46.699 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.715 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.749 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.765 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.799 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:46 22:40:46.799 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:46 22:40:46.799 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:46 22:40:46.800 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:46 22:40:46.800 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:46 
22:40:46.801 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:46 java.net.ConnectException: Connection refused 22:40:46 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:46 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:46 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:46 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:46 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:46 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:46 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:46 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:46 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:46 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:46 22:40:46.801 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:46 22:40:46.801 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
22:40:46 22:40:46.801 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.815 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.850 [pool-12-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.865 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.901 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:46 22:40:46.902 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:46 22:40:46.916 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:46 22:40:46.950 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:46 22:40:46.966 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.002 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.002 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.016 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.050 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.055 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:47 22:40:47.055 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:47 22:40:47.058 [pool-13-thread-1] INFO 
org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.066 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.102 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.103 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.117 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.156 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.167 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.203 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.203 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.217 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:47 22:40:47.217 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:47 22:40:47.217 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:47 22:40:47.218 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:47 22:40:47.218 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:47 22:40:47.218 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:47 java.net.ConnectException: Connection refused 22:40:47 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:47 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:47 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:47 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:47 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:47 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:47 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:47 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:47 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:47 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:47 22:40:47.219 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:47 22:40:47.219 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
22:40:47 22:40:47.257 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.257 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 22:40:47 22:40:47.257 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:47 "serviceName" : "Testnotificationser1", 22:40:47 "serviceVersion" : "1.0", 22:40:47 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:47 "serviceDescription" : "TestNotificationVF1", 22:40:47 "resources" : [{ 22:40:47 "resourceInstanceName" : "testnotificationvf11", 22:40:47 "resourceName" : "TestNotificationVF1", 22:40:47 "resourceVersion" : "1.0", 22:40:47 "resoucreType" : "VF", 22:40:47 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:47 "artifacts" : [{ 22:40:47 "artifactName" : "sample-xml-alldata-1-1.xml", 22:40:47 "artifactType" : "YANG_XML", 22:40:47 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 22:40:47 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 22:40:47 "artifactDescription" : "MyYang", 22:40:47 "artifactTimeout" : 0, 22:40:47 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:47 "artifactVersion" : "1" 22:40:47 }, { 22:40:47 "artifactName" : "heat.yaml", 22:40:47 "artifactType" : "HEAT", 22:40:47 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:47 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:47 "artifactDescription" : "heat", 22:40:47 "artifactTimeout" : 60, 22:40:47 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:47 "artifactVersion" : "1" 22:40:47 }, { 22:40:47 "artifactName" : "heat.env", 22:40:47 "artifactType" : "HEAT_ENV", 22:40:47 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:47 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:47 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 22:40:47 "artifactTimeout" : 0, 22:40:47 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:47 "artifactVersion" : "1", 22:40:47 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:47 } 22:40:47 ] 22:40:47 } 22:40:47 ]} 22:40:47 22:40:47.261 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 22:40:47 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:47 "serviceName": "Testnotificationser1", 22:40:47 "serviceVersion": "1.0", 22:40:47 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:47 "serviceDescription": "TestNotificationVF1", 22:40:47 "resources": [ 22:40:47 { 22:40:47 "resourceInstanceName": "testnotificationvf11", 22:40:47 "resourceName": "TestNotificationVF1", 22:40:47 "resourceVersion": "1.0", 22:40:47 "resoucreType": "VF", 22:40:47 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:47 "artifacts": [ 22:40:47 { 22:40:47 "artifactName": "heat.yaml", 22:40:47 "artifactType": "HEAT", 22:40:47 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:47 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:47 
"artifactDescription": "heat", 22:40:47 "artifactTimeout": 60, 22:40:47 "artifactVersion": "1", 22:40:47 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:47 "generatedArtifact": { 22:40:47 "artifactName": "heat.env", 22:40:47 "artifactType": "HEAT_ENV", 22:40:47 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:47 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:47 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:47 "artifactTimeout": 0, 22:40:47 "artifactVersion": "1", 22:40:47 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:47 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:47 }, 22:40:47 "relatedArtifactsInfo": [] 22:40:47 } 22:40:47 ] 22:40:47 } 22:40:47 ], 22:40:47 "serviceArtifacts": [] 22:40:47 } 22:40:47 22:40:47.303 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.303 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.319 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.356 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.369 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.404 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.404 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.420 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.457 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.470 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.504 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.504 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.520 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.557 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.570 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.604 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.604 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.621 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.657 [pool-13-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.671 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.705 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.705 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.721 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.756 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.772 [kafka-producer-network-thread | 
mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.805 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:47 22:40:47.805 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.822 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.856 [pool-13-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.872 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.906 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:47 22:40:47.906 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:47 22:40:47.906 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:47 22:40:47.906 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:47 22:40:47.906 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:47 22:40:47.907 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:47 java.net.ConnectException: Connection refused 22:40:47 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:47 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:47 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 
22:40:47 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:47 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:47 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:47 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:47 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:47 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:47 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:47 22:40:47.908 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:47 22:40:47.908 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:47 22:40:47.908 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:47 22:40:47.922 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:47 22:40:47.956 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:47 22:40:47.973 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.008 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.009 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.023 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.056 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.063 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 22:40:48 22:40:48.063 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:48 22:40:48.065 [pool-14-thread-1] INFO 
org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.073 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.109 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.109 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.124 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.164 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.174 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.209 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.209 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.224 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.265 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.266 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 22:40:48 22:40:48.266 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { 22:40:48 "distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:48 "serviceName" : "Testnotificationser1", 22:40:48 "serviceVersion" : "1.0", 22:40:48 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:48 "serviceDescription" : "TestNotificationVF1", 22:40:48 "serviceArtifacts" : [{ 22:40:48 "artifactName" : "sample-xml-alldata-1-1.xml", 22:40:48 "artifactType" : "YANG_XML", 22:40:48 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 22:40:48 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 22:40:48 "artifactDescription" 
: "MyYang", 22:40:48 "artifactTimeout" : 0, 22:40:48 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:48 "artifactVersion" : "1" 22:40:48 }, { 22:40:48 "artifactName" : "heat.yaml", 22:40:48 "artifactType" : "HEAT", 22:40:48 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:48 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:48 "artifactDescription" : "heat", 22:40:48 "artifactTimeout" : 60, 22:40:48 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:48 "artifactVersion" : "1" 22:40:48 }, { 22:40:48 "artifactName" : "heat.env", 22:40:48 "artifactType" : "HEAT_ENV", 22:40:48 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:48 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:48 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 22:40:48 "artifactTimeout" : 0, 22:40:48 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:48 "artifactVersion" : "1", 22:40:48 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:48 } 22:40:48 ], 22:40:48 "resources" : [{ 22:40:48 "resourceInstanceName" : "testnotificationvf11", 22:40:48 "resourceName" : "TestNotificationVF1", 22:40:48 "resourceVersion" : "1.0", 22:40:48 "resoucreType" : "VF", 22:40:48 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:48 "artifacts" : [{ 22:40:48 "artifactName" : "sample-xml-alldata-1-1.xml", 22:40:48 "artifactType" : "YANG_XML", 22:40:48 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 22:40:48 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 22:40:48 "artifactDescription" : "MyYang", 22:40:48 "artifactTimeout" : 0, 22:40:48 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 22:40:48 "artifactVersion" : "1" 22:40:48 }, { 22:40:48 "artifactName" : "heat.yaml", 22:40:48 "artifactType" : "HEAT", 22:40:48 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:48 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:48 "artifactDescription" : "heat", 22:40:48 "artifactTimeout" : 60, 22:40:48 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:48 "artifactVersion" : "1" 22:40:48 }, { 22:40:48 "artifactName" : "heat.env", 22:40:48 "artifactType" : "HEAT_ENV", 22:40:48 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:48 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:48 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 22:40:48 "artifactTimeout" : 0, 22:40:48 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:48 "artifactVersion" : "1", 22:40:48 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:48 } 22:40:48 ] 22:40:48 } 22:40:48 ] 22:40:48 } 22:40:48 22:40:48.275 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.277 [pool-14-thread-2] DEBUG 
org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 22:40:48 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 22:40:48 "serviceName": "Testnotificationser1", 22:40:48 "serviceVersion": "1.0", 22:40:48 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 22:40:48 "serviceDescription": "TestNotificationVF1", 22:40:48 "resources": [ 22:40:48 { 22:40:48 "resourceInstanceName": "testnotificationvf11", 22:40:48 "resourceName": "TestNotificationVF1", 22:40:48 "resourceVersion": "1.0", 22:40:48 "resoucreType": "VF", 22:40:48 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 22:40:48 "artifacts": [ 22:40:48 { 22:40:48 "artifactName": "heat.yaml", 22:40:48 "artifactType": "HEAT", 22:40:48 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:48 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:48 "artifactDescription": "heat", 22:40:48 "artifactTimeout": 60, 22:40:48 "artifactVersion": "1", 22:40:48 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:48 "generatedArtifact": { 22:40:48 "artifactName": "heat.env", 22:40:48 "artifactType": "HEAT_ENV", 22:40:48 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:48 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:48 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:48 "artifactTimeout": 0, 22:40:48 "artifactVersion": "1", 22:40:48 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:48 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:48 }, 22:40:48 "relatedArtifactsInfo": [] 22:40:48 } 22:40:48 ] 22:40:48 } 22:40:48 ], 22:40:48 "serviceArtifacts": [ 22:40:48 { 22:40:48 "artifactName": "heat.yaml", 22:40:48 "artifactType": "HEAT", 22:40:48 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 22:40:48 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 22:40:48 "artifactDescription": "heat", 22:40:48 "artifactTimeout": 60, 22:40:48 "artifactVersion": "1", 22:40:48 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 22:40:48 "generatedArtifact": { 22:40:48 "artifactName": "heat.env", 22:40:48 "artifactType": "HEAT_ENV", 22:40:48 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 22:40:48 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 22:40:48 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 22:40:48 "artifactTimeout": 0, 22:40:48 "artifactVersion": "1", 22:40:48 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 22:40:48 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 22:40:48 } 22:40:48 } 22:40:48 ] 22:40:48 } 22:40:48 22:40:48.310 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.310 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.326 
[kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:48 22:40:48.326 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:48 22:40:48.326 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:48 22:40:48.326 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:48 22:40:48.327 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:48 22:40:48.328 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:48 java.net.ConnectException: Connection refused 22:40:48 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:48 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:48 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:48 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:48 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:48 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:48 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:48 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:48 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:48 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:48 22:40:48.329 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:48 22:40:48.329 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
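The "sending notification to client" payload above is what the library forwards to the registered callback after filtering the raw broker message down to the artifact types the client registered for: the YANG_XML artifact from the raw notification is dropped, while the HEAT artifact keeps its generated HEAT_ENV counterpart. A hedged sketch of a callback that walks that structure follows; the callback interface and accessor names are assumed from the org.onap.sdc.api packages and from the JSON field names, not quoted from this build.

import org.onap.sdc.api.consumer.INotificationCallback;
import org.onap.sdc.api.notification.IArtifactInfo;
import org.onap.sdc.api.notification.INotificationData;
import org.onap.sdc.api.notification.IResourceInstance;

// Hypothetical callback: logs the distribution and the filtered artifact tree
// seen in the payload above (HEAT artifact plus its generated HEAT_ENV).
public class LoggingNotificationCallback implements INotificationCallback {
    @Override
    public void activateCallback(INotificationData data) {
        System.out.println("distributionID=" + data.getDistributionID()
                + " service=" + data.getServiceName() + "/" + data.getServiceVersion());
        for (IResourceInstance resource : data.getResources()) {
            for (IArtifactInfo artifact : resource.getArtifacts()) {
                System.out.println("  artifact " + artifact.getArtifactName()
                        + " (" + artifact.getArtifactType() + ") at " + artifact.getArtifactURL());
                IArtifactInfo generated = artifact.getGeneratedArtifact();
                if (generated != null) {
                    System.out.println("    generated counterpart: " + generated.getArtifactName());
                }
            }
        }
    }
}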
22:40:48 22:40:48.364 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.410 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.410 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.429 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.464 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.480 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.510 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.510 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.530 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.564 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.580 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.611 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.611 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.631 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 
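On the consumer side, the recurring "No broker available to send FindCoordinator request" entries come from the group-coordinator lookup for groupId mso-group, which cannot proceed until some broker is reachable, while the NotificationConsumer poll loop keeps running and returning empty batches. A rough consumer sketch under the same SASL/PLAIN placeholder assumptions is below; the topic name is a placeholder too, since the log itself reports the topic as null in this run.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BrokerUnavailableConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43439");    // unreachable in this run
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");         // placeholder client id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"secret\";");               // placeholder credentials
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("SDC-DISTR-NOTIF-TOPIC")); // placeholder topic name
            while (true) {
                // With no broker, poll() returns empty batches while the coordinator
                // thread keeps retrying FindCoordinator, as the DEBUG lines above show.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
}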
22:40:48 22:40:48.664 [pool-14-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.681 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.711 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.711 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.731 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.764 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.781 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.812 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:48 22:40:48.812 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.832 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.865 [pool-14-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.882 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.912 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:48 22:40:48.912 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:48 22:40:48.912 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:48 22:40:48.912 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:48 22:40:48.912 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:48 22:40:48.913 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:48 java.net.ConnectException: Connection refused 22:40:48 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:48 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:48 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:48 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:48 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:48 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:48 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:48 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 22:40:48 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 22:40:48 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 22:40:48 22:40:48.913 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Node 1 disconnected. 22:40:48 22:40:48.913 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 
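These broker-availability problems surface in the log only as DEBUG/WARN entries on background threads; application code normally observes them through the Callback (or Future) returned by send(), which is completed exceptionally, typically with a TimeoutException once the configured delivery.timeout.ms elapses without an acknowledgement. A hedged illustration of that pattern, with hypothetical class and method names:

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical helper: reports delivery failure explicitly instead of relying
// on the NetworkClient WARN lines seen above.
public class SendWithErrorHandling {
    public static void sendOrReport(Producer<String, String> producer, String topic, String payload) {
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, payload);
        producer.send(record, (RecordMetadata metadata, Exception exception) -> {
            if (exception != null) {
                // With no reachable broker this fires after delivery.timeout.ms,
                // usually with org.apache.kafka.common.errors.TimeoutException.
                System.err.println("delivery failed: " + exception);
            } else {
                System.out.println("delivered to " + metadata.topic() + "-"
                        + metadata.partition() + "@" + metadata.offset());
            }
        });
    }
}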
22:40:48 22:40:48.914 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:48 22:40:48.932 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:48 22:40:48.964 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:48 22:40:48.982 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.014 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:49 22:40:49.014 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:49 22:40:49.033 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.064 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 22:40:49 [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.726 s - in org.onap.sdc.impl.NotificationConsumerTest 22:40:49 [INFO] Running org.onap.sdc.impl.HeatParserTest 22:40:49 22:40:49.073 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: just text 22:40:49 22:40:49.083 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.114 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:49 22:40:49.114 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:49 22:40:49.154 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.176 [main] ERROR 
org.onap.sdc.utils.YamlToObjectConverter - Failed to convert YAML just text to object. 22:40:49 org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:org.onap.sdc.utils.heat.HeatConfiguration; exception=No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null 22:40:49 in 'string', line 1, column 1: 22:40:49 just text 22:40:49 ^ 22:40:49 22:40:49 at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:336) 22:40:49 at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:230) 22:40:49 at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:220) 22:40:49 at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:174) 22:40:49 at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:158) 22:40:49 at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:491) 22:40:49 at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:470) 22:40:49 at org.onap.sdc.utils.YamlToObjectConverter.convertFromString(YamlToObjectConverter.java:113) 22:40:49 at org.onap.sdc.utils.heat.HeatParser.getHeatParameters(HeatParser.java:60) 22:40:49 at org.onap.sdc.impl.HeatParserTest.testParametersParsingInvalidYaml(HeatParserTest.java:122) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 22:40:49 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 22:40:49 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 22:40:49 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 22:40:49 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 22:40:49 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 22:40:49 Caused by: org.yaml.snakeyaml.error.YAMLException: No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null 22:40:49 at org.yaml.snakeyaml.constructor.Constructor$ConstructScalar.construct(Constructor.java:393) 22:40:49 at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:332) 22:40:49 ... 76 common frames omitted 22:40:49 22:40:49.176 [main] ERROR org.onap.sdc.utils.heat.HeatParser - Couldn't parse HEAT template. 22:40:49 22:40:49.176 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 22:40:49 22:40:49.196 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 22:40:49 22:40:49 description: Simple template to deploy a stack with two virtual machine instances 22:40:49 22:40:49 parameters: 22:40:49 image_name_1: 22:40:49 type: string 22:40:49 label: Image Name 22:40:49 description: SCOIMAGE Specify an image name for instance1 22:40:49 default: cirros-0.3.1-x86_64 22:40:49 image_name_2: 22:40:49 type: string 22:40:49 label: Image Name 22:40:49 description: SCOIMAGE Specify an image name for instance2 22:40:49 default: cirros-0.3.1-x86_64 22:40:49 network_id: 22:40:49 type: string 22:40:49 label: Network ID 22:40:49 description: SCONETWORK Network to be used for the compute instance 22:40:49 hidden: true 22:40:49 constraints: 22:40:49 - length: { min: 6, max: 8 } 22:40:49 description: Password length must be between 6 and 8 characters. 
22:40:49 - range: { min: 6, max: 8 } 22:40:49 description: Range description 22:40:49 - allowed_values: 22:40:49 - m1.small 22:40:49 - m1.medium 22:40:49 - m1.large 22:40:49 description: Allowed values description 22:40:49 - allowed_pattern: "[a-zA-Z0-9]+" 22:40:49 description: Password must consist of characters and numbers only. 22:40:49 - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*" 22:40:49 description: Password must start with an uppercase character. 22:40:49 - custom_constraint: nova.keypair 22:40:49 description: Custom description 22:40:49 22:40:49 resources: 22:40:49 my_instance1: 22:40:49 type: OS::Nova::Server 22:40:49 properties: 22:40:49 image: { get_param: image_name_1 } 22:40:49 flavor: m1.small 22:40:49 networks: 22:40:49 - network : { get_param : network_id } 22:40:49 my_instance2: 22:40:49 type: OS::Nova::Server 22:40:49 properties: 22:40:49 image: { get_param: image_name_2 } 22:40:49 flavor: m1.tiny 22:40:49 networks: 22:40:49 - network : { get_param : network_id } 22:40:49 22:40:49.205 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.214 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:49 22:40:49.215 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:49 Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 22:40:49 22:40:49.242 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of 
characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 22:40:49 22:40:49.244 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 22:40:49 22:40:49 description: Simple template to deploy a stack with two virtual machine instances 22:40:49 22:40:49.245 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 22:40:49 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 s - in org.onap.sdc.impl.HeatParserTest 22:40:49 [INFO] Running org.onap.sdc.impl.DistributionStatusMessageImplTest 22:40:49 22:40:49.255 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initialize connection to node localhost:43439 (id: 1 rack: null) for sending metadata request 22:40:49 22:40:49.255 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.255 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Initiating connection to node localhost:43439 (id: 1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.256 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.256 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.256 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection with localhost/127.0.0.1 (channelId=1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 
22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.256 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Node 1 disconnected. 22:40:49 22:40:49.256 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Connection to node 1 (localhost/127.0.0.1:43439) could not be established. Broker may not be available. 22:40:49 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.008 s - in org.onap.sdc.impl.DistributionStatusMessageImplTest 22:40:49 [INFO] Running org.onap.sdc.impl.DistributionClientDownloadResultTest 22:40:49 [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.impl.DistributionClientDownloadResultTest 22:40:49 [INFO] Running org.onap.sdc.impl.ConfigurationValidatorTest 22:40:49 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.impl.ConfigurationValidatorTest 22:40:49 [INFO] Running org.onap.sdc.impl.DistributionClientTest 22:40:49 22:40:49.287 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.290 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 22:40:49 22:40:49.290 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 22:40:49 22:40:49.291 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@24a5db6d 22:40:49 22:40:49.292 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 22:40:49 acks = -1 22:40:49 batch.size = 16384 22:40:49 bootstrap.servers = [localhost:9092] 22:40:49 buffer.memory = 33554432 22:40:49 client.dns.lookup = use_all_dns_ips 22:40:49 client.id = mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1 22:40:49 compression.type = none 22:40:49 connections.max.idle.ms = 540000 22:40:49 delivery.timeout.ms = 120000 22:40:49 enable.idempotence = true 22:40:49 interceptor.classes = [] 22:40:49 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 22:40:49 linger.ms = 0 22:40:49 max.block.ms = 60000 22:40:49 max.in.flight.requests.per.connection = 5 22:40:49 max.request.size = 1048576 22:40:49 metadata.max.age.ms = 300000 22:40:49 metadata.max.idle.ms = 300000 22:40:49 metric.reporters = [] 22:40:49 metrics.num.samples = 2 22:40:49 metrics.recording.level = INFO 22:40:49 metrics.sample.window.ms = 30000 22:40:49 partitioner.adaptive.partitioning.enable = true 22:40:49 partitioner.availability.timeout.ms = 0 22:40:49 partitioner.class = null 22:40:49 partitioner.ignore.keys = false 22:40:49 receive.buffer.bytes = 32768 22:40:49 reconnect.backoff.max.ms = 1000 22:40:49 reconnect.backoff.ms = 50 22:40:49 request.timeout.ms = 30000 22:40:49 retries = 2147483647 22:40:49 retry.backoff.ms = 100 22:40:49 sasl.client.callback.handler.class = null 22:40:49 sasl.jaas.config = [hidden] 22:40:49 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:49 sasl.kerberos.min.time.before.relogin = 60000 22:40:49 sasl.kerberos.service.name = null 22:40:49 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:49 sasl.kerberos.ticket.renew.window.factor 
= 0.8 22:40:49 sasl.login.callback.handler.class = null 22:40:49 sasl.login.class = null 22:40:49 sasl.login.connect.timeout.ms = null 22:40:49 sasl.login.read.timeout.ms = null 22:40:49 sasl.login.refresh.buffer.seconds = 300 22:40:49 sasl.login.refresh.min.period.seconds = 60 22:40:49 sasl.login.refresh.window.factor = 0.8 22:40:49 sasl.login.refresh.window.jitter = 0.05 22:40:49 sasl.login.retry.backoff.max.ms = 10000 22:40:49 sasl.login.retry.backoff.ms = 100 22:40:49 sasl.mechanism = PLAIN 22:40:49 sasl.oauthbearer.clock.skew.seconds = 30 22:40:49 sasl.oauthbearer.expected.audience = null 22:40:49 sasl.oauthbearer.expected.issuer = null 22:40:49 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:49 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:49 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:49 sasl.oauthbearer.jwks.endpoint.url = null 22:40:49 sasl.oauthbearer.scope.claim.name = scope 22:40:49 sasl.oauthbearer.sub.claim.name = sub 22:40:49 sasl.oauthbearer.token.endpoint.url = null 22:40:49 security.protocol = SASL_PLAINTEXT 22:40:49 security.providers = null 22:40:49 send.buffer.bytes = 131072 22:40:49 socket.connection.setup.timeout.max.ms = 30000 22:40:49 socket.connection.setup.timeout.ms = 10000 22:40:49 ssl.cipher.suites = null 22:40:49 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 22:40:49 ssl.endpoint.identification.algorithm = https 22:40:49 ssl.engine.factory.class = null 22:40:49 ssl.key.password = null 22:40:49 ssl.keymanager.algorithm = SunX509 22:40:49 ssl.keystore.certificate.chain = null 22:40:49 ssl.keystore.key = null 22:40:49 ssl.keystore.location = null 22:40:49 ssl.keystore.password = null 22:40:49 ssl.keystore.type = JKS 22:40:49 ssl.protocol = TLSv1.3 22:40:49 ssl.provider = null 22:40:49 ssl.secure.random.implementation = null 22:40:49 ssl.trustmanager.algorithm = PKIX 22:40:49 ssl.truststore.certificates = null 22:40:49 ssl.truststore.location = null 22:40:49 ssl.truststore.password = null 22:40:49 ssl.truststore.type = JKS 22:40:49 transaction.timeout.ms = 60000 22:40:49 transactional.id = null 22:40:49 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 22:40:49 22:40:49 22:40:49.300 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Instantiated an idempotent producer. 22:40:49 22:40:49.304 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 22:40:49 22:40:49.304 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 22:40:49 22:40:49.305 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1731192049304 22:40:49 22:40:49.304 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Starting Kafka producer I/O thread. 
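The ProducerConfig dump above corresponds to the distribution client creating a notification producer with the standard Kafka Java client (version 3.3.1 per the AppInfoParser lines that follow). The snippet below is a minimal sketch of a producer built with the same non-default settings visible in the dump; it is not the client's actual code, and the class name, method signature and JAAS credentials are illustrative placeholders (the real sasl.jaas.config is [hidden] in the log).

    // Sketch only: a producer matching the non-default values in the dump above.
    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SdcNotificationProducerSketch {
        public static KafkaProducer<String, String> build(String bootstrapServers, String clientId) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);  // e.g. localhost:9092
            props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId);                  // e.g. mso-123456-producer-<uuid>
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder credentials; the real sasl.jaas.config is masked as [hidden] in the log.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"<user>\" password=\"<password>\";");
            // The remaining values in the dump (acks = -1, enable.idempotence = true,
            // retries = 2147483647, ...) are Kafka 3.3.1 defaults and need not be set explicitly.
            return new KafkaProducer<>(props);
        }
    }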
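Stepping back to the HeatParserTest output near the start of this section: the WARN "HEAT template parameters section wasn't found or is empty" is emitted for a template, like the one logged, whose YAML has no top-level parameters block. Below is a minimal standalone sketch of that check, assuming a YAML parser such as SnakeYAML is on the classpath; the class and method names are illustrative and are not the HeatParser's own API.

    // Sketch only: detect an absent or empty "parameters" section in a HEAT template.
    import java.util.Collections;
    import java.util.Map;
    import org.yaml.snakeyaml.Yaml;

    public class HeatParametersSketch {

        // Returns the "parameters" block, or an empty map when the section is absent
        // or empty (the case that triggers the WARN in the log above).
        @SuppressWarnings("unchecked")
        public static Map<String, Object> extractParameters(String templateContents) {
            Object loaded = new Yaml().load(templateContents);
            if (loaded instanceof Map) {
                Object params = ((Map<?, ?>) loaded).get("parameters");
                if (params instanceof Map && !((Map<?, ?>) params).isEmpty()) {
                    return (Map<String, Object>) params;
                }
            }
            return Collections.emptyMap();
        }

        public static void main(String[] args) {
            // Same shape as the template logged by the test: version + description, no parameters.
            String noParams = "heat_template_version: 2013-05-23\n"
                    + "description: Simple template to deploy a stack with two virtual machine instances\n";
            System.out.println(extractParameters(noParams)); // prints {}
        }
    }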
22:40:49 22:40:49.305 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Transition from state UNINITIALIZED to INITIALIZING 22:40:49 22:40:49.305 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.307 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 22:40:49 22:40:49.307 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.307 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.307 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.308 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.308 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Kafka producer started 22:40:49 DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 22:40:49 22:40:49.309 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.309 [main] WARN org.onap.sdc.impl.DistributionClientImpl - distribution client already initialized 22:40:49 22:40:49.311 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.313 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.313 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 22:40:49 22:40:49.314 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.314 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 22:40:49 22:40:49.315 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:49 22:40:49.315 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:49 22:40:49.316 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.316 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Node -1 disconnected. 22:40:49 22:40:49.316 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 22:40:49 22:40:49.317 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 22:40:49 22:40:49.317 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 
22:40:49 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.321 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.321 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 22:40:49 22:40:49.322 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.322 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 22:40:49 22:40:49.323 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.323 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 22:40:49 22:40:49.323 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.324 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 22:40:49 22:40:49.324 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.324 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_INVALID_SDC_FQDN, responseMessage=configuration is invalid: CONF_INVALID_SDC_FQDN] 22:40:49 22:40:49.325 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.325 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 22:40:49 22:40:49.326 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.326 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 22:40:49 22:40:49.326 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.327 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 22:40:49 22:40:49.327 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.327 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 22:40:49 22:40:49.328 [main] INFO 
org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.328 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 isUseHttpsWithSDC set to true 22:40:49 22:40:49.330 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.357 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.407 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.415 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:49 22:40:49.415 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:49 22:40:49.417 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.417 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 22:40:49 22:40:49.417 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.417 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.418 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.418 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.418 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG 
org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.419 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Node -1 disconnected. 22:40:49 22:40:49.419 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 22:40:49 22:40:49.419 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 22:40:49 22:40:49.419 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 22:40:49 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.427 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= bb822e5d-53cf-4d2e-af1f-13d7f0e4a49f url= /sdc/v1/artifactTypes 22:40:49 22:40:49.427 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://badhost:8080/sdc/v1/artifactTypes 22:40:49 22:40:49.457 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.486 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 22:40:49 java.net.UnknownHostException: badhost: System error 22:40:49 at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) 22:40:49 at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) 22:40:49 at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) 22:40:49 at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) 22:40:49 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) 22:40:49 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) 22:40:49 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) 22:40:49 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) 22:40:49 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) 22:40:49 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 22:40:49 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 22:40:49 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 22:40:49 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 22:40:49 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 22:40:49 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 22:40:49 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 22:40:49 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 22:40:49 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 22:40:49 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 22:40:49 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 22:40:49 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:299) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:128) 22:40:49 at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 22:40:49 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$z7O3kCzl.invokeWithArguments(Unknown Source) 22:40:49 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 22:40:49 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 22:40:49 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 22:40:49 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 22:40:49 at org.mockito.Answers.answer(Answers.java:99) 22:40:49 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 22:40:49 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 22:40:49 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:116) 22:40:49 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:189) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 22:40:49 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 22:40:49 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 22:40:49 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 22:40:49 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 22:40:49 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 22:40:49 22:40:49.499 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@901874b 22:40:49 22:40:49.499 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 22:40:49 22:40:49.499 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 22:40:49 22:40:49.500 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.508 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.515 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:49 22:40:49.515 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:49 22:40:49.519 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.519 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 22:40:49 22:40:49.519 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.519 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.519 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.519 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.520 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at 
org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.520 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Node -1 disconnected. 22:40:49 22:40:49.520 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 22:40:49 22:40:49.520 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 22:40:49 22:40:49.520 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 22:40:49 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.537 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 736a921a-2d89-4004-8703-3b122e17a8c9 url= /sdc/v1/artifactTypes 22:40:49 22:40:49.538 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://localhost:8181/sdc/v1/artifactTypes 22:40:49 22:40:49.545 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 22:40:49 org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 22:40:49 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 22:40:49 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 22:40:49 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 22:40:49 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 22:40:49 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 22:40:49 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 22:40:49 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 22:40:49 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 22:40:49 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 22:40:49 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 22:40:49 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 22:40:49 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 22:40:49 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:299) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:128) 22:40:49 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 22:40:49 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$z7O3kCzl.invokeWithArguments(Unknown Source) 22:40:49 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 22:40:49 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 22:40:49 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 22:40:49 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 22:40:49 at org.mockito.Answers.answer(Answers.java:99) 22:40:49 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 22:40:49 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 22:40:49 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 22:40:49 at 
org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:116) 22:40:49 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:195) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 22:40:49 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 22:40:49 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 22:40:49 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 22:40:49 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 22:40:49 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 22:40:49 Caused by: java.net.ConnectException: Connection refused (Connection refused) 22:40:49 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 22:40:49 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 22:40:49 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 22:40:49 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 22:40:49 at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 22:40:49 at java.base/java.net.Socket.connect(Socket.java:609) 22:40:49 at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368) 22:40:49 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 22:40:49 ... 98 common frames omitted 22:40:49 22:40:49.545 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@7884fbca 22:40:49 22:40:49.545 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 22:40:49 22:40:49.545 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 22:40:49 22:40:49.546 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.546 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 22:40:49.548 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.548 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 22:40:49 22:40:49.549 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 22:40:49 22:40:49.549 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@4f994c53 22:40:49 22:40:49.549 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 22:40:49 acks = -1 22:40:49 batch.size = 16384 22:40:49 bootstrap.servers = [localhost:9092] 22:40:49 buffer.memory = 33554432 22:40:49 client.dns.lookup = use_all_dns_ips 22:40:49 client.id = mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670 22:40:49 compression.type = none 22:40:49 connections.max.idle.ms = 540000 22:40:49 delivery.timeout.ms = 120000 22:40:49 enable.idempotence = true 22:40:49 interceptor.classes = [] 22:40:49 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 22:40:49 linger.ms = 0 22:40:49 max.block.ms = 60000 22:40:49 max.in.flight.requests.per.connection = 
5 22:40:49 max.request.size = 1048576 22:40:49 metadata.max.age.ms = 300000 22:40:49 metadata.max.idle.ms = 300000 22:40:49 metric.reporters = [] 22:40:49 metrics.num.samples = 2 22:40:49 metrics.recording.level = INFO 22:40:49 metrics.sample.window.ms = 30000 22:40:49 partitioner.adaptive.partitioning.enable = true 22:40:49 partitioner.availability.timeout.ms = 0 22:40:49 partitioner.class = null 22:40:49 partitioner.ignore.keys = false 22:40:49 receive.buffer.bytes = 32768 22:40:49 reconnect.backoff.max.ms = 1000 22:40:49 reconnect.backoff.ms = 50 22:40:49 request.timeout.ms = 30000 22:40:49 retries = 2147483647 22:40:49 retry.backoff.ms = 100 22:40:49 sasl.client.callback.handler.class = null 22:40:49 sasl.jaas.config = [hidden] 22:40:49 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:49 sasl.kerberos.min.time.before.relogin = 60000 22:40:49 sasl.kerberos.service.name = null 22:40:49 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:49 sasl.kerberos.ticket.renew.window.factor = 0.8 22:40:49 sasl.login.callback.handler.class = null 22:40:49 sasl.login.class = null 22:40:49 sasl.login.connect.timeout.ms = null 22:40:49 sasl.login.read.timeout.ms = null 22:40:49 sasl.login.refresh.buffer.seconds = 300 22:40:49 sasl.login.refresh.min.period.seconds = 60 22:40:49 sasl.login.refresh.window.factor = 0.8 22:40:49 sasl.login.refresh.window.jitter = 0.05 22:40:49 sasl.login.retry.backoff.max.ms = 10000 22:40:49 sasl.login.retry.backoff.ms = 100 22:40:49 sasl.mechanism = PLAIN 22:40:49 sasl.oauthbearer.clock.skew.seconds = 30 22:40:49 sasl.oauthbearer.expected.audience = null 22:40:49 sasl.oauthbearer.expected.issuer = null 22:40:49 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:49 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:49 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:49 sasl.oauthbearer.jwks.endpoint.url = null 22:40:49 sasl.oauthbearer.scope.claim.name = scope 22:40:49 sasl.oauthbearer.sub.claim.name = sub 22:40:49 sasl.oauthbearer.token.endpoint.url = null 22:40:49 security.protocol = SASL_PLAINTEXT 22:40:49 security.providers = null 22:40:49 send.buffer.bytes = 131072 22:40:49 socket.connection.setup.timeout.max.ms = 30000 22:40:49 socket.connection.setup.timeout.ms = 10000 22:40:49 ssl.cipher.suites = null 22:40:49 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 22:40:49 ssl.endpoint.identification.algorithm = https 22:40:49 ssl.engine.factory.class = null 22:40:49 ssl.key.password = null 22:40:49 ssl.keymanager.algorithm = SunX509 22:40:49 ssl.keystore.certificate.chain = null 22:40:49 ssl.keystore.key = null 22:40:49 ssl.keystore.location = null 22:40:49 ssl.keystore.password = null 22:40:49 ssl.keystore.type = JKS 22:40:49 ssl.protocol = TLSv1.3 22:40:49 ssl.provider = null 22:40:49 ssl.secure.random.implementation = null 22:40:49 ssl.trustmanager.algorithm = PKIX 22:40:49 ssl.truststore.certificates = null 22:40:49 ssl.truststore.location = null 22:40:49 ssl.truststore.password = null 22:40:49 ssl.truststore.type = JKS 22:40:49 transaction.timeout.ms = 60000 22:40:49 transactional.id = null 22:40:49 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 22:40:49 22:40:49 22:40:49.550 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Instantiated an idempotent producer. 
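[editor's note] The ProducerConfig dump above (bootstrap server localhost:9092, SASL_PLAINTEXT with the PLAIN mechanism, String key/value serializers, idempotence enabled) corresponds to a Kafka producer built roughly as in the sketch below. This is only an illustrative sketch of the standard Apache Kafka client API, not the sdc-distribution-client code itself; the client id and JAAS username/password values are placeholders, not values taken from this build.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerConfigSketch {
        // Illustrative only: builds a producer matching the settings logged above.
        // Credentials and client id below are placeholders.
        public static KafkaProducer<String, String> create() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "example-producer");          // client.id (placeholder)
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);               // enable.idempotence = true
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT"); // security.protocol
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");                           // sasl.mechanism = PLAIN
            props.put(SaslConfigs.SASL_JAAS_CONFIG,                                   // sasl.jaas.config = [hidden]
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"placeholder\" password=\"placeholder\";");
            return new KafkaProducer<>(props);
        }
    }

With no broker listening on localhost:9092 (as in this unit-test run), such a producer starts normally but its network thread logs the "Connection refused" / "Bootstrap broker ... disconnected" warnings seen below while it backs off and retries.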
22:40:49 22:40:49.552 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 22:40:49 22:40:49.552 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 22:40:49 22:40:49.552 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1731192049552 22:40:49 22:40:49.552 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Kafka producer started 22:40:49 DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 22:40:49 22:40:49.552 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.553 [main] INFO org.onap.sdc.impl.DistributionClientImpl - start DistributionClient 22:40:49 22:40:49.553 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 22:40:49.554 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.554 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 22:40:49.557 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.557 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 22:40:49.558 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.558 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 22:40:49 22:40:49.558 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 22:40:49 22:40:49.559 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.559 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 22:40:49.560 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.563 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 689512f0-5a3e-49c3-97fb-b00e6cd8ac47 url= /sdc/v1/artifactTypes 22:40:49 22:40:49.563 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://badhost:8080/sdc/v1/artifactTypes 22:40:49 22:40:49.565 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Starting Kafka producer I/O thread. 
22:40:49 22:40:49.565 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Transition from state UNINITIALIZED to INITIALIZING 22:40:49 22:40:49.565 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.565 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 22:40:49 22:40:49.565 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.565 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.566 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.566 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.579 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 22:40:49 java.net.UnknownHostException: proxy: System error 22:40:49 at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) 22:40:49 at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) 22:40:49 at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) 22:40:49 at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) 22:40:49 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) 22:40:49 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) 22:40:49 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) 22:40:49 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) 22:40:49 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) 22:40:49 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 22:40:49 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) 22:40:49 at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 22:40:49 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 22:40:49 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 22:40:49 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 22:40:49 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 22:40:49 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 22:40:49 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 22:40:49 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 22:40:49 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 22:40:49 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:299) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:128) 22:40:49 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 22:40:49 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$z7O3kCzl.invokeWithArguments(Unknown Source) 22:40:49 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 22:40:49 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 22:40:49 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 22:40:49 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 22:40:49 at org.mockito.Answers.answer(Answers.java:99) 22:40:49 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 22:40:49 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 22:40:49 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:116) 22:40:49 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:207) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 22:40:49 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 22:40:49 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 22:40:49 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 22:40:49 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 22:40:49 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 22:40:49 22:40:49.579 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC 
is org.onap.sdc.http.HttpSdcResponse@2a58992e 22:40:49 22:40:49.579 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 22:40:49 22:40:49.579 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 22:40:49 22:40:49.580 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.581 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 42dd4ae5-21e1-4e3a-8fd5-a74d425827f3 url= /sdc/v1/artifactTypes 22:40:49 22:40:49.581 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8181/sdc/v1/artifactTypes 22:40:49 22:40:49.581 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.582 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 22:40:49 java.net.UnknownHostException: proxy 22:40:49 at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) 22:40:49 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) 22:40:49 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) 22:40:49 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) 22:40:49 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) 22:40:49 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) 22:40:49 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 22:40:49 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) 22:40:49 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 22:40:49 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 22:40:49 at 
org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 22:40:49 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 22:40:49 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 22:40:49 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 22:40:49 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 22:40:49 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 22:40:49 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 22:40:49 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:299) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:128) 22:40:49 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 22:40:49 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$z7O3kCzl.invokeWithArguments(Unknown Source) 22:40:49 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 22:40:49 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 22:40:49 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 22:40:49 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 22:40:49 at org.mockito.Answers.answer(Answers.java:99) 22:40:49 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 22:40:49 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 22:40:49 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 22:40:49 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 22:40:49 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:116) 22:40:49 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:214) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 22:40:49 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 22:40:49 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 22:40:49 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 22:40:49 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 22:40:49 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 22:40:49 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 22:40:49 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 22:40:49 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 22:40:49 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 22:40:49 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 22:40:49 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 22:40:49 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 22:40:49 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 22:40:49 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 22:40:49 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 22:40:49 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 22:40:49 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 22:40:49 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 22:40:49 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 22:40:49 22:40:49.582 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] INFO org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Node -1 disconnected. 22:40:49 22:40:49.582 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@7c6843f 22:40:49 22:40:49.582 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 22:40:49 22:40:49.582 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 22:40:49 22:40:49.582 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 22:40:49 22:40:49.582 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 22:40:49 22:40:49.582 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 22:40:49 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.582 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.582 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 22:40:49.584 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.584 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 22:40:49.585 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 22:40:49 22:40:49.585 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 22:40:49 22:40:49.585 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. 
value should be greater than or equals to 15 22:40:49 22:40:49.585 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 22:40:49 22:40:49.585 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.585 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 22:40:49.587 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 22:40:49 22:40:49.587 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 22:40:49 22:40:49.587 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 22:40:49 22:40:49.587 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 22:40:49 22:40:49.587 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 22:40:49 22:40:49.587 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@6efe6f78 22:40:49 22:40:49.588 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 22:40:49 acks = -1 22:40:49 batch.size = 16384 22:40:49 bootstrap.servers = [localhost:9092] 22:40:49 buffer.memory = 33554432 22:40:49 client.dns.lookup = use_all_dns_ips 22:40:49 client.id = mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e 22:40:49 compression.type = none 22:40:49 connections.max.idle.ms = 540000 22:40:49 delivery.timeout.ms = 120000 22:40:49 enable.idempotence = true 22:40:49 interceptor.classes = [] 22:40:49 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 22:40:49 linger.ms = 0 22:40:49 max.block.ms = 60000 22:40:49 max.in.flight.requests.per.connection = 5 22:40:49 max.request.size = 1048576 22:40:49 metadata.max.age.ms = 300000 22:40:49 metadata.max.idle.ms = 300000 22:40:49 metric.reporters = [] 22:40:49 metrics.num.samples = 2 22:40:49 metrics.recording.level = INFO 22:40:49 metrics.sample.window.ms = 30000 22:40:49 partitioner.adaptive.partitioning.enable = true 22:40:49 partitioner.availability.timeout.ms = 0 22:40:49 partitioner.class = null 22:40:49 partitioner.ignore.keys = false 22:40:49 receive.buffer.bytes = 32768 22:40:49 reconnect.backoff.max.ms = 1000 22:40:49 reconnect.backoff.ms = 50 22:40:49 request.timeout.ms = 30000 22:40:49 retries = 2147483647 22:40:49 retry.backoff.ms = 100 22:40:49 sasl.client.callback.handler.class = null 22:40:49 sasl.jaas.config = [hidden] 22:40:49 sasl.kerberos.kinit.cmd = /usr/bin/kinit 22:40:49 sasl.kerberos.min.time.before.relogin = 60000 22:40:49 sasl.kerberos.service.name = null 22:40:49 sasl.kerberos.ticket.renew.jitter = 0.05 22:40:49 sasl.kerberos.ticket.renew.window.factor = 0.8 22:40:49 sasl.login.callback.handler.class = null 22:40:49 sasl.login.class = null 22:40:49 sasl.login.connect.timeout.ms = null 22:40:49 sasl.login.read.timeout.ms = null 22:40:49 sasl.login.refresh.buffer.seconds = 300 22:40:49 sasl.login.refresh.min.period.seconds = 60 22:40:49 sasl.login.refresh.window.factor = 0.8 22:40:49 sasl.login.refresh.window.jitter = 0.05 22:40:49 sasl.login.retry.backoff.max.ms = 10000 22:40:49 sasl.login.retry.backoff.ms = 100 22:40:49 sasl.mechanism = PLAIN 22:40:49 sasl.oauthbearer.clock.skew.seconds = 30 22:40:49 sasl.oauthbearer.expected.audience = null 22:40:49 sasl.oauthbearer.expected.issuer = null 22:40:49 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 22:40:49 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 22:40:49 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 22:40:49 sasl.oauthbearer.jwks.endpoint.url = null 22:40:49 sasl.oauthbearer.scope.claim.name = scope 22:40:49 sasl.oauthbearer.sub.claim.name = sub 22:40:49 sasl.oauthbearer.token.endpoint.url = null 22:40:49 security.protocol = SASL_PLAINTEXT 22:40:49 security.providers = null 22:40:49 send.buffer.bytes = 131072 22:40:49 socket.connection.setup.timeout.max.ms = 30000 22:40:49 socket.connection.setup.timeout.ms = 10000 22:40:49 ssl.cipher.suites = null 22:40:49 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 22:40:49 ssl.endpoint.identification.algorithm = https 22:40:49 ssl.engine.factory.class = null 22:40:49 ssl.key.password = null 22:40:49 ssl.keymanager.algorithm = SunX509 22:40:49 ssl.keystore.certificate.chain = null 22:40:49 ssl.keystore.key = null 22:40:49 ssl.keystore.location = null 22:40:49 ssl.keystore.password = null 22:40:49 ssl.keystore.type = JKS 22:40:49 ssl.protocol = TLSv1.3 22:40:49 ssl.provider = null 22:40:49 ssl.secure.random.implementation = null 22:40:49 ssl.trustmanager.algorithm = PKIX 22:40:49 ssl.truststore.certificates = null 22:40:49 ssl.truststore.location = null 22:40:49 ssl.truststore.password = null 22:40:49 ssl.truststore.type = JKS 22:40:49 transaction.timeout.ms = 60000 22:40:49 transactional.id = null 22:40:49 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 22:40:49 22:40:49 22:40:49.588 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Instantiated an idempotent producer. 22:40:49 22:40:49.599 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 22:40:49 22:40:49.599 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 22:40:49 22:40:49.599 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1731192049599 22:40:49 22:40:49.599 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Kafka producer started 22:40:49 22:40:49.600 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Starting Kafka producer I/O thread. 
22:40:49 22:40:49.600 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Transition from state UNINITIALIZED to INITIALIZING 22:40:49 22:40:49.600 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.605 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 22:40:49 22:40:49.605 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.605 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.606 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.606 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.610 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at 
org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.610 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Node -1 disconnected. 22:40:49 22:40:49.610 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 22:40:49 22:40:49.610 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 22:40:49 22:40:49.610 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 22:40:49 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.616 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:49 22:40:49.616 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:49 22:40:49.619 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.620 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.620 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.620 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Give up sending metadata request since no node is available 22:40:49 Configuration [sdcAddress=localhost:8443, user=mso-user, password=password, useHttpsWithSDC=true, pollingInterval=15, sdcStatusTopicName=SDC-DISTR-STATUS-TOPIC-AUTO, sdcNotificationTopicName=SDC-DISTR-NOTIF-TOPIC-AUTO, pollingTimeout=20, relevantArtifactTypes=[HEAT], consumerGroup=mso-group, environmentName=PROD, comsumerID=mso-123456, keyStorePath=src/test/resources/etc/sdc-user-keystore.jks, trustStorePath=src/test/resources/etc/sdc-user-truststore.jks, activateServerTLSAuth=true, filterInEmptyResources=false, consumeProduceStatusTopic=false, useSystemProxy=false, httpProxyHost=proxy, httpProxyPort=8080, httpsProxyHost=null, httpsProxyPort=0] 22:40:49 22:40:49.646 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.653 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 22:40:49 22:40:49.654 [main] ERROR 
org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 22:40:49 22:40:49.654 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 22:40:49 22:40:49.654 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 22:40:49 [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.371 s - in org.onap.sdc.impl.DistributionClientTest 22:40:49 22:40:49.670 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.671 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.671 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Give up sending metadata request since no node is available 22:40:49 22:40:49.682 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.682 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 22:40:49 22:40:49.682 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.682 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.683 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.683 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.683 [kafka-producer-network-thread | 
mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.684 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Node -1 disconnected. 22:40:49 22:40:49.684 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 22:40:49 22:40:49.684 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 22:40:49 22:40:49.684 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 22:40:49 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.710 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.711 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 22:40:49 22:40:49.711 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.711 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.711 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Set SASL client state to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.711 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.712 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at 
org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.712 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Node -1 disconnected. 22:40:49 22:40:49.712 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 22:40:49 22:40:49.712 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 22:40:49 22:40:49.712 [kafka-producer-network-thread | mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-16501895-ca7a-439e-9391-c7892701431e] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 22:40:49 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.716 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] Give up sending metadata request since no node is available 22:40:49 22:40:49.716 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f602b769-9f55-4ea1-8cc7-97c74a0ca415, groupId=mso-group] No broker available to send FindCoordinator request 22:40:49 22:40:49.720 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.721 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.721 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Give up sending metadata request since no node is available 22:40:49 22:40:49.770 [kafka-producer-network-thread | mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-d145f6b7-99f8-4d35-97d5-4bd60a6c66bb] Give up sending metadata request since no node is available 22:40:49 22:40:49.771 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 22:40:49 22:40:49.771 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 22:40:49 22:40:49.771 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 22:40:49 22:40:49.771 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Set SASL client state 
to SEND_APIVERSIONS_REQUEST 22:40:49 22:40:49.772 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 22:40:49 22:40:49.772 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 22:40:49 java.net.ConnectException: Connection refused 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 22:40:49 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 22:40:49 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 22:40:49 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 22:40:49 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 22:40:49 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 22:40:49 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.772 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Node -1 disconnected. 22:40:49 22:40:49.772 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 22:40:49 22:40:49.772 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 22:40:49 22:40:49.772 [kafka-producer-network-thread | mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-904f645b-c254-4444-bf4e-9090f7538fd1] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 22:40:49 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
22:40:49 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 22:40:49 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 22:40:49 at java.base/java.lang.Thread.run(Thread.java:829) 22:40:49 22:40:49.784 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.784 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 22:40:49 22:40:49.784 [kafka-producer-network-thread | mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c83a5bd0-cd66-417b-b438-64c8924a0670] Give up sending metadata request since no node is available 22:40:50 [INFO] 22:40:50 [INFO] Results: 22:40:50 [INFO] 22:40:50 [INFO] Tests run: 70, Failures: 0, Errors: 0, Skipped: 0 22:40:50 [INFO] 22:40:50 [INFO] 22:40:50 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client --- 22:40:50 [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/code-coverage/jacoco-ut.exec 22:40:50 [INFO] Analyzed bundle 'sdc-distribution-client' with 45 classes 22:40:50 [INFO] 22:40:50 [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client --- 22:40:50 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sdc-distribution-client-2.1.1-SNAPSHOT.jar 22:40:50 [INFO] 22:40:50 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client --- 22:40:50 [INFO] No previous run data found, generating javadoc. 22:40:53 [INFO] 22:40:53 Loading source files for package org.onap.sdc.api.consumer... 22:40:53 Loading source files for package org.onap.sdc.api... 22:40:53 Loading source files for package org.onap.sdc.api.notification... 22:40:53 Loading source files for package org.onap.sdc.api.results... 22:40:53 Loading source files for package org.onap.sdc.http... 22:40:53 Loading source files for package org.onap.sdc.utils... 22:40:53 Loading source files for package org.onap.sdc.utils.kafka... 22:40:53 Loading source files for package org.onap.sdc.utils.heat... 22:40:53 Loading source files for package org.onap.sdc.impl... 22:40:53 Constructing Javadoc information... 22:40:53 Standard Doclet version 11.0.16 22:40:53 Building tree for all the packages and classes... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionClient.html... 
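Note: the burst of java.net.ConnectException / "Connection refused" stack traces above is expected in this build. The unit tests construct Kafka producer and consumer clients (client ids prefixed mso-123456-*, consumer group mso-group) pointed at a bootstrap server on localhost:9092 where no broker is listening, so every metadata/InitProducerId attempt is retried and logged at DEBUG/WARN without failing any test (Tests run: 70, Failures: 0 above). The InitProducerIdRequestData with transactionalId=null is the idempotent-producer handshake that recent kafka-clients (3.3.1 here, per the dependency tree later in this log) performs by default. Below is a minimal sketch of the kind of client configuration these log lines imply -- bootstrap localhost:9092 and SASL/PLAIN over plaintext, per "Creating SaslClient: ... mechs=[PLAIN]"; the credentials and serializers are placeholders, not taken from the build:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerConfigSketch {
        public static KafkaProducer<String, String> build() {
            Properties p = new Properties();
            // The tests evidently target a local bootstrap with no broker running,
            // which is why every connection attempt above ends in "Connection refused".
            p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            p.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer"); // prefix seen in the log
            p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // SASL/PLAIN over a plaintext channel, matching "mechs=[PLAIN]" in the SaslClientAuthenticator output.
            p.put("security.protocol", "SASL_PLAINTEXT");
            p.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            p.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"user\" password=\"secret\";"); // placeholder credentials
            return new KafkaProducer<>(p);
        }
    }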
22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionStatusMessageJsonBuilder.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IComponentDoneStatusMessage.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IConfiguration.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessage.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessageBasic.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IFinalDistrStatusMessage.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/INotificationCallback.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IStatusCallback.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IArtifactInfo.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/INotificationData.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IResourceInstance.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IStatusData.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IVfModuleMetadata.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientDownloadResult.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientResult.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpClientFactory.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpRequestFactory.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClient.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClientException.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcResponse.html... 
22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/IHttpSdcClient.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcConnectorClient.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcUrls.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/Configuration.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ConfigurationValidator.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientDownloadResultImpl.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientFactory.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientImpl.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientResultImpl.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionStatusMessageJsonBuilderFactory.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/StatusDataImpl.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/ArtifactTypeEnum.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/CaseInsensitiveMap.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionActionResultEnum.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionClientConstants.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionStatusEnum.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/GeneralUtils.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/NotificationSender.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Pair.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Wrapper.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/YamlToObjectConverter.html... 
22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatConfiguration.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameter.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameterConstraint.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParser.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaCommonConfig.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaDataResponse.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaConsumer.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaProducer.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-summary.html... 
22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/constant-values.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/serialized-form.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessage.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessageBasic.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IStatusCallback.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IFinalDistrStatusMessage.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/INotificationCallback.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IComponentDoneStatusMessage.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IConfiguration.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionClient.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionStatusMessageJsonBuilder.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IArtifactInfo.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IVfModuleMetadata.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IResourceInstance.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IStatusData.html... 
22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/INotificationData.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientDownloadResult.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientResult.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClient.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClientException.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcUrls.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpClientFactory.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpRequestFactory.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcResponse.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcConnectorClient.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/IHttpSdcClient.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/NotificationSender.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/CaseInsensitiveMap.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Wrapper.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/YamlToObjectConverter.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionActionResultEnum.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionClientConstants.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/GeneralUtils.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionStatusEnum.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/ArtifactTypeEnum.html... 
22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Pair.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaConsumer.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaProducer.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaCommonConfig.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaDataResponse.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameterConstraint.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatConfiguration.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameter.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParser.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientFactory.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionStatusMessageJsonBuilderFactory.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientResultImpl.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ConfigurationValidator.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientDownloadResultImpl.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/StatusDataImpl.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientImpl.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/Configuration.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-use.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-use.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-use.html... 
22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-use.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-use.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-use.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-use.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-use.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-use.html... 22:40:53 Building index for all the packages and classes... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/overview-tree.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/index-all.html... 22:40:53 Building index for all classes... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/allclasses-index.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/allpackages-index.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/deprecated-list.html... 22:40:53 Building index for all classes... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/allclasses.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/allclasses.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/index.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/overview-summary.html... 22:40:53 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/help-doc.html... 22:40:53 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sdc-distribution-client-2.1.1-SNAPSHOT-javadoc.jar 22:40:53 [INFO] 22:40:53 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client --- 22:40:53 [INFO] failsafeArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:40:53 [INFO] 22:40:53 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client --- 22:40:53 [INFO] 22:40:53 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client --- 22:40:53 [INFO] Skipping JaCoCo execution due to missing execution data file. 
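Note: the javadoc pass above enumerates the client's public surface (IDistributionClient, IConfiguration, INotificationCallback, DistributionClientFactory, IDistributionClientResult, DistributionActionResultEnum, ...). A rough sketch of how an application typically wires these together follows; the method names (createDistributionClient, init, start, stop, getDistributionActionResult) are assumptions inferred from the class names and messages in this log, not verified against the 2.1.1 sources:

    import org.onap.sdc.api.IDistributionClient;
    import org.onap.sdc.api.consumer.IConfiguration;
    import org.onap.sdc.api.consumer.INotificationCallback;
    import org.onap.sdc.api.results.IDistributionClientResult;
    import org.onap.sdc.impl.DistributionClientFactory;
    import org.onap.sdc.utils.DistributionActionResultEnum;

    public class ClientUsageSketch {
        // config and callback are application-supplied implementations of the interfaces above.
        static IDistributionClient startClient(IConfiguration config, INotificationCallback callback) {
            IDistributionClient client = DistributionClientFactory.createDistributionClient(); // assumed factory method
            IDistributionClientResult result = client.init(config, callback);                  // assumed signature
            if (result.getDistributionActionResult() == DistributionActionResultEnum.SUCCESS) { // getter/constant assumed
                client.start(); // begin polling for SDC distribution notifications
            } else {
                // e.g. CONF_MISSING_USERNAME, as produced by the configuration validation in the test output above
                client.stop();  // matches "stop DistributionClient" seen earlier in this log
            }
            return client;
        }
    }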
22:40:53 [INFO] 22:40:53 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client --- 22:40:53 [INFO] 22:40:53 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client --- 22:40:53 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sdc-distribution-client-2.1.1-SNAPSHOT.jar to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.1-SNAPSHOT/sdc-distribution-client-2.1.1-SNAPSHOT.jar 22:40:53 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/pom.xml to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.1-SNAPSHOT/sdc-distribution-client-2.1.1-SNAPSHOT.pom 22:40:53 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sdc-distribution-client-2.1.1-SNAPSHOT-javadoc.jar to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.1-SNAPSHOT/sdc-distribution-client-2.1.1-SNAPSHOT-javadoc.jar 22:40:53 [INFO] 22:40:53 [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ sdc-distribution-client --- 22:40:53 [INFO] org.onap.sdc.sdc-distribution-client:sdc-distribution-client:jar:2.1.1-SNAPSHOT 22:40:53 [INFO] +- org.apache.kafka:kafka-clients:jar:3.3.1:compile 22:40:53 [INFO] | +- com.github.luben:zstd-jni:jar:1.5.2-1:runtime 22:40:53 [INFO] | +- org.lz4:lz4-java:jar:1.8.0:runtime 22:40:53 [INFO] | \- org.xerial.snappy:snappy-java:jar:1.1.8.4:runtime 22:40:53 [INFO] +- com.fasterxml.jackson.core:jackson-core:jar:2.15.2:compile 22:40:53 [INFO] +- com.fasterxml.jackson.core:jackson-databind:jar:2.15.2:compile 22:40:53 [INFO] +- com.fasterxml.jackson.core:jackson-annotations:jar:2.15.2:compile 22:40:53 [INFO] +- org.projectlombok:lombok:jar:1.18.24:compile 22:40:53 [INFO] +- org.json:json:jar:20220320:compile 22:40:53 [INFO] +- org.slf4j:slf4j-api:jar:1.7.30:compile 22:40:53 [INFO] +- com.google.code.gson:gson:jar:2.8.9:compile 22:40:53 [INFO] +- org.functionaljava:functionaljava:jar:4.8:compile 22:40:53 [INFO] +- commons-io:commons-io:jar:2.8.0:compile 22:40:53 [INFO] +- org.apache.httpcomponents:httpclient:jar:4.5.13:compile 22:40:53 [INFO] | \- commons-logging:commons-logging:jar:1.2:compile 22:40:53 [INFO] +- org.yaml:snakeyaml:jar:1.30:compile 22:40:53 [INFO] +- org.apache.httpcomponents:httpcore:jar:4.4.15:compile 22:40:53 [INFO] +- com.google.guava:guava:jar:32.1.2-jre:compile 22:40:53 [INFO] | +- com.google.guava:failureaccess:jar:1.0.1:compile 22:40:53 [INFO] | +- com.google.guava:listenablefuture:jar:9999.0-empty-to-avoid-conflict-with-guava:compile 22:40:53 [INFO] | +- com.google.code.findbugs:jsr305:jar:3.0.2:compile 22:40:53 [INFO] | +- org.checkerframework:checker-qual:jar:3.33.0:compile 22:40:53 [INFO] | +- com.google.errorprone:error_prone_annotations:jar:2.18.0:compile 22:40:53 [INFO] | \- com.google.j2objc:j2objc-annotations:jar:2.8:compile 22:40:53 [INFO] +- org.eclipse.jetty:jetty-servlet:jar:9.4.48.v20220622:test 22:40:53 [INFO] | \- org.eclipse.jetty:jetty-util-ajax:jar:9.4.48.v20220622:test 22:40:53 [INFO] +- org.eclipse.jetty:jetty-webapp:jar:9.4.48.v20220622:test 22:40:53 [INFO] | \- org.eclipse.jetty:jetty-xml:jar:9.4.48.v20220622:test 22:40:53 [INFO] | \- org.eclipse.jetty:jetty-util:jar:9.4.48.v20220622:test 22:40:53 [INFO] +- org.junit.jupiter:junit-jupiter:jar:5.7.2:test 22:40:53 [INFO] | +- org.junit.jupiter:junit-jupiter-api:jar:5.7.2:test 22:40:53 [INFO] | | +- 
org.apiguardian:apiguardian-api:jar:1.1.0:test 22:40:53 [INFO] | | \- org.opentest4j:opentest4j:jar:1.2.0:test 22:40:53 [INFO] | +- org.junit.jupiter:junit-jupiter-params:jar:5.7.2:test 22:40:53 [INFO] | \- org.junit.jupiter:junit-jupiter-engine:jar:5.7.2:test 22:40:53 [INFO] | \- org.junit.platform:junit-platform-engine:jar:1.7.2:test 22:40:53 [INFO] +- org.mockito:mockito-junit-jupiter:jar:3.12.4:test 22:40:53 [INFO] +- org.mockito:mockito-inline:jar:3.12.4:test 22:40:53 [INFO] +- org.junit-pioneer:junit-pioneer:jar:1.4.2:test 22:40:53 [INFO] | +- org.junit.platform:junit-platform-commons:jar:1.7.1:test 22:40:53 [INFO] | \- org.junit.platform:junit-platform-launcher:jar:1.7.1:test 22:40:53 [INFO] +- org.mockito:mockito-core:jar:3.12.4:test 22:40:53 [INFO] | +- net.bytebuddy:byte-buddy:jar:1.11.13:test 22:40:53 [INFO] | +- net.bytebuddy:byte-buddy-agent:jar:1.11.13:test 22:40:53 [INFO] | \- org.objenesis:objenesis:jar:3.2:test 22:40:53 [INFO] +- com.google.code.bean-matchers:bean-matchers:jar:0.12:test 22:40:53 [INFO] | \- org.hamcrest:hamcrest:jar:2.2:test 22:40:53 [INFO] +- org.assertj:assertj-core:jar:3.18.1:test 22:40:53 [INFO] +- io.github.hakky54:logcaptor:jar:2.7.10:test 22:40:53 [INFO] | +- ch.qos.logback:logback-classic:jar:1.2.3:test 22:40:53 [INFO] | | \- ch.qos.logback:logback-core:jar:1.2.3:test 22:40:53 [INFO] | +- org.apache.logging.log4j:log4j-to-slf4j:jar:2.17.2:test 22:40:53 [INFO] | | \- org.apache.logging.log4j:log4j-api:jar:2.17.2:test 22:40:53 [INFO] | \- org.slf4j:jul-to-slf4j:jar:1.7.36:test 22:40:53 [INFO] +- com.salesforce.kafka.test:kafka-junit5:jar:3.2.4:test 22:40:53 [INFO] | +- com.salesforce.kafka.test:kafka-junit-core:jar:3.2.4:test 22:40:53 [INFO] | \- org.apache.curator:curator-test:jar:2.12.0:test 22:40:53 [INFO] | \- org.javassist:javassist:jar:3.18.1-GA:test 22:40:53 [INFO] \- org.apache.kafka:kafka_2.13:jar:3.3.1:test 22:40:53 [INFO] +- org.scala-lang:scala-library:jar:2.13.8:test 22:40:53 [INFO] +- org.apache.kafka:kafka-server-common:jar:3.3.1:test 22:40:53 [INFO] +- org.apache.kafka:kafka-metadata:jar:3.3.1:test 22:40:53 [INFO] +- org.apache.kafka:kafka-raft:jar:3.3.1:test 22:40:53 [INFO] +- org.apache.kafka:kafka-storage:jar:3.3.1:test 22:40:53 [INFO] | \- org.apache.kafka:kafka-storage-api:jar:3.3.1:test 22:40:53 [INFO] +- net.sourceforge.argparse4j:argparse4j:jar:0.7.0:test 22:40:53 [INFO] +- net.sf.jopt-simple:jopt-simple:jar:5.0.4:test 22:40:53 [INFO] +- org.bitbucket.b_c:jose4j:jar:0.7.9:test 22:40:53 [INFO] +- com.yammer.metrics:metrics-core:jar:2.2.0:test 22:40:53 [INFO] +- org.scala-lang.modules:scala-collection-compat_2.13:jar:2.6.0:test 22:40:53 [INFO] +- org.scala-lang.modules:scala-java8-compat_2.13:jar:1.0.2:test 22:40:53 [INFO] +- org.scala-lang:scala-reflect:jar:2.13.8:test 22:40:53 [INFO] +- com.typesafe.scala-logging:scala-logging_2.13:jar:3.9.4:test 22:40:53 [INFO] +- io.dropwizard.metrics:metrics-core:jar:4.1.12.1:test 22:40:53 [INFO] +- org.apache.zookeeper:zookeeper:jar:3.6.3:test 22:40:53 [INFO] | +- org.apache.zookeeper:zookeeper-jute:jar:3.6.3:test 22:40:53 [INFO] | +- org.apache.yetus:audience-annotations:jar:0.5.0:test 22:40:53 [INFO] | +- io.netty:netty-handler:jar:4.1.63.Final:test 22:40:53 [INFO] | | +- io.netty:netty-common:jar:4.1.63.Final:test 22:40:53 [INFO] | | +- io.netty:netty-resolver:jar:4.1.63.Final:test 22:40:53 [INFO] | | +- io.netty:netty-buffer:jar:4.1.63.Final:test 22:40:53 [INFO] | | +- io.netty:netty-transport:jar:4.1.63.Final:test 22:40:53 [INFO] | | \- io.netty:netty-codec:jar:4.1.63.Final:test 
22:40:53 [INFO] | \- io.netty:netty-transport-native-epoll:jar:4.1.63.Final:test 22:40:53 [INFO] | \- io.netty:netty-transport-native-unix-common:jar:4.1.63.Final:test 22:40:53 [INFO] \- commons-cli:commons-cli:jar:1.4:test 22:40:53 [INFO] 22:40:53 [INFO] --- clm-maven-plugin:2.48.3-01:index (default-cli) @ sdc-distribution-client --- 22:40:53 [INFO] Saved module information to /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sonatype-clm/module.xml 22:40:53 [INFO] 22:40:53 [INFO] ------< org.onap.sdc.sdc-distribution-client:sdc-distribution-ci >------ 22:40:53 [INFO] Building sdc-distribution-ci 2.1.1-SNAPSHOT [3/3] 22:40:53 [INFO] --------------------------------[ jar ]--------------------------------- 22:40:54 [INFO] 22:40:54 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-ci --- 22:40:54 [INFO] 22:40:54 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-ci --- 22:40:54 [INFO] 22:40:54 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-ci --- 22:40:54 [INFO] 22:40:54 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-ci --- 22:40:54 [INFO] surefireArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:40:54 [INFO] 22:40:54 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-ci --- 22:40:54 [INFO] argLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:40:54 [INFO] 22:40:54 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-ci --- 22:40:54 [INFO] 22:40:54 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-ci --- 22:40:54 [INFO] 22:40:54 [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-ci --- 22:40:54 [INFO] Using 'UTF-8' encoding to copy filtered resources. 22:40:54 [INFO] Copying 1 resource 22:40:54 [INFO] 22:40:54 [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-ci --- 22:40:54 [INFO] Changes detected - recompiling the module! 22:40:54 [INFO] Compiling 10 source files to /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/classes 22:40:54 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java uses or overrides a deprecated API. 22:40:54 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: Recompile with -Xlint:deprecation for details. 22:40:54 [INFO] 22:40:54 [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-ci --- 22:40:54 [INFO] Using 'UTF-8' encoding to copy filtered resources. 
22:40:54 [INFO] Copying 2 resources 22:40:54 [INFO] 22:40:54 [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-ci --- 22:40:54 [INFO] Changes detected - recompiling the module! 22:40:54 [INFO] Compiling 2 source files to /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/test-classes 22:40:54 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java uses or overrides a deprecated API. 22:40:54 [INFO] /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: Recompile with -Xlint:deprecation for details. 22:40:54 [INFO] 22:40:54 [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-ci --- 22:40:54 [INFO] 22:40:54 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-ci --- 22:40:54 [INFO] Skipping JaCoCo execution due to missing execution data file. 22:40:54 [INFO] 22:40:54 [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-ci --- 22:40:54 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/client-initialization.jar 22:40:54 [INFO] 22:40:54 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-ci --- 22:40:54 [INFO] No previous run data found, generating javadoc. 22:40:56 [INFO] 22:40:56 Loading source files for package org.onap.test.core.service... 22:40:56 Loading source files for package org.onap.test.core.config... 22:40:56 Loading source files for package org.onap.test.it... 22:40:56 Constructing Javadoc information... 22:40:56 Standard Doclet version 11.0.16 22:40:56 Building tree for all the packages and classes... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/ArtifactTypeEnum.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/DistributionClientConfig.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsDownloader.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsValidator.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientInitializer.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientNotifyCallback.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/DistributionStatusMessage.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationMessage.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationResult.html... 
22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/it/RegisterToSdcTopicIT.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-summary.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-tree.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-summary.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-tree.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-summary.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-tree.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/constant-values.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsDownloader.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientInitializer.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationResult.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationMessage.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsValidator.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/DistributionStatusMessage.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientNotifyCallback.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/DistributionClientConfig.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/ArtifactTypeEnum.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/it/class-use/RegisterToSdcTopicIT.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-use.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-use.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-use.html... 22:40:56 Building index for all the packages and classes... 
22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/overview-tree.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/index-all.html... 22:40:56 Building index for all classes... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/allclasses-index.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/allpackages-index.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/deprecated-list.html... 22:40:56 Building index for all classes... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/allclasses.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/allclasses.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/index.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/overview-summary.html... 22:40:56 Generating /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/help-doc.html... 22:40:56 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/client-initialization-javadoc.jar 22:40:56 [INFO] 22:40:56 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-ci --- 22:40:56 [INFO] failsafeArgLine set to -javaagent:/tmp/r/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 22:40:56 [INFO] 22:40:56 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-ci --- 22:40:56 [INFO] 22:40:56 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-ci --- 22:40:56 [INFO] Skipping JaCoCo execution due to missing execution data file. 
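Note on the JaCoCo messages above: the jacoco-maven-plugin prepare-agent execution injects the JaCoCo runtime agent into failsafeArgLine, with destfile pointing at target/code-coverage/jacoco-it.exec. No test execution is logged for the surefire or failsafe goals in this run, so that file is never written and both report goals skip with "missing execution data file". As a rough illustration of what such an execution data file contains, here is a minimal, self-contained sketch against the org.jacoco.core API; the file path and any class names it would print are placeholders, not values from this build.

import java.io.File;
import java.io.IOException;

import org.jacoco.core.data.ExecutionData;
import org.jacoco.core.tools.ExecFileLoader;

// Illustrative sketch only (assumes org.jacoco:org.jacoco.core on the classpath):
// lists, per class recorded in a JaCoCo exec file, how many coverage probes were hit.
public class JacocoExecDump {
    public static void main(String[] args) throws IOException {
        // Placeholder path matching the destfile configured by prepare-agent above.
        File execFile = new File("target/code-coverage/jacoco-it.exec");
        ExecFileLoader loader = new ExecFileLoader();
        loader.load(execFile); // fails if the file is absent - the condition that made the report goals skip
        for (ExecutionData data : loader.getExecutionDataStore().getContents()) {
            boolean[] probes = data.getProbes();
            int hit = 0;
            for (boolean p : probes) {
                if (p) {
                    hit++;
                }
            }
            System.out.printf("%s: %d/%d probes hit%n", data.getName(), hit, probes.length);
        }
    }
}

Run against an existing .exec file from a build where tests did execute, it simply enumerates the per-class probe data that the report goals turn into coverage reports.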
22:40:56 [INFO] 22:40:56 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-ci --- 22:40:56 [INFO] 22:40:56 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-ci --- 22:40:56 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/client-initialization.jar to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.1-SNAPSHOT/sdc-distribution-ci-2.1.1-SNAPSHOT.jar 22:40:56 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/pom.xml to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.1-SNAPSHOT/sdc-distribution-ci-2.1.1-SNAPSHOT.pom 22:40:56 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/client-initialization-javadoc.jar to /tmp/r/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.1-SNAPSHOT/sdc-distribution-ci-2.1.1-SNAPSHOT-javadoc.jar 22:40:56 [INFO] 22:40:56 [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ sdc-distribution-ci --- 22:40:56 [INFO] org.onap.sdc.sdc-distribution-client:sdc-distribution-ci:jar:2.1.1-SNAPSHOT 22:40:56 [INFO] +- org.testcontainers:kafka:jar:1.17.1:test 22:40:56 [INFO] +- org.onap.sdc.sdc-distribution-client:sdc-distribution-client:jar:2.1.1-SNAPSHOT:compile 22:40:56 [INFO] | +- org.apache.kafka:kafka-clients:jar:3.3.1:compile 22:40:56 [INFO] | | +- com.github.luben:zstd-jni:jar:1.5.2-1:runtime 22:40:56 [INFO] | | +- org.lz4:lz4-java:jar:1.8.0:runtime 22:40:56 [INFO] | | \- org.xerial.snappy:snappy-java:jar:1.1.8.4:runtime 22:40:56 [INFO] | +- org.projectlombok:lombok:jar:1.18.24:compile 22:40:56 [INFO] | +- org.json:json:jar:20220320:compile 22:40:56 [INFO] | +- com.google.code.gson:gson:jar:2.8.9:compile 22:40:56 [INFO] | +- org.functionaljava:functionaljava:jar:4.8:compile 22:40:56 [INFO] | +- commons-io:commons-io:jar:2.8.0:compile 22:40:56 [INFO] | \- org.yaml:snakeyaml:jar:1.30:compile 22:40:56 [INFO] +- ch.qos.logback:logback-classic:jar:1.2.11:test 22:40:56 [INFO] | \- ch.qos.logback:logback-core:jar:1.2.11:test 22:40:56 [INFO] +- org.slf4j:slf4j-api:jar:1.7.36:compile 22:40:56 [INFO] +- org.junit.jupiter:junit-jupiter-api:jar:5.7.2:test 22:40:56 [INFO] | +- org.apiguardian:apiguardian-api:jar:1.1.0:test 22:40:56 [INFO] | +- org.opentest4j:opentest4j:jar:1.2.0:test 22:40:56 [INFO] | \- org.junit.platform:junit-platform-commons:jar:1.7.2:test 22:40:56 [INFO] +- org.junit.jupiter:junit-jupiter-params:jar:5.7.2:test 22:40:56 [INFO] +- org.junit.jupiter:junit-jupiter-engine:jar:5.7.2:test 22:40:56 [INFO] | \- org.junit.platform:junit-platform-engine:jar:1.7.2:test 22:40:56 [INFO] +- org.testcontainers:testcontainers:jar:1.17.1:test 22:40:56 [INFO] | +- org.apache.commons:commons-compress:jar:1.21:test 22:40:56 [INFO] | +- org.rnorth.duct-tape:duct-tape:jar:1.0.8:test 22:40:56 [INFO] | | \- org.jetbrains:annotations:jar:17.0.0:test 22:40:56 [INFO] | +- com.github.docker-java:docker-java-api:jar:3.2.13:test 22:40:56 [INFO] | \- com.github.docker-java:docker-java-transport-zerodep:jar:3.2.13:test 22:40:56 [INFO] | +- com.github.docker-java:docker-java-transport:jar:3.2.13:test 22:40:56 [INFO] | \- net.java.dev.jna:jna:jar:5.8.0:test 22:40:56 [INFO] +- org.junit.vintage:junit-vintage-engine:jar:5.7.2:test 22:40:56 [INFO] | \- junit:junit:jar:4.13:test 22:40:56 [INFO] | \- org.hamcrest:hamcrest-core:jar:1.3:test 22:40:56 [INFO] +- org.testcontainers:junit-jupiter:jar:1.17.1:test 22:40:56 [INFO] +- 
com.fasterxml.jackson.core:jackson-annotations:jar:2.15.2:compile 22:40:56 [INFO] +- org.mockito:mockito-core:jar:3.12.4:test 22:40:56 [INFO] | +- net.bytebuddy:byte-buddy:jar:1.11.13:test 22:40:56 [INFO] | +- net.bytebuddy:byte-buddy-agent:jar:1.11.13:test 22:40:56 [INFO] | \- org.objenesis:objenesis:jar:3.2:test 22:40:56 [INFO] +- org.assertj:assertj-core:jar:3.23.1:test 22:40:56 [INFO] +- org.mockito:mockito-junit-jupiter:jar:3.12.4:test 22:40:56 [INFO] +- org.awaitility:awaitility:jar:4.2.0:test 22:40:56 [INFO] | \- org.hamcrest:hamcrest:jar:2.1:test 22:40:56 [INFO] +- org.apache.httpcomponents:httpclient:jar:4.5.13:runtime 22:40:56 [INFO] | +- org.apache.httpcomponents:httpcore:jar:4.4.13:runtime 22:40:56 [INFO] | +- commons-logging:commons-logging:jar:1.2:runtime 22:40:56 [INFO] | \- commons-codec:commons-codec:jar:1.11:runtime 22:40:56 [INFO] \- org.junit-pioneer:junit-pioneer:jar:1.4.2:test 22:40:56 [INFO] \- org.junit.platform:junit-platform-launcher:jar:1.7.1:test 22:40:56 [INFO] 22:40:56 [INFO] --- clm-maven-plugin:2.48.3-01:index (default-cli) @ sdc-distribution-ci --- 22:40:56 [INFO] Saved module information to /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/sonatype-clm/module.xml 22:40:56 [INFO] ------------------------------------------------------------------------ 22:40:56 [INFO] Reactor Summary: 22:40:56 [INFO] 22:40:56 [INFO] sdc-sdc-distribution-client 2.1.1-SNAPSHOT ......... SUCCESS [ 11.011 s] 22:40:56 [INFO] sdc-distribution-client ............................ SUCCESS [ 54.951 s] 22:40:56 [INFO] sdc-distribution-ci 2.1.1-SNAPSHOT ................. SUCCESS [ 3.165 s] 22:40:56 [INFO] ------------------------------------------------------------------------ 22:40:56 [INFO] BUILD SUCCESS 22:40:56 [INFO] ------------------------------------------------------------------------ 22:40:56 [INFO] Total time: 01:11 min 22:40:56 [INFO] Finished at: 2024-11-09T22:40:56Z 22:40:56 [INFO] ------------------------------------------------------------------------ 22:40:56 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins14005326840390653459.sh 22:40:56 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/sh -xe /tmp/jenkins10529711462957455745.sh 22:40:56 + find . -regex .*karaf/target 22:40:56 + xargs rm -rf 22:41:00 [INFO] 2024-11-09T22:41:00.718Z Scanning application onap-sdc-sdc-distribution-client. 
22:41:00 [INFO] Discovered commit hash '09a9061fb4eef8a7b54fb35ae9391837939ea155' via environment variable GIT_COMMIT 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/member-search-index.zip 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/package-search-index.zip 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/type-search-index.zip 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/client-initialization-javadoc.jar 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/client-initialization.jar 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/member-search-index.zip 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/package-search-index.zip 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/type-search-index.zip 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sdc-distribution-client-2.1.1-SNAPSHOT-javadoc.jar 22:41:01 [INFO] Scan target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sdc-distribution-client-2.1.1-SNAPSHOT.jar 22:41:01 [INFO] Scan configuration properties: 22:41:01 [INFO] dirExcludes=null 22:41:01 [INFO] dirIncludes=null 22:41:01 [INFO] fileExcludes= 22:41:01 [INFO] fileIncludes= 22:41:01 [INFO] 2024-11-09T22:41:01.184Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/member-search-index.zip 22:41:01 [INFO] 2024-11-09T22:41:01.568Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/package-search-index.zip 22:41:01 [INFO] 2024-11-09T22:41:01.572Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/apidocs/type-search-index.zip 22:41:01 [INFO] 2024-11-09T22:41:01.574Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/client-initialization-javadoc.jar 22:41:01 [INFO] 2024-11-09T22:41:01.769Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-ci/target/client-initialization.jar 22:41:01 [INFO] 2024-11-09T22:41:01.949Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/member-search-index.zip 22:41:01 [INFO] 2024-11-09T22:41:01.951Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/package-search-index.zip 22:41:01 [INFO] 2024-11-09T22:41:01.953Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/apidocs/type-search-index.zip 22:41:01 [INFO] 2024-11-09T22:41:01.955Z Starting scanning target: /w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sdc-distribution-client-2.1.1-SNAPSHOT-javadoc.jar 22:41:02 [INFO] 2024-11-09T22:41:02.056Z Starting scanning target: 
/w/workspace/sdc-sdc-distribution-client-maven-clm-master/sdc-distribution-client/target/sdc-distribution-client-2.1.1-SNAPSHOT.jar 22:41:02 [INFO] 2024-11-09T22:41:02.133Z Scanned 425 total files 22:41:05 [ERROR] Could not parse class file FastDoubleSwar.class 22:41:05 java.lang.IllegalArgumentException: Unsupported class file major version 63 22:41:05 at org.objectweb.asm.ClassReader.(ClassReader.java:196) 22:41:05 at org.objectweb.asm.ClassReader.(ClassReader.java:177) 22:41:05 at org.objectweb.asm.Asm90ClassReader.(Asm90ClassReader.java:15) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.anAsm90ClassNodeFrom(AsmClassFactory.java:121) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.newClassNodeForJava14plus(AsmClassFactory.java:58) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.newClassNode(AsmClassFactory.java:42) 22:41:05 at com.sonatype.insight.scan.hash.internal.JavaDigester.digest(JavaDigester.java:90) 22:41:05 at com.sonatype.insight.scan.hash.internal.DefaultDigester.digest(DefaultDigester.java:75) 22:41:05 at com.sonatype.insight.scan.hash.internal.DefaultDigester.digest(DefaultDigester.java:54) 22:41:05 at com.sonatype.insight.scan.file.ScanUtils.setHash(ScanUtils.java:79) 22:41:05 at com.sonatype.insight.scan.file.FileVisitor.setScanItemHash(FileVisitor.java:552) 22:41:05 at com.sonatype.insight.scan.file.FileVisitor.visitFile(FileVisitor.java:278) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:86) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.walk(FileWalker.java:35) 22:41:05 at com.sonatype.insight.scan.file.FileScanner.scan(FileScanner.java:192) 22:41:05 at com.sonatype.nexus.api.iq.scan.Scanner.scanModules(Scanner.java:191) 22:41:05 at com.sonatype.nexus.api.iq.scan.Scanner.scan(Scanner.java:114) 22:41:05 at com.sonatype.nexus.api.iq.impl.DefaultIqClient.scan(DefaultIqClient.java:312) 22:41:05 at com.sonatype.nexus.api.iq.impl.DefaultIqClient.scan(DefaultIqClient.java:290) 22:41:05 at com.sonatype.nexus.api.iq.internal.InternalIqClient$scan.call(Unknown Source) 22:41:05 at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) 22:41:05 at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116) 22:41:05 at org.sonatype.nexus.ci.iq.RemoteScanner.call(RemoteScanner.groovy:97) 22:41:05 at org.sonatype.nexus.ci.iq.RemoteScanner.call(RemoteScanner.groovy) 22:41:05 at hudson.remoting.UserRequest.perform(UserRequest.java:211) 22:41:05 at hudson.remoting.UserRequest.perform(UserRequest.java:54) 22:41:05 at hudson.remoting.Request$2.run(Request.java:377) 22:41:05 at 
hudson.remoting.InterceptingExecutorService.lambda$wrap$0(InterceptingExecutorService.java:78) 22:41:05 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 22:41:05 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 22:41:05 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 22:41:05 at java.base/java.lang.Thread.run(Thread.java:829) 22:41:05 [ERROR] Could not parse class file FastIntegerMath$UInt128.class 22:41:05 java.lang.IllegalArgumentException: Unsupported class file major version 63 22:41:05 at org.objectweb.asm.ClassReader.(ClassReader.java:196) 22:41:05 at org.objectweb.asm.ClassReader.(ClassReader.java:177) 22:41:05 at org.objectweb.asm.Asm90ClassReader.(Asm90ClassReader.java:15) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.anAsm90ClassNodeFrom(AsmClassFactory.java:121) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.newClassNodeForJava14plus(AsmClassFactory.java:58) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.newClassNode(AsmClassFactory.java:42) 22:41:05 at com.sonatype.insight.scan.hash.internal.JavaDigester.digest(JavaDigester.java:90) 22:41:05 at com.sonatype.insight.scan.hash.internal.DefaultDigester.digest(DefaultDigester.java:75) 22:41:05 at com.sonatype.insight.scan.hash.internal.DefaultDigester.digest(DefaultDigester.java:54) 22:41:05 at com.sonatype.insight.scan.file.ScanUtils.setHash(ScanUtils.java:79) 22:41:05 at com.sonatype.insight.scan.file.FileVisitor.setScanItemHash(FileVisitor.java:552) 22:41:05 at com.sonatype.insight.scan.file.FileVisitor.visitFile(FileVisitor.java:278) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:86) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.walk(FileWalker.java:35) 22:41:05 at com.sonatype.insight.scan.file.FileScanner.scan(FileScanner.java:192) 22:41:05 at com.sonatype.nexus.api.iq.scan.Scanner.scanModules(Scanner.java:191) 22:41:05 at com.sonatype.nexus.api.iq.scan.Scanner.scan(Scanner.java:114) 22:41:05 at com.sonatype.nexus.api.iq.impl.DefaultIqClient.scan(DefaultIqClient.java:312) 22:41:05 at com.sonatype.nexus.api.iq.impl.DefaultIqClient.scan(DefaultIqClient.java:290) 22:41:05 at com.sonatype.nexus.api.iq.internal.InternalIqClient$scan.call(Unknown Source) 22:41:05 at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) 22:41:05 at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116) 22:41:05 at org.sonatype.nexus.ci.iq.RemoteScanner.call(RemoteScanner.groovy:97) 22:41:05 at org.sonatype.nexus.ci.iq.RemoteScanner.call(RemoteScanner.groovy) 22:41:05 at 
hudson.remoting.UserRequest.perform(UserRequest.java:211) 22:41:05 at hudson.remoting.UserRequest.perform(UserRequest.java:54) 22:41:05 at hudson.remoting.Request$2.run(Request.java:377) 22:41:05 at hudson.remoting.InterceptingExecutorService.lambda$wrap$0(InterceptingExecutorService.java:78) 22:41:05 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 22:41:05 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 22:41:05 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 22:41:05 at java.base/java.lang.Thread.run(Thread.java:829) 22:41:05 [ERROR] Could not parse class file FastIntegerMath.class 22:41:05 java.lang.IllegalArgumentException: Unsupported class file major version 63 22:41:05 at org.objectweb.asm.ClassReader.(ClassReader.java:196) 22:41:05 at org.objectweb.asm.ClassReader.(ClassReader.java:177) 22:41:05 at org.objectweb.asm.Asm90ClassReader.(Asm90ClassReader.java:15) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.anAsm90ClassNodeFrom(AsmClassFactory.java:121) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.newClassNodeForJava14plus(AsmClassFactory.java:58) 22:41:05 at com.sonatype.insight.scan.hash.internal.asm.AsmClassFactory.newClassNode(AsmClassFactory.java:42) 22:41:05 at com.sonatype.insight.scan.hash.internal.JavaDigester.digest(JavaDigester.java:90) 22:41:05 at com.sonatype.insight.scan.hash.internal.DefaultDigester.digest(DefaultDigester.java:75) 22:41:05 at com.sonatype.insight.scan.hash.internal.DefaultDigester.digest(DefaultDigester.java:54) 22:41:05 at com.sonatype.insight.scan.file.ScanUtils.setHash(ScanUtils.java:79) 22:41:05 at com.sonatype.insight.scan.file.FileVisitor.setScanItemHash(FileVisitor.java:552) 22:41:05 at com.sonatype.insight.scan.file.FileVisitor.visitFile(FileVisitor.java:278) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:86) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.doWalk(FileWalker.java:75) 22:41:05 at com.sonatype.insight.scan.file.FileWalker.walk(FileWalker.java:35) 22:41:05 at com.sonatype.insight.scan.file.FileScanner.scan(FileScanner.java:192) 22:41:05 at com.sonatype.nexus.api.iq.scan.Scanner.scanModules(Scanner.java:191) 22:41:05 at com.sonatype.nexus.api.iq.scan.Scanner.scan(Scanner.java:114) 22:41:05 at com.sonatype.nexus.api.iq.impl.DefaultIqClient.scan(DefaultIqClient.java:312) 22:41:05 at com.sonatype.nexus.api.iq.impl.DefaultIqClient.scan(DefaultIqClient.java:290) 22:41:05 at com.sonatype.nexus.api.iq.internal.InternalIqClient$scan.call(Unknown Source) 22:41:05 at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) 22:41:05 at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116) 22:41:05 at org.sonatype.nexus.ci.iq.RemoteScanner.call(RemoteScanner.groovy:97) 22:41:05 at org.sonatype.nexus.ci.iq.RemoteScanner.call(RemoteScanner.groovy) 22:41:05 at hudson.remoting.UserRequest.perform(UserRequest.java:211) 22:41:05 at hudson.remoting.UserRequest.perform(UserRequest.java:54) 22:41:05 at hudson.remoting.Request$2.run(Request.java:377) 22:41:05 at hudson.remoting.InterceptingExecutorService.lambda$wrap$0(InterceptingExecutorService.java:78) 22:41:05 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 22:41:05 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 22:41:05 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 22:41:05 at java.base/java.lang.Thread.run(Thread.java:829) 22:41:06 [INFO] 2024-11-09T22:41:06.503Z Finished scanning application onap-sdc-sdc-distribution-client. 22:41:06 [INFO] Discovered repository url '$GIT_URL/sdc/sdc-distribution-client/sdc/sdc-distribution-client/sdc/sdc-distribution-client/sdc/sdc-distribution-client' via environment variable GIT_URL 22:41:06 [INFO] Repository URL $GIT_URL/sdc/sdc-distribution-client/sdc/sdc-distribution-client/sdc/sdc-distribution-client/sdc/sdc-distribution-client was found using automation 22:41:06 [INFO] Amending source control record for application with id: onap-sdc-sdc-distribution-client with discovered Repository URL: $GIT_URL/sdc/sdc-distribution-client/sdc/sdc-distribution-client/sdc/sdc-distribution-client/sdc/sdc-distribution-client 22:41:07 [INFO] 2024-11-09T22:41:07.062Z Evaluating application onap-sdc-sdc-distribution-client for stage build. 22:41:07 [INFO] Waiting for policy evaluation to complete... 22:41:13 [INFO] Assigned scan ID 30168e97838b42cbb26c20c2119ff3ec 22:41:19 [INFO] Policy evaluation completed in 12 seconds. 22:41:19 [INFO] 2024-11-09T22:41:19.209Z Finished evaluating application onap-sdc-sdc-distribution-client for stage build. 
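Note on the repeated "Unsupported class file major version 63" stack traces above: they are thrown by the ASM ClassReader constructor bundled with the Sonatype scanner. Major version 63 is the class-file format emitted for Java 19, which that ASM release cannot parse, so the affected classes (likely Java 19 variants shipped inside a multi-release jar) are skipped for hashing while the rest of the scan completes, as the "Finished scanning" line shows. The major version sits in bytes 6-7 of every class file; the following self-contained sketch prints it for any .class file (the default file name is only an example taken from the error message).

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Illustrative sketch: print the class-file format version of a compiled .class file.
public class ClassFileVersion {
    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "FastDoubleSwar.class"; // placeholder path
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            int magic = in.readInt(); // must be 0xCAFEBABE for a class file
            if (magic != 0xCAFEBABE) {
                System.err.println(path + " is not a class file");
                return;
            }
            int minor = in.readUnsignedShort(); // minor_version
            int major = in.readUnsignedShort(); // major_version: 52 = Java 8, 55 = 11, 61 = 17, 63 = 19
            System.out.printf("%s: major=%d minor=%d (Java %d)%n", path, major, minor, major - 44);
        }
    }
}

Major 52 maps to Java 8, 55 to Java 11, 61 to Java 17 and 63 to Java 19, hence the subtraction of 44 in the printout.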
22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(License - Use restrictions) [ 22:41:19 Component(displayName=org.json : json : 20220320, hash=06df2c050972619466f6) [ 22:41:19 Constraint(Use restrictions) [License Threat Group is 'Use Restrictions' because: Found licenses in the 'Use Restrictions' license threat group ('JSON')] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(License - Weak copyleft - non-LGPL) [ 22:41:19 Component(displayName=org.yaml : snakeyaml : 1.30, hash=8fde7fe2586328ac3c68) [ 22:41:19 Constraint(Weak Copyleft - non-LGPL) [License Threat Group is 'Weak Copyleft - non-LGPL' because: Found licenses in the 'Weak Copyleft - non-LGPL' license threat group ('EPL-1.0', 'EPL-2.0')] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(License - Weak copyleft - LGPL) [ 22:41:19 Component(displayName=org.yaml : snakeyaml : 1.30, hash=8fde7fe2586328ac3c68) [ 22:41:19 Constraint(Weak Copyleft - LGPL) [License Threat Group is 'Weak Copyleft - LGPL' because: Found licenses in the 'Weak Copyleft - LGPL' license threat group ('LGPL-2.1', 'LGPL-3.0')] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Severe vulnerabilities) [ 22:41:19 Component(displayName=commons-codec : commons-codec : 1.11, hash=3acb4705652e16236558) [ 22:41:19 Constraint(Severe security vulnerability) [Security Vulnerability Severity >= 4 because: Found security vulnerability sonatype-2012-0050 with severity >= 4 (severity = 5.3), Security Vulnerability Severity <= 6 because: Found security vulnerability sonatype-2012-0050 with severity <= 6 (severity = 5.3)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Severe vulnerabilities) [ 22:41:19 Component(displayName=org.yaml : snakeyaml : 1.30, hash=8fde7fe2586328ac3c68) [ 22:41:19 Constraint(Severe security vulnerability) [Security Vulnerability Severity >= 4 because: Found security vulnerability CVE-2022-38750 with severity >= 4 (severity = 5.5), Security Vulnerability Severity <= 6 because: Found security vulnerability CVE-2022-38750 with severity <= 6 (severity = 5.5)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(License - Strong copyleft) [ 22:41:19 Component(displayName=jszip 3.7.1, hash=6917cd0e2ecd41447ad8) [ 22:41:19 Constraint(Strong copyleft) [License Threat Group is 'Strong Copyleft' because: Found licenses in the 'Strong Copyleft' license threat group ('GPL-3.0')] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(License - Strong copyleft) [ 22:41:19 Component(displayName=org.yaml : snakeyaml : 1.30, hash=8fde7fe2586328ac3c68) [ 22:41:19 Constraint(Strong copyleft) [License Threat Group is 'Strong Copyleft' because: Found licenses in the 'Strong Copyleft' license threat group ('GPL-2.0', 'GPL-3.0')] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(License - Unknown license) [ 22:41:19 Component(displayName=com.github.luben : zstd-jni : 1.5.2-1, hash=fad786abc1d1b81570e8) [ 22:41:19 Constraint(Unknown license) [License Threat Group is 'License not determined' because: Found licenses in the 'License not determined' license threat group ('No Source License')] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=org.json : json : 20220320, hash=06df2c050972619466f6) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability 
Severity >= 7 because: Found security vulnerability CVE-2022-45688 with severity >= 7 (severity = 7.5)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=org.json : json : 20220320, hash=06df2c050972619466f6) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security vulnerability CVE-2023-5072 with severity >= 7 (severity = 7.5)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=org.xerial.snappy : snappy-java : 1.1.8.4, hash=66f0d56454509f6e3617) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security vulnerability CVE-2023-34453 with severity >= 7 (severity = 7.5)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=org.xerial.snappy : snappy-java : 1.1.8.4, hash=66f0d56454509f6e3617) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security vulnerability CVE-2023-34454 with severity >= 7 (severity = 7.5)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=org.xerial.snappy : snappy-java : 1.1.8.4, hash=66f0d56454509f6e3617) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security vulnerability CVE-2023-34455 with severity >= 7 (severity = 7.5)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=jszip 3.7.1, hash=6917cd0e2ecd41447ad8) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security vulnerability sonatype-2023-0042 with severity >= 7 (severity = 8.2)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=org.yaml : snakeyaml : 1.30, hash=8fde7fe2586328ac3c68) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security vulnerability CVE-2022-1471 with severity >= 7 (severity = 9.8)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=org.yaml : snakeyaml : 1.30, hash=8fde7fe2586328ac3c68) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security vulnerability CVE-2022-25857 with severity >= 7 (severity = 7.5)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=commons-io : commons-io : 2.8.0, hash=92999e26e6534606b567) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security vulnerability CVE-2024-47554 with severity >= 7 (severity = 8.7)] ]] 22:41:19 22:41:19 Nexus IQ reports policy warning due to 22:41:19 Policy(Security - Critical vulnerabilities) [ 22:41:19 Component(displayName=org.apache.kafka : kafka-clients : 3.3.1, hash=aea4008ab34761ef8057) [ 22:41:19 Constraint(Critical security vulnerability) [Security Vulnerability Severity >= 7 because: Found security 
vulnerability CVE-2023-25194 with severity >= 7 (severity = 8.8)] ]] 22:41:19 The detailed report can be viewed online at https://nexus-iq.wl.linuxfoundation.org/ui/links/application/onap-sdc-sdc-distribution-client/report/30168e97838b42cbb26c20c2119ff3ec 22:41:19 Summary of policy violations: 12 critical, 5 severe, 1 moderate 22:41:19 IQ Server evaluation of application onap-sdc-sdc-distribution-client detected warnings 22:41:19 Build step 'Invoke Nexus Policy Evaluation' changed build result to UNSTABLE 22:41:19 $ ssh-agent -k 22:41:19 unset SSH_AUTH_SOCK; 22:41:19 unset SSH_AGENT_PID; 22:41:19 echo Agent pid 1650 killed; 22:41:19 [ssh-agent] Stopped. 22:41:19 [PostBuildScript] - [INFO] Executing post build scripts. 22:41:19 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins8835043643550773974.sh 22:41:19 ---> sysstat.sh 22:41:19 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins2548428061025583429.sh 22:41:19 ---> package-listing.sh 22:41:19 ++ facter osfamily 22:41:19 ++ tr '[:upper:]' '[:lower:]' 22:41:20 + OS_FAMILY=debian 22:41:20 + workspace=/w/workspace/sdc-sdc-distribution-client-maven-clm-master 22:41:20 + START_PACKAGES=/tmp/packages_start.txt 22:41:20 + END_PACKAGES=/tmp/packages_end.txt 22:41:20 + DIFF_PACKAGES=/tmp/packages_diff.txt 22:41:20 + PACKAGES=/tmp/packages_start.txt 22:41:20 + '[' /w/workspace/sdc-sdc-distribution-client-maven-clm-master ']' 22:41:20 + PACKAGES=/tmp/packages_end.txt 22:41:20 + case "${OS_FAMILY}" in 22:41:20 + dpkg -l 22:41:20 + grep '^ii' 22:41:20 + '[' -f /tmp/packages_start.txt ']' 22:41:20 + '[' -f /tmp/packages_end.txt ']' 22:41:20 + diff /tmp/packages_start.txt /tmp/packages_end.txt 22:41:20 + '[' /w/workspace/sdc-sdc-distribution-client-maven-clm-master ']' 22:41:20 + mkdir -p /w/workspace/sdc-sdc-distribution-client-maven-clm-master/archives/ 22:41:20 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/sdc-sdc-distribution-client-maven-clm-master/archives/ 22:41:20 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins7799230481809589678.sh 22:41:20 ---> capture-instance-metadata.sh 22:41:20 Setup pyenv: 22:41:20 system 22:41:20 3.8.13 22:41:20 3.9.13 22:41:20 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-maven-clm-master/.python-version) 22:41:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TIdl from file:/tmp/.os_lf_venv 22:41:21 lf-activate-venv(): INFO: Installing: lftools 22:41:29 lf-activate-venv(): INFO: Adding /tmp/venv-TIdl/bin to PATH 22:41:29 INFO: Running in OpenStack, capturing instance metadata 22:41:29 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins2734980954391099266.sh 22:41:29 provisioning config files... 22:41:29 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/sdc-sdc-distribution-client-maven-clm-master@tmp/config14333370233986817890tmp 22:41:29 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 22:41:29 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 22:41:29 [EnvInject] - Injecting environment variables from a build step. 22:41:29 [EnvInject] - Injecting as environment variables the properties content 22:41:29 SERVER_ID=logs 22:41:29 22:41:29 [EnvInject] - Variables injected successfully. 
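Regarding the snakeyaml 1.30 findings reported in the policy warnings above (CVE-2022-1471 in particular): where an upgrade is not immediately possible, the commonly recommended mitigation is to parse untrusted YAML with SafeConstructor, which only builds standard YAML types and refuses to instantiate arbitrary Java classes. Whether that pattern applies here depends on how sdc-distribution-client constructs its Yaml instances, which this log does not show; the sketch below only illustrates the safe pattern against SnakeYAML 1.x, with placeholder YAML content.

import java.util.Map;

import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.SafeConstructor;

// Illustrative sketch only: load untrusted YAML without allowing arbitrary type construction.
public class SafeYamlLoad {
    public static void main(String[] args) {
        // A plain "new Yaml()" in 1.x honors global tags such as !!javax.script.ScriptEngineManager (CVE-2022-1471).
        Yaml yaml = new Yaml(new SafeConstructor()); // no-arg SafeConstructor exists in SnakeYAML 1.x
        Map<String, Object> doc = yaml.load("service: sdc\nretries: 3\n"); // placeholder document
        System.out.println(doc);
    }
}

The more durable fix the warnings point toward is moving to a newer snakeyaml release, where unrestricted global-tag construction is no longer the default behavior.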
22:41:29 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins2863459636142506585.sh 22:41:29 ---> create-netrc.sh 22:41:29 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins9553269165082308923.sh 22:41:29 ---> python-tools-install.sh 22:41:29 Setup pyenv: 22:41:29 system 22:41:29 3.8.13 22:41:29 3.9.13 22:41:29 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-maven-clm-master/.python-version) 22:41:30 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TIdl from file:/tmp/.os_lf_venv 22:41:31 lf-activate-venv(): INFO: Installing: lftools 22:41:38 lf-activate-venv(): INFO: Adding /tmp/venv-TIdl/bin to PATH 22:41:38 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins17644798182617139873.sh 22:41:38 ---> sudo-logs.sh 22:41:38 Archiving 'sudo' log.. 22:41:38 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash /tmp/jenkins8532561572530448462.sh 22:41:38 ---> job-cost.sh 22:41:38 Setup pyenv: 22:41:38 system 22:41:38 3.8.13 22:41:38 3.9.13 22:41:38 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-maven-clm-master/.python-version) 22:41:39 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TIdl from file:/tmp/.os_lf_venv 22:41:39 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 22:41:43 lf-activate-venv(): INFO: Adding /tmp/venv-TIdl/bin to PATH 22:41:43 INFO: No Stack... 22:41:43 INFO: Retrieving Pricing Info for: v3-standard-4 22:41:44 INFO: Archiving Costs 22:41:44 [sdc-sdc-distribution-client-maven-clm-master] $ /bin/bash -l /tmp/jenkins7665101770396246291.sh 22:41:44 ---> logs-deploy.sh 22:41:44 Setup pyenv: 22:41:44 system 22:41:44 3.8.13 22:41:44 3.9.13 22:41:44 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-maven-clm-master/.python-version) 22:41:44 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TIdl from file:/tmp/.os_lf_venv 22:41:45 lf-activate-venv(): INFO: Installing: lftools 22:41:52 lf-activate-venv(): INFO: Adding /tmp/venv-TIdl/bin to PATH 22:41:52 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/sdc-sdc-distribution-client-maven-clm-master/331 22:41:52 INFO: archiving workspace using pattern(s): -p **/*.log -p **/hs_err_*.log -p **/target/**/feature.xml -p **/target/failsafe-reports/failsafe-summary.xml -p **/target/surefire-reports/*-output.txt 22:41:53 Archives upload complete. 
22:41:54 INFO: archiving logs to Nexus 22:41:55 ---> uname -a: 22:41:55 Linux prd-ubuntu1804-builder-4c-4g-82701 4.15.0-194-generic #205-Ubuntu SMP Fri Sep 16 19:49:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 22:41:55 22:41:55 22:41:55 ---> lscpu: 22:41:55 Architecture: x86_64 22:41:55 CPU op-mode(s): 32-bit, 64-bit 22:41:55 Byte Order: Little Endian 22:41:55 CPU(s): 4 22:41:55 On-line CPU(s) list: 0-3 22:41:55 Thread(s) per core: 1 22:41:55 Core(s) per socket: 1 22:41:55 Socket(s): 4 22:41:55 NUMA node(s): 1 22:41:55 Vendor ID: AuthenticAMD 22:41:55 CPU family: 23 22:41:55 Model: 49 22:41:55 Model name: AMD EPYC-Rome Processor 22:41:55 Stepping: 0 22:41:55 CPU MHz: 2800.000 22:41:55 BogoMIPS: 5600.00 22:41:55 Virtualization: AMD-V 22:41:55 Hypervisor vendor: KVM 22:41:55 Virtualization type: full 22:41:55 L1d cache: 32K 22:41:55 L1i cache: 32K 22:41:55 L2 cache: 512K 22:41:55 L3 cache: 16384K 22:41:55 NUMA node0 CPU(s): 0-3 22:41:55 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 22:41:55 22:41:55 22:41:55 ---> nproc: 22:41:55 4 22:41:55 22:41:55 22:41:55 ---> df -h: 22:41:55 Filesystem Size Used Avail Use% Mounted on 22:41:55 udev 7.9G 0 7.9G 0% /dev 22:41:55 tmpfs 1.6G 672K 1.6G 1% /run 22:41:55 /dev/vda1 78G 8.4G 70G 11% / 22:41:55 tmpfs 7.9G 0 7.9G 0% /dev/shm 22:41:55 tmpfs 5.0M 0 5.0M 0% /run/lock 22:41:55 tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup 22:41:55 /dev/vda15 105M 4.4M 100M 5% /boot/efi 22:41:55 tmpfs 1.6G 0 1.6G 0% /run/user/1001 22:41:55 22:41:55 22:41:55 ---> free -m: 22:41:55 total used free shared buff/cache available 22:41:55 Mem: 16040 736 13135 0 2168 14992 22:41:55 Swap: 1023 0 1023 22:41:55 22:41:55 22:41:55 ---> ip addr: 22:41:55 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 22:41:55 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 22:41:55 inet 127.0.0.1/8 scope host lo 22:41:55 valid_lft forever preferred_lft forever 22:41:55 inet6 ::1/128 scope host 22:41:55 valid_lft forever preferred_lft forever 22:41:55 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 22:41:55 link/ether fa:16:3e:7b:fa:02 brd ff:ff:ff:ff:ff:ff 22:41:55 inet 10.30.106.191/23 brd 10.30.107.255 scope global dynamic ens3 22:41:55 valid_lft 86189sec preferred_lft 86189sec 22:41:55 inet6 fe80::f816:3eff:fe7b:fa02/64 scope link 22:41:55 valid_lft forever preferred_lft forever 22:41:55 22:41:55 22:41:55 ---> sar -b -r -n DEV: 22:41:55 Linux 4.15.0-194-generic (prd-ubuntu1804-builder-4c-4g-82701) 11/09/24 _x86_64_ (4 CPU) 22:41:55 22:41:55 22:38:26 LINUX RESTART (4 CPU) 22:41:55 22:41:55 22:39:01 tps rtps wtps bread/s bwrtn/s 22:41:55 22:40:01 165.77 26.93 138.84 1276.48 26267.42 22:41:55 22:41:01 75.52 7.75 67.77 612.03 19929.88 22:41:55 Average: 120.66 17.34 103.32 944.34 23099.44 22:41:55 22:41:55 22:39:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 22:41:55 22:40:01 13415840 15167420 3009128 18.32 72188 1871904 1257396 7.20 1094460 1683588 
90792 22:41:55 22:41:01 13619420 15511112 2805548 17.08 76736 2004200 765776 4.38 802980 1766732 12732 22:41:55 Average: 13517630 15339266 2907338 17.70 74462 1938052 1011586 5.79 948720 1725160 51762 22:41:55 22:41:55 22:39:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 22:41:55 22:40:01 ens3 199.32 158.91 2282.11 36.62 0.00 0.00 0.00 0.00 22:41:55 22:40:01 lo 0.87 0.87 0.09 0.09 0.00 0.00 0.00 0.00 22:41:55 22:41:01 ens3 1160.49 806.70 1986.98 272.40 0.00 0.00 0.00 0.00 22:41:55 22:41:01 lo 18.70 18.70 2.38 2.38 0.00 0.00 0.00 0.00 22:41:55 Average: ens3 679.78 482.72 2134.58 154.48 0.00 0.00 0.00 0.00 22:41:55 Average: lo 9.78 9.78 1.23 1.23 0.00 0.00 0.00 0.00 22:41:55 22:41:55 22:41:55 ---> sar -P ALL: 22:41:55 Linux 4.15.0-194-generic (prd-ubuntu1804-builder-4c-4g-82701) 11/09/24 _x86_64_ (4 CPU) 22:41:55 22:41:55 22:38:26 LINUX RESTART (4 CPU) 22:41:55 22:41:55 22:39:01 CPU %user %nice %system %iowait %steal %idle 22:41:55 22:40:01 all 25.68 0.00 1.81 4.49 0.07 67.94 22:41:55 22:40:01 0 30.79 0.00 2.07 1.80 0.07 65.27 22:41:55 22:40:01 1 38.08 0.00 2.59 5.56 0.10 53.67 22:41:55 22:40:01 2 18.03 0.00 0.99 0.72 0.08 80.18 22:41:55 22:40:01 3 15.77 0.00 1.55 9.92 0.05 72.70 22:41:55 22:41:01 all 33.44 0.00 2.82 1.37 0.08 62.29 22:41:55 22:41:01 0 33.03 0.00 2.77 0.59 0.08 63.54 22:41:55 22:41:01 1 32.02 0.00 2.58 0.92 0.08 64.39 22:41:55 22:41:01 2 33.40 0.00 3.02 0.20 0.07 63.31 22:41:55 22:41:01 3 35.30 0.00 2.93 3.77 0.10 57.90 22:41:55 Average: all 29.56 0.00 2.31 2.93 0.08 65.12 22:41:55 Average: 0 31.90 0.00 2.42 1.20 0.08 64.40 22:41:55 Average: 1 35.05 0.00 2.59 3.24 0.09 59.02 22:41:55 Average: 2 25.70 0.00 2.00 0.46 0.08 71.76 22:41:55 Average: 3 25.54 0.00 2.24 6.84 0.08 65.30 22:41:55 22:41:55 22:41:55