10:49:39.02
10:49:39.03 Welcome to the Bitnami elasticsearch container
10:49:39.11 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
10:49:39.11 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
10:49:39.12
10:49:39.22 INFO  ==> ** Starting Elasticsearch setup **
10:49:39.52 INFO  ==> Configuring/Initializing Elasticsearch...
10:49:39.82 INFO  ==> Setting default configuration
10:49:39.92 INFO  ==> Configuring Elasticsearch cluster settings...
10:49:40.12 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-1: fd00:100::9395 10.242.147.149, will use fd00:100::9395
10:49:40.32 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-1: fd00:100::9395 10.242.147.149, will use fd00:100::9395
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
10:49:58.42 INFO  ==> ** Elasticsearch setup finished! **
10:49:58.63 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-04-24T10:50:35,911][INFO ][o.e.n.Node ] [dev-sdnrdb-master-1] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/4.15.0-117-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2021-04-24T10:50:36,009][INFO ][o.e.n.Node ] [dev-sdnrdb-master-1] JVM home [/opt/bitnami/java]
[2021-04-24T10:50:36,009][INFO ][o.e.n.Node ] [dev-sdnrdb-master-1] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-16951282878563517478, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-04-24T10:50:52,018][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [aggs-matrix-stats]
[2021-04-24T10:50:52,019][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [analysis-common]
[2021-04-24T10:50:52,020][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [geo]
[2021-04-24T10:50:52,109][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [ingest-common]
[2021-04-24T10:50:52,110][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [ingest-geoip]
[2021-04-24T10:50:52,110][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [ingest-user-agent]
[2021-04-24T10:50:52,110][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [kibana]
[2021-04-24T10:50:52,111][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [lang-expression]
[2021-04-24T10:50:52,111][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [lang-mustache]
[2021-04-24T10:50:52,112][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [lang-painless]
[2021-04-24T10:50:52,112][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [mapper-extras]
[2021-04-24T10:50:52,112][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [parent-join]
[2021-04-24T10:50:52,113][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [percolator]
[2021-04-24T10:50:52,113][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [rank-eval]
[2021-04-24T10:50:52,113][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [reindex]
[2021-04-24T10:50:52,114][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [repository-url]
[2021-04-24T10:50:52,114][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [tasks]
[2021-04-24T10:50:52,114][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded module [transport-netty4]
[2021-04-24T10:50:52,115][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-1] loaded plugin [repository-s3]
[2021-04-24T10:50:52,813][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-1] using [1] data paths, mounts [[/bitnami/elasticsearch/data (172.16.10.206:/dockerdata-nfs/dev/elastic-master-2)]], net usable_space [178.8gb], net total_space [195.8gb], types [nfs4]
[2021-04-24T10:50:52,813][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-1] heap size [123.7mb], compressed ordinary object pointers [true]
[2021-04-24T10:50:53,522][INFO ][o.e.n.Node ] [dev-sdnrdb-master-1] node name [dev-sdnrdb-master-1], node ID [ZR23ov0CTQ6QSKtcpoMuqw], cluster name [sdnrdb-cluster]
[2021-04-24T10:51:34,218][INFO ][o.e.t.NettyAllocator ] [dev-sdnrdb-master-1] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2021-04-24T10:51:35,113][INFO ][o.e.d.DiscoveryModule ] [dev-sdnrdb-master-1] using discovery type [zen] and seed hosts providers [settings]
[2021-04-24T10:51:38,511][WARN ][o.e.g.DanglingIndicesState] [dev-sdnrdb-master-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-04-24T10:51:40,714][INFO ][o.e.n.Node ] [dev-sdnrdb-master-1] initialized
[2021-04-24T10:51:40,715][INFO ][o.e.n.Node ] [dev-sdnrdb-master-1] starting ...
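The startup sequence above ends with the node initialized and starting. As a quick check that the node actually came up, the following is a minimal sketch, not part of the original log: it assumes the HTTP port 9200 shown in the log is reachable as http://localhost:9200 (for example via kubectl port-forward), and polls the standard _cluster/health endpoint until the node answers.

# Minimal sketch: poll Elasticsearch cluster health until the node answers.
# Assumes http://localhost:9200 is reachable; inside Kubernetes you would
# port-forward or substitute the service address instead.
import json
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:9200/_cluster/health"  # assumed endpoint

def wait_for_cluster(timeout_s: float = 300.0, interval_s: float = 5.0) -> dict:
    """Block until _cluster/health answers, returning the parsed response."""
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
                health = json.load(resp)
                print(f"cluster={health['cluster_name']} status={health['status']}")
                return health
        except (urllib.error.URLError, OSError) as exc:
            if time.monotonic() > deadline:
                raise TimeoutError("cluster did not come up in time") from exc
            time.sleep(interval_s)  # node may still be starting, retry

if __name__ == "__main__":
    wait_for_cluster()

A "red" or "yellow" status here would match the shard-allocation warnings that appear later in this log.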
[2021-04-24T10:51:41,926][INFO ][o.e.t.TransportService ] [dev-sdnrdb-master-1] publish_address {[fd00:100::9395]:9300}, bound_addresses {[::]:9300}
[2021-04-24T10:51:42,915][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.147.149:9300, remoteAddress=/10.242.179.107:57046}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-24T10:51:43,214][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.147.149:9300, remoteAddress=/10.242.179.105:53172}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-24T10:51:43,910][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.147.149:9300, remoteAddress=/10.242.179.107:57102}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-24T10:51:44,010][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.147.149:9300, remoteAddress=/10.242.179.105:53232}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-24T10:51:44,711][INFO ][o.e.b.BootstrapChecks ] [dev-sdnrdb-master-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-04-24T10:51:46,314][INFO ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-1] setting initial configuration to VotingConfiguration{TN3LdveVSlSqrL6ZpHDx-w,ZR23ov0CTQ6QSKtcpoMuqw,{bootstrap-placeholder}-dev-sdnrdb-master-2}
[2021-04-24T10:51:48,613][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-1] failed to join {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}, targetNode={dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-0][[fd00:100::b369]:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 1 does not match current term 2
    at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:225) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:1013) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-24T10:51:49,747][INFO ][o.e.c.c.CoordinationState] [dev-sdnrdb-master-1] cluster UUID set to [WZWNa8koRC2yLZlAEn86cA]
[2021-04-24T10:51:50,108][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] master node changed {previous [], current [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}]}, added {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}, term: 2, version: 1, reason: ApplyCommitRequest{term=2, version=1, sourceNode={dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}
[2021-04-24T10:51:50,127][INFO ][o.e.h.AbstractHttpServerTransport] [dev-sdnrdb-master-1] publish_address {[fd00:100::9395]:9200}, bound_addresses {[::]:9200}
[2021-04-24T10:51:50,128][INFO ][o.e.n.Node ] [dev-sdnrdb-master-1] started
[2021-04-24T10:51:51,250][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r}}, term: 2, version: 2, reason: ApplyCommitRequest{term=2, version=2, sourceNode={dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}
[2021-04-24T10:51:53,410][INFO ][o.e.c.s.ClusterSettings ] [dev-sdnrdb-master-1] updating [action.auto_create_index] from [true] to [false]
[2021-04-24T10:52:04,825][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-1] [gc][23] overhead, spent [413ms] collecting in the last [1.1s]
[2021-04-24T10:52:45,163][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{RfV-iRLdSN2rfqAANB6QxA}{FkXrm5M6SBuZ8BvB9II77Q}{fd00:100:0:0:0:0:0:ed54}{[fd00:100::ed54]:9300}{dmr}}, term: 2, version: 46, reason: ApplyCommitRequest{term=2, version=46, sourceNode={dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}
[2021-04-24T11:03:41,352][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [11809ms] ago, timed out [1802ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [1884]
[2021-04-24T11:06:20,248][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [12814ms] ago, timed out [2806ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [2211]
[2021-04-24T11:12:37,696][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [26425ms] ago, timed out [16416ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [2988]
[2021-04-24T11:12:37,709][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [15413ms] ago, timed out [5404ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [3001]
[2021-04-24T11:13:09,603][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [13450ms] ago, timed out [3602ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [3059]
[2021-04-24T11:13:38,402][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [13410ms] ago, timed out [3402ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [3099]
[2021-04-24T11:14:52,694][INFO ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-1] master node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}] failed, restarting discovery
org.elasticsearch.ElasticsearchException: node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}] failed [3] consecutive checks
    at org.elasticsearch.cluster.coordination.LeaderChecker$CheckScheduler$1.handleException(LeaderChecker.java:293) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1073) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-0][[fd00:100::b369]:9300][internal:coordination/fault_detection/leader_check] request_id [3243] timed out after [10026ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 4 more
[2021-04-24T11:14:52,912][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] master node changed {previous [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], current []}, term: 2, version: 115, reason: becoming candidate: onLeaderFailure
[2021-04-24T11:14:54,107][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-1] failed to join {dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}, minimumTerm=2, optionalJoin=Optional[Join{term=3, lastAcceptedTerm=2, lastAcceptedVersion=115, sourceNode={dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}, targetNode={dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-1][[fd00:100::9395]:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-24T11:14:54,809][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-1] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr} elect leader, {dev-sdnrdb-master-2}{RfV-iRLdSN2rfqAANB6QxA}{FkXrm5M6SBuZ8BvB9II77Q}{fd00:100:0:0:0:0:0:ed54}{[fd00:100::ed54]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 5, version: 116, delta: master node changed {previous [], current [{dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}]}
[2021-04-24T11:14:55,803][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-1] failed to join {dev-sdnrdb-master-2}{RfV-iRLdSN2rfqAANB6QxA}{FkXrm5M6SBuZ8BvB9II77Q}{fd00:100:0:0:0:0:0:ed54}{[fd00:100::ed54]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}, minimumTerm=3, optionalJoin=Optional[Join{term=4, lastAcceptedTerm=2, lastAcceptedVersion=115, sourceNode={dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}, targetNode={dev-sdnrdb-master-2}{RfV-iRLdSN2rfqAANB6QxA}{FkXrm5M6SBuZ8BvB9II77Q}{fd00:100:0:0:0:0:0:ed54}{[fd00:100::ed54]:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-2][[fd00:100::ed54]:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 5 while handling publication
    at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
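The entries around this point show repeated master elections and failed joins among the three master-eligible nodes. When diagnosing this kind of churn, a small sketch like the following can confirm which node currently holds the elected master; it again assumes the http://localhost:9200 endpoint from earlier, and uses the standard Elasticsearch 7.x _cat/master and _cat/nodes APIs.

# Minimal sketch: ask the cluster which node is the elected master.
import urllib.request

BASE = "http://localhost:9200"  # assumed endpoint; adjust for your deployment

def cat(path: str) -> str:
    """Fetch a _cat API as plain text, adding v for column headers."""
    sep = "&" if "?" in path else "?"
    with urllib.request.urlopen(f"{BASE}/_cat/{path}{sep}v", timeout=10) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(cat("master"))                         # id, host, ip and node name of the master
    print(cat("nodes?h=name,master,node.role"))  # '*' in the master column marks it

Comparing the reported master against the elected-as-master entries above makes it easy to see whether the election has actually converged.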
[2021-04-24T11:15:04,824][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [116] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-24T11:15:07,645][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [46949ms] ago, timed out [37136ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [3215] [2021-04-24T11:15:07,648][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [36135ms] ago, timed out [26125ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [3232] [2021-04-24T11:15:07,649][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [25124ms] ago, timed out [15098ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [3243] [2021-04-24T11:15:17,929][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [23035ms] ago, timed out [13022ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r}], id [3286] [2021-04-24T11:15:17,930][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [12022ms] ago, timed out [2201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r}], id [3320] [2021-04-24T11:15:24,824][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] master node changed {previous [], current [{dev-sdnrdb-master-1}{ZR23ov0CTQ6QSKtcpoMuqw}{3Ib1W6ZhTxC0RcWCIiNDuQ}{fd00:100:0:0:0:0:0:9395}{[fd00:100::9395]:9300}{dmr}]}, term: 5, version: 116, reason: Publication{term=5, version=116} [2021-04-24T11:15:24,836][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [116] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-24T11:15:27,116][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} reason: followers check retry count exceeded], term: 5, version: 117, delta: removed {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}} [2021-04-24T11:15:29,363][INFO ][o.e.c.s.ClusterApplierService] 
[dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}, term: 5, version: 117, reason: Publication{term=5, version=117} [2021-04-24T11:15:29,516][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [maintenancemode-v5][0] primary-replica resync completed with 0 operations [2021-04-24T11:15:29,609][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [faultcurrent-v5][1] primary-replica resync completed with 0 operations [2021-04-24T11:15:29,621][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [historicalperformance15min-v5][1] primary-replica resync completed with 0 operations [2021-04-24T11:15:29,709][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [networkelement-connection-v5][0] primary-replica resync completed with 0 operations [2021-04-24T11:15:29,727][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [mediator-server-v5][4] primary-replica resync completed with 0 operations [2021-04-24T11:15:29,911][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [eventlog-v5][4] primary-replica resync completed with 0 operations [2021-04-24T11:15:29,919][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [guicutthrough-v5][0] primary-replica resync completed with 0 operations [2021-04-24T11:15:30,018][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-1] scheduling reroute for delayed shards in [56.8s] (37 delayed shards) [2021-04-24T11:15:30,110][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [inventoryequipment-v5][1] primary-replica resync completed with 0 operations [2021-04-24T11:15:30,212][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [connectionlog-v5][1] primary-replica resync completed with 0 operations [2021-04-24T11:15:30,312][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-1] [historicalperformance24h-v5][4] primary-replica resync completed with 0 operations [2021-04-24T11:16:27,870][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][3] marking unavailable shards as stale: [g1egs94wTXqgCNI8UH5_RQ] [2021-04-24T11:16:28,480][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][1] marking unavailable shards as stale: [70xuI2bmT9epo0LJC-s75g] [2021-04-24T11:16:28,485][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][4] marking unavailable shards as stale: [DFJdy1WZS_iwpjNNYinLPw] [2021-04-24T11:16:28,485][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][4] marking unavailable shards as stale: [dp8mSx0OS2acuSdUrTCzEA] [2021-04-24T11:16:31,429][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][3] marking unavailable shards as stale: [fE04t0tXQPC7hmRHUt8v2w] [2021-04-24T11:16:32,013][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][4] marking unavailable shards as stale: [bMfIR-dQSHKcpHr2QxboYA] [2021-04-24T11:16:32,014][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][4] marking unavailable shards as stale: [-5UGPYNzShebPv9fFpuKLg] [2021-04-24T11:16:32,015][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][1] marking unavailable shards as stale: [NUZCa-URQ4CrFr1Z84qKUA] [2021-04-24T11:16:35,110][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][3] marking unavailable shards as stale: [18sdMUHjSuafgmRxYmZfLw] [2021-04-24T11:16:36,112][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][1] marking 
unavailable shards as stale: [Bdy5gRGXRAmtEA_h57P6NQ] [2021-04-24T11:16:36,113][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][0] marking unavailable shards as stale: [pK349c_7TQe3OjcLeipy2Q] [2021-04-24T11:16:36,113][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][1] marking unavailable shards as stale: [pHYIqVbfRt6GDQbvcAQHew] [2021-04-24T11:16:42,577][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][0] marking unavailable shards as stale: [5MF5XGQCSROyOMv_6mG94g] [2021-04-24T11:16:43,042][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][2] marking unavailable shards as stale: [UpNsk_vfTD-tCQQieEfGbQ] [2021-04-24T11:16:43,042][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][1] marking unavailable shards as stale: [eY-aekGmRii-JgQn2lRbHg] [2021-04-24T11:16:43,046][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][0] marking unavailable shards as stale: [wzBUTVmxR8yFZThmy6rlvw] [2021-04-24T11:16:45,618][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][0] marking unavailable shards as stale: [0tguNbAVT2K2TS1feWjDqg] [2021-04-24T11:16:45,929][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][2] marking unavailable shards as stale: [oMMO7MewRe-wSGal5uN65w] [2021-04-24T11:16:45,929][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][3] marking unavailable shards as stale: [8OQeO1_qRXyv-pIJjHWEAg] [2021-04-24T11:16:49,145][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][1] marking unavailable shards as stale: [UWD0-4y-SLK9bBt1Df67aA] [2021-04-24T11:16:50,044][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][2] marking unavailable shards as stale: [UGiBUB3eQ6K5JaNtVcETTA] [2021-04-24T11:16:52,231][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][0] marking unavailable shards as stale: [oBnxYtrBRaa_rLBWTfzRPA] [2021-04-24T11:16:52,308][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][4] marking unavailable shards as stale: [z1eN2RmqTfms4zcy-6beGQ] [2021-04-24T11:16:54,009][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][1] marking unavailable shards as stale: [EzAGborSRLyDQXsjfDIIbw] [2021-04-24T11:16:54,410][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][3] marking unavailable shards as stale: [uNi80JioSZCv33BchAKLlw] [2021-04-24T11:16:55,412][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][0] marking unavailable shards as stale: [pmlP-W8dRXWt_WhT9TAh4g] [2021-04-24T11:16:55,413][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][2] marking unavailable shards as stale: [cxAiWjoAQf2Rl-fR2Bnp5w] [2021-04-24T11:16:56,412][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][1] marking unavailable shards as stale: [A5LWppSoTTG2eKzuWbDgww] [2021-04-24T11:16:58,764][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][4] marking unavailable shards as stale: [bOIGU2MPQ_mscfDW2Ibv5A] [2021-04-24T11:17:00,026][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][0] marking unavailable shards as stale: [L-FUiW8WSJeLqpQfmZkw1A] [2021-04-24T11:17:00,678][WARN 
][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][1] marking unavailable shards as stale: [XUi0uwTbSzyF-AVu51DMZg] [2021-04-24T11:17:00,679][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][2] marking unavailable shards as stale: [lG7zhzoAS-Sbp2P4m8ywIA] [2021-04-24T11:17:05,230][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][3] marking unavailable shards as stale: [08ZrfP7dTuyQj8pPHhl1rg] [2021-04-24T11:17:06,167][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][0] marking unavailable shards as stale: [bochQh57TeKdLOJcW1bdGg] [2021-04-24T11:17:06,168][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][1] marking unavailable shards as stale: [tb8aMNQSRpqVdoMye5JvvQ] [2021-04-24T11:17:06,169][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][4] marking unavailable shards as stale: [VlNH6WuVSKaeSG1LSnwveA] [2021-04-24T11:17:10,185][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][3] marking unavailable shards as stale: [Kjut25tsRTu9j9CuHm6yWg] [2021-04-24T11:17:10,727][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][3]]]). [2021-04-24T11:17:44,544][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-7cjnt}{YWJKjItXRPWQ7Kx7LfR-tg}{eR-HvTqMSBSuzGGJeGwKew}{fd00:100:0:0:0:0:0:1716}{[fd00:100::1716]:9300}{r} join existing leader], term: 5, version: 174, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-7cjnt}{YWJKjItXRPWQ7Kx7LfR-tg}{eR-HvTqMSBSuzGGJeGwKew}{fd00:100:0:0:0:0:0:1716}{[fd00:100::1716]:9300}{r}} [2021-04-24T11:17:46,030][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-7cjnt}{YWJKjItXRPWQ7Kx7LfR-tg}{eR-HvTqMSBSuzGGJeGwKew}{fd00:100:0:0:0:0:0:1716}{[fd00:100::1716]:9300}{r}}, term: 5, version: 174, reason: Publication{term=5, version=174} [2021-04-24T11:18:11,666][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} join existing leader], term: 5, version: 175, delta: added {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}} [2021-04-24T11:18:21,671][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [175] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-24T11:18:41,671][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}, term: 5, version: 175, reason: Publication{term=5, version=175} [2021-04-24T11:18:41,675][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [175] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-24T11:18:51,711][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [176] is still waiting 
for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-24T11:18:56,676][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-24T11:18:59,278][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [17483ms] ago, timed out [2601ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}], id [4855] [2021-04-24T11:19:11,751][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.9s] publication of cluster state version [176] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-24T11:19:11,760][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} reason: followers check retry count exceeded], term: 5, version: 177, delta: removed {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}} [2021-04-24T11:19:12,100][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}, term: 5, version: 177, reason: Publication{term=5, version=177} [2021-04-24T11:19:19,553][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} join existing leader], term: 5, version: 178, delta: added {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}} [2021-04-24T11:19:29,556][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [178] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-24T11:19:49,556][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}, term: 5, version: 178, reason: Publication{term=5, version=178} [2021-04-24T11:19:49,558][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [178] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-24T11:19:59,612][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [179] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-24T11:20:10,966][WARN ][o.e.c.NodeConnectionsService] [dev-sdnrdb-master-1] failed to connect to {dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r} 
[2021-04-24T11:20:10,966][WARN ][o.e.c.NodeConnectionsService] [dev-sdnrdb-master-1] failed to connect to {dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r} (tried [1] times)
org.elasticsearch.transport.ConnectTransportException: [dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp][[fd00:100::b36b]:9300] connect_exception
    at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:966) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[elasticsearch-core-7.9.3.jar:7.9.3]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?]
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?]
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?]
    at org.elasticsearch.common.concurrent.CompletableContext.completeExceptionally(CompletableContext.java:57) ~[elasticsearch-core-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addListener$0(Netty4TcpChannel.java:68) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[?:?]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[?:?]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[?:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[?:?]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: fd00:100:0:0:0:0:0:b36b/fd00:100:0:0:0:0:0:b36b:9300
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[?:?]
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[?:?]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[?:?]
    ... 7 more
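The connect_exception above is a plain TCP-level "Connection refused" on transport port 9300 of the coordinating-only pod; in a Kubernetes deployment like this one that usually means the pod terminated or was rescheduled rather than an Elasticsearch-level fault. A quick reachability probe, sketched in Python with only the standard library (the address is copied from the stack trace; run it from a pod on the same cluster network):

    # Hedged sketch: raw TCP probe of the transport port named in the
    # "Connection refused" above. Not part of the log; address taken from it.
    import socket

    def probe(host, port=9300, timeout=5.0):
        try:
            # create_connection resolves IPv6 literals such as the fd00:... ULAs here
            with socket.create_connection((host, port), timeout=timeout):
                print(f"{host} port {port}: reachable")
                return True
        except OSError as exc:
            print(f"{host} port {port}: {exc}")
            return False

    probe("fd00:100::b36b")  # dev-sdnrdb-coordinating-only pod from the trace above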
[2021-04-24T11:20:12,120][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r} reason: disconnected, {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} reason: disconnected], term: 5, version: 180, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r},{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}
[2021-04-24T11:20:12,424][WARN ][o.e.c.NodeConnectionsService] [dev-sdnrdb-master-1] failed to connect to {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr} (tried [1] times)
org.elasticsearch.transport.ConnectTransportException: [dev-sdnrdb-master-0][[fd00:100::b369]:9300] connect_exception
    at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:966) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[elasticsearch-core-7.9.3.jar:7.9.3]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?]
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?]
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?]
    at org.elasticsearch.common.concurrent.CompletableContext.completeExceptionally(CompletableContext.java:57) ~[elasticsearch-core-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addListener$0(Netty4TcpChannel.java:68) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608) ~[?:?]
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[?:?]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[?:?]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[?:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[?:?]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: fd00:100:0:0:0:0:0:b369/fd00:100:0:0:0:0:0:b369:9300
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[?:?]
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[?:?]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[?:?]
    ... 7 more
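Both peers are now dropped with reason: disconnected, matching the identical "Connection refused" trace against dev-sdnrdb-master-0, while the earlier ejection in this log was driven by the leader's periodic follower checks. The thresholds behind that decision can be read back from the cluster; the sketch below (same endpoint assumption as earlier) lists the cluster.fault_detection.* settings, which are standard Elasticsearch 7.x keys:

    # Hedged sketch: dump the fault-detection settings that drive
    # "followers check retry count exceeded" node-left decisions.
    # Assumption: HTTP API at ES_URL; setting keys are standard in 7.x.
    import json
    import urllib.request

    ES_URL = "http://localhost:9200"  # assumption

    url = ES_URL + "/_cluster/settings?include_defaults=true&flat_settings=true"
    with urllib.request.urlopen(url, timeout=10) as resp:
        settings = json.loads(resp.read())

    for scope in ("persistent", "transient", "defaults"):
        for key, value in sorted(settings.get(scope, {}).items()):
            if key.startswith("cluster.fault_detection."):
                print(f"{scope}: {key} = {value}")

By default a follower is removed after several consecutive failed checks, so on a network this flaky the ejections above are expected behavior rather than a misconfiguration.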
[2021-04-24T11:20:12,742][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-8vnhp}{nfJYk0toRO-0Jow8WkLGnQ}{CWrpFZZ1STOVqYmo2o7FdQ}{fd00:100:0:0:0:0:0:b36b}{[fd00:100::b36b]:9300}{r},{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{sEiT556FSiqKH-CSKnZkWA}{fd00:100:0:0:0:0:0:b369}{[fd00:100::b369]:9300}{dmr}}, term: 5, version: 180, reason: Publication{term=5, version=180}
[2021-04-24T11:23:19,860][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{IecVUWTMT8Ktswom5hSZGQ}{fd00:100:0:0:0:0:0:b373}{[fd00:100::b373]:9300}{dmr} join existing leader], term: 5, version: 181, delta: added {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{IecVUWTMT8Ktswom5hSZGQ}{fd00:100:0:0:0:0:0:b373}{[fd00:100::b373]:9300}{dmr}}
[2021-04-24T11:23:22,853][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{IecVUWTMT8Ktswom5hSZGQ}{fd00:100:0:0:0:0:0:b373}{[fd00:100::b373]:9300}{dmr}}, term: 5, version: 181, reason: Publication{term=5, version=181}
[2021-04-24T11:23:32,869][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [182] is still waiting for {dev-sdnrdb-master-0}{TN3LdveVSlSqrL6ZpHDx-w}{IecVUWTMT8Ktswom5hSZGQ}{fd00:100:0:0:0:0:0:b373}{[fd00:100::b373]:9300}{dmr} [SENT_APPLY_COMMIT]
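Note that dev-sdnrdb-master-0 rejoins with the same node id (TN3LdveVSlSqrL6ZpHDx-w) but a new transport address ([fd00:100::b373] instead of [fd00:100::b369]), consistent with the pod restarting and receiving a fresh IPv6 address while keeping its data volume. One way to confirm that pairing after such a flap, sketched under the same endpoint assumption as above, is the standard _cat/nodes API with full node ids:

    # Hedged sketch: list node id / IP pairs to confirm a restarted pod kept
    # its identity while changing address. Assumption: HTTP API at ES_URL.
    import urllib.request

    ES_URL = "http://localhost:9200"  # assumption

    url = ES_URL + "/_cat/nodes?v&h=id,ip,name&full_id=true"
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(resp.read().decode("utf-8"))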