10:49:38.98 
10:49:38.99 Welcome to the Bitnami elasticsearch container
10:49:39.00 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
10:49:39.08 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
10:49:39.09 
10:49:39.18 INFO  ==> ** Starting Elasticsearch setup **
10:49:39.50 INFO  ==> Configuring/Initializing Elasticsearch...
10:49:39.99 INFO  ==> Setting default configuration
10:49:40.09 INFO  ==> Configuring Elasticsearch cluster settings...
10:49:40.38 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-0: fd00:100::3e94 10.242.62.148, will use fd00:100::3e94
10:49:40.59 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-0: fd00:100::3e94 10.242.62.148, will use fd00:100::3e94
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
10:50:04.29 INFO  ==> ** Elasticsearch setup finished! **
10:50:04.58 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-04-14T10:50:48,889][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/4.15.0-117-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2021-04-14T10:50:48,983][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] JVM home [/opt/bitnami/java]
[2021-04-14T10:50:49,080][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-7798555118035850314, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-04-14T10:51:09,281][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [aggs-matrix-stats]
[2021-04-14T10:51:09,282][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [analysis-common]
[2021-04-14T10:51:09,282][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [geo]
[2021-04-14T10:51:09,283][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-common]
[2021-04-14T10:51:09,283][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-geoip]
[2021-04-14T10:51:09,283][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-user-agent]
[2021-04-14T10:51:09,284][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [kibana]
[2021-04-14T10:51:09,284][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-expression]
[2021-04-14T10:51:09,285][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-mustache]
[2021-04-14T10:51:09,285][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-painless]
[2021-04-14T10:51:09,285][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [mapper-extras]
[2021-04-14T10:51:09,286][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [parent-join]
[2021-04-14T10:51:09,286][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [percolator]
[2021-04-14T10:51:09,287][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [rank-eval]
[2021-04-14T10:51:09,287][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [reindex]
[2021-04-14T10:51:09,287][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [repository-url]
[2021-04-14T10:51:09,288][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [tasks]
[2021-04-14T10:51:09,288][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [transport-netty4]
[2021-04-14T10:51:09,289][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded plugin [repository-s3]
[2021-04-14T10:51:10,387][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-0] using [1] data paths, mounts [[/bitnami/elasticsearch/data (172.16.10.189:/dockerdata-nfs/dev/elastic-master-0)]], net usable_space [179.3gb], net total_space [195.8gb], types [nfs4]
[2021-04-14T10:51:10,390][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-0] heap size [123.7mb], compressed ordinary object pointers [true]
[2021-04-14T10:51:11,195][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] node name [dev-sdnrdb-master-0], node ID [ZxsDM5oETU2XXRgTLIIDtA], cluster name [sdnrdb-cluster]
[2021-04-14T10:52:03,891][INFO ][o.e.t.NettyAllocator ] [dev-sdnrdb-master-0] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2021-04-14T10:52:04,589][INFO ][o.e.d.DiscoveryModule ] [dev-sdnrdb-master-0] using discovery type [zen] and seed hosts providers [settings]
[2021-04-14T10:52:09,593][WARN ][o.e.g.DanglingIndicesState] [dev-sdnrdb-master-0] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-04-14T10:52:12,386][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] initialized
[2021-04-14T10:52:12,387][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] starting ...
[2021-04-14T10:52:13,881][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][1] overhead, spent [306ms] collecting in the last [1s]
[2021-04-14T10:52:15,184][INFO ][o.e.t.TransportService ] [dev-sdnrdb-master-0] publish_address {[fd00:100::3e94]:9300}, bound_addresses {[::]:9300}
[2021-04-14T10:52:16,892][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.62.148:9300, remoteAddress=/10.242.228.176:38722}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-14T10:52:17,577][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.62.148:9300, remoteAddress=/10.242.228.176:38770}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-14T10:52:18,583][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.62.148:9300, remoteAddress=/10.242.228.176:38786}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-14T10:52:19,200][INFO ][o.e.b.BootstrapChecks ] [dev-sdnrdb-master-0] bound or publishing to a non-loopback address, enforcing bootstrap checks [2021-04-14T10:52:24,392][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][11] overhead, spent [406ms] collecting in the last [1.3s] [2021-04-14T10:52:29,500][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}]; discovery will continue using [10.242.62.148:9300, 10.242.198.158:9300, 10.242.5.44:9300, 10.242.228.176:9300] from hosts providers and [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-04-14T10:52:39,511][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}]; discovery will continue using [10.242.62.148:9300, 10.242.198.158:9300, 10.242.5.44:9300, 10.242.228.176:9300] from hosts providers and [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-04-14T10:52:49,515][WARN ][o.e.c.c.ClusterFormationFailureHelper] 
[dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}]; discovery will continue using [10.242.62.148:9300, 10.242.198.158:9300, 10.242.5.44:9300, 10.242.228.176:9300] from hosts providers and [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-04-14T10:52:59,519][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}]; discovery will continue using [10.242.62.148:9300, 10.242.198.158:9300, 10.242.5.44:9300, 10.242.228.176:9300] from hosts providers and [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-04-14T10:53:03,942][INFO ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] setting initial configuration to VotingConfiguration{GxlOGSMxRDyO9ww8Zdsfag,ZxsDM5oETU2XXRgTLIIDtA,{bootstrap-placeholder}-dev-sdnrdb-master-2} [2021-04-14T10:53:05,259][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-0] failed to join {dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr} with 
JoinRequest{sourceNode={dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}, targetNode={dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-0][[fd00:100::3e94]:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr} at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] [2021-04-14T10:53:05,997][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 1, delta: master node changed {previous [], current [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}]}, added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T10:53:07,944][INFO ][o.e.c.c.CoordinationState] [dev-sdnrdb-master-0] cluster UUID set to [_aQox21SR7K48tcQl8RYbQ] [2021-04-14T10:53:08,946][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] master node changed {previous [], current [{dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}]}, added 
{{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 1, reason: Publication{term=2, version=1} [2021-04-14T10:53:09,180][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-hp5gh}{Q833XYo9Tk2UYlX9_IUP5Q}{TVef-0IOSKazonUnR-Qohw}{fd00:100:0:0:0:0:0:e4b0}{[fd00:100::e4b0]:9300}{r} join existing leader, {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 2, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-hp5gh}{Q833XYo9Tk2UYlX9_IUP5Q}{TVef-0IOSKazonUnR-Qohw}{fd00:100:0:0:0:0:0:e4b0}{[fd00:100::e4b0]:9300}{r}} [2021-04-14T10:53:09,383][INFO ][o.e.h.AbstractHttpServerTransport] [dev-sdnrdb-master-0] publish_address {[fd00:100::3e94]:9200}, bound_addresses {[::]:9200} [2021-04-14T10:53:09,383][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] started [2021-04-14T10:53:09,981][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-hp5gh}{Q833XYo9Tk2UYlX9_IUP5Q}{TVef-0IOSKazonUnR-Qohw}{fd00:100:0:0:0:0:0:e4b0}{[fd00:100::e4b0]:9300}{r}}, term: 2, version: 2, reason: Publication{term=2, version=2} [2021-04-14T10:53:10,399][INFO ][o.e.g.GatewayService ] [dev-sdnrdb-master-0] recovered [0] indices into cluster_state [2021-04-14T10:53:14,795][INFO ][o.e.c.s.ClusterSettings ] [dev-sdnrdb-master-0] updating [action.auto_create_index] from [true] to [false] [2021-04-14T10:53:18,001][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [networkelement-connection-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:53:30,888][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [eventlog-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:53:40,685][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] 
[faultlog-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:53:45,791][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [maintenancemode-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:53:52,584][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [connectionlog-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:53:58,087][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [guicutthrough-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:54:03,788][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [historicalperformance15min-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:54:09,186][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [historicalperformance24h-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:54:14,087][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [mediator-server-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:54:19,094][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [faultcurrent-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:54:23,689][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [inventoryequipment-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-14T10:54:29,984][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{y-7BEwEpSeOr9O1O2w03Rw}{Ye72TLwpQumEMWHlPufznQ}{fd00:100:0:0:0:0:0:c69e}{[fd00:100::c69e]:9300}{dmr} join existing leader], term: 2, version: 75, delta: added {{dev-sdnrdb-master-2}{y-7BEwEpSeOr9O1O2w03Rw}{Ye72TLwpQumEMWHlPufznQ}{fd00:100:0:0:0:0:0:c69e}{[fd00:100::c69e]:9300}{dmr}} [2021-04-14T10:54:31,866][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added 
{{dev-sdnrdb-master-2}{y-7BEwEpSeOr9O1O2w03Rw}{Ye72TLwpQumEMWHlPufznQ}{fd00:100:0:0:0:0:0:c69e}{[fd00:100::c69e]:9300}{dmr}}, term: 2, version: 75, reason: Publication{term=2, version=75} [2021-04-14T10:54:52,058][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][4]]]). [2021-04-14T11:01:41,931][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21884ms] ago, timed out [12014ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [3742] [2021-04-14T11:01:41,934][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11014ms] ago, timed out [1001ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [3787] [2021-04-14T11:01:52,486][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 135, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T11:01:54,860][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 135, reason: Publication{term=2, version=135} [2021-04-14T11:01:55,187][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][0] primary-replica resync completed with 0 operations 
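The follower_check timeout warnings and the node-left event above ("followers check retry count exceeded") mark the start of repeated master/follower flapping for dev-sdnrdb-master-1. As a rough triage aid (an illustrative sketch, not part of this deployment's tooling), the node-left entries in a log like this can be tallied per node; a node that keeps reappearing usually points at long GC pauses or network trouble on that node:

```python
import re
from collections import Counter

# Match the node name inside entries of the form seen above:
#   node-left[{<node-name>}{<node-id>}... reason: followers check retry count exceeded]
NODE_LEFT = re.compile(r"node-left\[\{([^}]+)\}")

def count_node_left(log_text: str) -> Counter:
    """Count how often the elected master removed each node."""
    return Counter(NODE_LEFT.findall(log_text))

# Sample entry shaped like the log above (identifiers copied from it).
sample = (
    "[2021-04-14T11:01:52,486][INFO ][o.e.c.s.MasterService ] "
    "[dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}"
    "{GxlOGSMxRDyO9ww8Zdsfag} reason: followers check retry count exceeded]"
)
print(count_node_left(sample))  # Counter({'dev-sdnrdb-master-1': 1})
```

Run against the full log, this would show dev-sdnrdb-master-1 leaving the cluster several times within half an hour, which is the pattern the later stale-shard warnings follow from.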
[2021-04-14T11:01:55,197][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][0] primary-replica resync completed with 0 operations [2021-04-14T11:01:55,285][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][0] primary-replica resync completed with 0 operations [2021-04-14T11:01:55,306][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][0] primary-replica resync completed with 0 operations [2021-04-14T11:01:55,489][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] primary-replica resync completed with 0 operations [2021-04-14T11:01:55,680][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][1] primary-replica resync completed with 0 operations [2021-04-14T11:01:55,682][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][1] primary-replica resync completed with 0 operations [2021-04-14T11:01:55,880][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][1] primary-replica resync completed with 0 operations [2021-04-14T11:01:55,887][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][0] primary-replica resync completed with 0 operations [2021-04-14T11:01:56,084][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [55.9s] (37 delayed shards) [2021-04-14T11:01:56,186][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][0] primary-replica resync completed with 0 operations [2021-04-14T11:01:56,587][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][1] primary-replica resync completed with 0 operations [2021-04-14T11:02:53,705][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][0] marking unavailable shards as stale: [4HzDXYK8QJW-22Vvdj_BZA] [2021-04-14T11:02:53,707][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards 
as stale: [1JQoH5FlQi6Mrn5wnZWmJg] [2021-04-14T11:02:54,562][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [0fcFo3dKQna9N0cNNiyzcw] [2021-04-14T11:02:54,563][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [7Y9_LtJfT2y_aaGB_PDW8Q] [2021-04-14T11:02:57,995][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][0] marking unavailable shards as stale: [imTuEBI0QgW6zq2FJBP9mQ] [2021-04-14T11:02:58,759][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [CTL-4cRRSrWnS8CsF-sPEA] [2021-04-14T11:02:58,760][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [BBMWPBX1TEOk_ZtShjGPdA] [2021-04-14T11:02:58,761][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [sErKTKzbTVmPCr07hlaxfw] [2021-04-14T11:03:02,107][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [9bOSFKkQQrmuYzuKV6K0xg] [2021-04-14T11:03:02,981][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][0] marking unavailable shards as stale: [uAyrTUyoQWy7ux8CaQ6csw] [2021-04-14T11:03:02,981][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][0] marking unavailable shards as stale: [A35dRkumQWenA0BA76zHiA] [2021-04-14T11:03:02,982][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [rrk5z5CKRYeNbWscX-iRSQ] [2021-04-14T11:03:06,914][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [i6HslJkPQ1iK-R8DcBIz6w] [2021-04-14T11:03:07,108][WARN ][o.e.c.r.a.AllocationService] 
[dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [OQlYx03LQse4rkPbFgkSiA] [2021-04-14T11:03:09,262][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [fqKvTxBzReGiK_9cIWvkUg] [2021-04-14T11:03:09,263][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][0] marking unavailable shards as stale: [be5VIRYZSCCH9Z1xl1s0-w] [2021-04-14T11:03:10,368][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [nHR8nP97QJydTePPennULg] [2021-04-14T11:03:12,183][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [KpIuitXPT5eJuCgK_F-Uog] [2021-04-14T11:03:13,381][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [isJYkPqKS9-K9dqVrhRRmw] [2021-04-14T11:03:15,585][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][0] marking unavailable shards as stale: [dH6kgeLATgudRw69cCBP5A] [2021-04-14T11:03:15,586][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][1] marking unavailable shards as stale: [bKkmMfa2QNantxtE-17n2w] [2021-04-14T11:03:17,178][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [BD3IfVG6QSyW5cxpYPEUKA] [2021-04-14T11:03:17,659][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [gFTRerttR1SfU3ruDITIxQ] [2021-04-14T11:03:18,515][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][1] marking unavailable shards as stale: [yiK6OHdYQXWA3uCwQRFb3Q] [2021-04-14T11:03:18,516][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][0] marking unavailable shards as stale: 
[JJvici1SSJyIK7gESaV-_A] [2021-04-14T11:03:19,259][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [2aE5_wIIQDyzkl7mCZYF3A] [2021-04-14T11:03:20,684][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [ArIa-kVcQrCvDiqm8z5dbg] [2021-04-14T11:03:21,321][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][0] marking unavailable shards as stale: [GOMDlc63Rq2V5sdsRbuYCQ] [2021-04-14T11:03:22,481][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [ynmCo9QaTN6KYHx6szpNKw] [2021-04-14T11:03:22,482][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][1] marking unavailable shards as stale: [4_sotrECT0ix_qvDJrSXcg] [2021-04-14T11:03:25,956][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [qmEJispxSEmlzaS5MkRGYA] [2021-04-14T11:03:26,806][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][0] marking unavailable shards as stale: [24_jquyGRtCvWz8OTeS0dQ] [2021-04-14T11:03:26,881][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][0] marking unavailable shards as stale: [BPp-sh50QaKh4NscInNagQ] [2021-04-14T11:03:29,181][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [1foSxgQOR_SLpfl-r506xQ] [2021-04-14T11:03:31,039][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [bnd_jQf2SJqnoX6vdLIIMQ] [2021-04-14T11:03:31,040][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][0] marking unavailable shards as stale: [l5PtnZQSQWqnrMSkdtoc6A] [2021-04-14T11:03:32,957][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] 
[networkelement-connection-v5][1] marking unavailable shards as stale: [1nN4DH3yRSKnXnCRbizLOw] [2021-04-14T11:03:34,386][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][1]]]). [2021-04-14T11:04:38,629][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 195, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T11:04:48,637][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [195] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-14T11:04:50,628][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11008ms] ago, timed out [1001ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [5248] [2021-04-14T11:05:00,047][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 195, reason: Publication{term=2, version=195} [2021-04-14T11:05:10,060][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [196] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-14T11:07:39,271][WARN 
][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22422ms] ago, timed out [12411ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [6707] [2021-04-14T11:07:39,275][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11410ms] ago, timed out [1402ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [6748] [2021-04-14T11:09:31,181][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 258, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T11:09:33,511][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-14T11:09:34,759][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 258, reason: Publication{term=2, version=258} [2021-04-14T11:09:34,908][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][4] primary-replica resync completed with 0 operations [2021-04-14T11:09:34,916][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][3] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,105][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][2] primary-replica 
resync completed with 0 operations [2021-04-14T11:09:35,187][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,217][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][3] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,386][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][4] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,387][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][3] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,387][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][2] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,483][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,488][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [55.5s] (36 delayed shards) [2021-04-14T11:09:35,585][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][4] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,680][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][2] primary-replica resync completed with 0 operations [2021-04-14T11:09:35,684][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] primary-replica resync completed with 0 operations [2021-04-14T11:10:31,781][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [taSGNi33QaqI1010wPU9kQ] [2021-04-14T11:10:32,897][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [vaKKq5Q9TxSFclO3rqbOrA] [2021-04-14T11:10:32,898][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] 
[inventoryequipment-v5][3] marking unavailable shards as stale: [TtXXlfo0Tzmc95iWE-Ra0Q] [2021-04-14T11:10:32,898][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [bsqTf5JRSRaDeBtrk-KM1Q] [2021-04-14T11:10:35,204][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [12WbEhwmRciTLj_N-CMRYA] [2021-04-14T11:10:35,420][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [X7OjIHw2Su21UM6CIn8HZg] [2021-04-14T11:10:36,387][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [Rk1Et-s6RGaFs-kYJfP7iA] [2021-04-14T11:10:36,389][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [mpRIR0WQQ86JLqzRf-mqZQ] [2021-04-14T11:10:38,297][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [IM4iLKcgQlyO8N4gcNUnbw] [2021-04-14T11:10:38,381][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [HbXI7qOqRKS2f6lhN73ETA] [2021-04-14T11:10:38,897][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [_taGCa9zQBGzOi071U300Q] [2021-04-14T11:10:40,387][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [jK72loYmTzu9HUNWAuUL9A] [2021-04-14T11:10:40,913][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [RgomEI_ST8aWWXMx8_WdbQ] [2021-04-14T11:10:41,858][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: 
[oGHB_g75R-edgQkM4_nctg] [2021-04-14T11:10:41,881][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [R-gDQhD4SmmRzqIjm4EapQ] [2021-04-14T11:10:42,386][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [Trjo6PMBTJWZYv8yMkgPAw] [2021-04-14T11:10:43,405][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [cLIv3soxSi-RCdKi2izV3w] [2021-04-14T11:10:44,202][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [3abdNIZPT9iS5PMmojcdfQ] [2021-04-14T11:10:44,203][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [bfRTCaiSToiWddXdKQvNcg] [2021-04-14T11:10:45,629][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [hQxjTIQUTHGz_UQ_r5VZ7w] [2021-04-14T11:10:46,022][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [YijNWSu2QGGzqZ2iR4Sp6A] [2021-04-14T11:10:47,359][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [ObDbhO8gRe-MrSasyrOfwg] [2021-04-14T11:10:47,360][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [Zekfu60LQxKmAIAVzYXxDw] [2021-04-14T11:10:47,715][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][1] marking unavailable shards as stale: [pHlTuNgHRFKcjHkPz3Fyag] [2021-04-14T11:10:49,110][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [voK4PE4jSsiRT3D7jB6UHw] [2021-04-14T11:10:49,597][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking 
unavailable shards as stale: [hRebfDdkS-qbqWsNIL-oeQ] [2021-04-14T11:10:49,958][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [3LRmmkmDSLKhNh-3AIX45A] [2021-04-14T11:10:49,959][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [SwXiVPv1TYG3wjc3qsakAg] [2021-04-14T11:10:52,757][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [C7AWKI1uSeaM0c31nKr_9A] [2021-04-14T11:10:52,758][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [UC0Q7xsHQa686rLr0GAt6g] [2021-04-14T11:10:54,572][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [9pUoUFo5T46eQuBI-KGnrQ] [2021-04-14T11:10:55,983][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [IXppsVS_SHuBlt8VFAz2IA] [2021-04-14T11:10:56,881][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [IE6ViSLFSXKhDben-DkW0A] [2021-04-14T11:10:59,382][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [jPeOJ8SvQgm8UPzsFeiYog] [2021-04-14T11:10:59,382][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][1] marking unavailable shards as stale: [3oXmDqgPRXWnPLhq6BJyoA] [2021-04-14T11:11:00,516][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [UuBM8K48RcG3iJxjdH8DgA] [2021-04-14T11:11:01,582][WARN ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][1125] overhead, spent [613ms] collecting in the last [1.2s] [2021-04-14T11:11:01,911][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health 
status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][4]]]). [2021-04-14T11:12:38,455][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 313, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T11:12:48,461][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [313] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-14T11:13:08,468][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 313, reason: Publication{term=2, version=313} [2021-04-14T11:13:08,485][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [313] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-14T11:13:18,501][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [314] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-14T11:13:38,528][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [314] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} 
[SENT_PUBLISH_REQUEST] [2021-04-14T11:13:38,580][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 315, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T11:13:38,756][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 315, reason: Publication{term=2, version=315} [2021-04-14T11:13:38,806][WARN ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] failed to validate incoming join request from node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}] org.elasticsearch.transport.NodeDisconnectedException: [dev-sdnrdb-master-1][[fd00:100::52c]:9300][internal:cluster/coordination/join/validate] disconnected [2021-04-14T11:15:17,932][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 316, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T11:15:19,631][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 316, reason: Publication{term=2, version=316} [2021-04-14T11:16:02,692][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [329] is still waiting for 
{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-14T11:16:22,719][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [329] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-14T11:16:29,872][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-14T11:17:14,880][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-14T11:17:20,541][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [20673ms] ago, timed out [5611ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [10220] [2021-04-14T11:19:15,918][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [373] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-14T11:22:21,931][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [31250ms] ago, timed out [21439ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [12204] [2021-04-14T11:22:21,933][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [20436ms] ago, timed out [10424ms] 
ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [12249] [2021-04-14T11:22:22,437][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17830ms] ago, timed out [2802ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [12262] [2021-04-14T11:23:07,430][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-14T11:23:11,975][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19621ms] ago, timed out [4603ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [12467] [2021-04-14T11:25:04,787][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 379, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T11:25:06,962][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 379, reason: Publication{term=2, version=379} [2021-04-14T11:25:07,084][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-04-14T11:25:07,086][INFO ][o.e.i.s.IndexShard ] 
[dev-sdnrdb-master-0] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T11:25:07,090][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T11:25:07,200][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T11:25:07,296][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T11:25:07,392][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T11:25:07,583][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57s] (37 delayed shards)
[2021-04-14T11:25:07,687][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T11:25:07,693][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T11:25:07,781][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T11:26:04,384][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 380, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:26:14,393][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [380] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:26:34,397][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 380, reason: Publication{term=2, version=380}
[2021-04-14T11:26:34,402][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [380] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:26:44,409][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [381] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:27:01,630][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [24044ms] ago, timed out [9014ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [13600]
[2021-04-14T11:27:01,635][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [27052ms] ago, timed out [12225ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [13541]
[2021-04-14T11:27:04,412][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [381] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:27:04,421][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 382, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:27:04,589][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 382, reason: Publication{term=2, version=382}
[2021-04-14T11:27:05,100][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [AP31kzVxT1mDQ4FofcN0Nw]
[2021-04-14T11:27:05,485][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [jme0xx2IQoC2vBbHXdl4_w]
[2021-04-14T11:27:05,485][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [_VfS9-TbRUWU3S2NgGMeRg]
[2021-04-14T11:27:05,486][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [aRO6qyU1R-27K3ea3CFFYA]
[2021-04-14T11:27:08,212][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [KRYHBHL6TH-arP9LUKUd8A]
[2021-04-14T11:27:08,492][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [xWxdxDN0QXuLsN7UAOX2qQ]
[2021-04-14T11:27:09,087][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [5RYnrRkqR_yWQRk6fgPn0Q]
[2021-04-14T11:27:09,087][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [LNKpGsDpRwiqKEXhcPB7BQ]
[2021-04-14T11:27:10,352][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [xnyRD6Y1QDWGJzmFGf_lKw]
[2021-04-14T11:27:10,500][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [0s0iIe69TdKygar732YcEw]
[2021-04-14T11:27:10,981][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [x2cOUPRpSamATWuaMs-eVQ]
[2021-04-14T11:27:11,815][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [XTgEZ0IYR26QTZ2gyBY_BQ]
[2021-04-14T11:27:12,101][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [lfG2N5xfRXKxfYhixQCM0g]
[2021-04-14T11:27:13,543][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 404, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:27:23,546][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [404] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:27:43,546][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 404, reason: Publication{term=2, version=404}
[2021-04-14T11:27:43,549][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [404] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:27:43,550][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [Aa5QUxptSF6O1acHIIbgwg]
[2021-04-14T11:27:43,551][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking unavailable shards as stale: [DVCafLb0TuWMARzsHt1u2Q]
[2021-04-14T11:27:53,557][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [405] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T11:27:56,296][WARN ][o.e.i.c.IndicesClusterStateService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [historicalperformance15min-v5][1]: Recovery failed from {dev-sdnrdb-master-2}{y-7BEwEpSeOr9O1O2w03Rw}{Ye72TLwpQumEMWHlPufznQ}{fd00:100:0:0:0:0:0:c69e}{[fd00:100::c69e]:9300}{dmr} into {dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.onException(PeerRecoveryTargetService.java:653) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.handleException(PeerRecoveryTargetService.java:587) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:235) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-2][[fd00:100::c69e]:9300][internal:index/shard/recovery/start_recovery]
Caused by: java.lang.IllegalStateException: no local checkpoint tracking information available
    at org.elasticsearch.index.seqno.ReplicationTracker.initiateTracking(ReplicationTracker.java:1158) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.initiateTracking(IndexShard.java:2299) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$13(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$runUnderPrimaryPermit$19(RecoverySourceHandler.java:385) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:89) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.runUnderPrimaryPermit(RecoverySourceHandler.java:363) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$14(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:112) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:144) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:127) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.StepListener.innerOnResponse(StepListener.java:62) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:40) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$prepareTargetForTranslog$30(RecoverySourceHandler.java:648) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.RetryableAction$RetryingListener.onResponse(RetryableAction.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:54) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1162) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:213) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2021-04-14T11:28:18,973][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][3]]]).
[2021-04-14T11:30:46,430][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15215ms] ago, timed out [5208ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [15576]
[2021-04-14T11:33:56,302][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [25262ms] ago, timed out [15253ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [16477]
[2021-04-14T11:33:58,449][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16453ms] ago, timed out [6440ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [16522]
[2021-04-14T11:36:16,803][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 467, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:36:18,053][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 467, reason: Publication{term=2, version=467}
[2021-04-14T11:36:18,111][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T11:36:18,116][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [58.6s] (37 delayed shards)
[2021-04-14T11:36:18,281][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T11:36:18,393][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T11:37:17,311][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [RJR2K5pQRf6iahLUSelFFg]
[2021-04-14T11:37:17,893][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [QcfrQuvoR-uNIpyA9TSrKQ]
[2021-04-14T11:37:17,894][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [Dbv7N1s5TBmq4H_tVIX1bA]
[2021-04-14T11:37:17,894][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [dBkvqYxkR8ar7q7Ff4DMVQ]
[2021-04-14T11:37:19,854][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [sVCiJz0wTticaocpvHNEKg]
[2021-04-14T11:37:20,557][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [_l-sBbPWTtGWVlC_8YOUMA]
[2021-04-14T11:37:20,558][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [j8SSCwqQSlGxnUI7bOvSgw]
[2021-04-14T11:37:20,559][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [U3NuT4hfQrWAQKRVoWwZfA]
[2021-04-14T11:37:23,086][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [Rzmems5yS52JUL43DFHrNA]
[2021-04-14T11:37:23,554][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [sF41Bu4yTS-EUOAmglu5_A]
[2021-04-14T11:37:23,555][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [JTV2NPYcRoygJqcijz-ylw]
[2021-04-14T11:37:23,555][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [jzQvTtJXRny-HAENF-63Ig]
[2021-04-14T11:37:25,582][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [7tjgp18sQ4K0xd4UcgPsSA]
[2021-04-14T11:37:26,489][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [q42uiZc0TOm-It-g3So50A]
[2021-04-14T11:37:26,489][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [8ydynYNpTPSGjbPZ0F-b0g]
[2021-04-14T11:37:26,489][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking unavailable shards as stale: [ZD1n03AvTGeKwoGSO6sIEA]
[2021-04-14T11:37:28,384][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [am5OTYbtRrCjyHgkYxfcmw]
[2021-04-14T11:37:28,386][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [19TBVmdiR0e6IZMexIDRKw]
[2021-04-14T11:37:28,604][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [CXRvK2h_TUKYMeQYh2toCA]
[2021-04-14T11:37:29,352][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [ADt5vpJtQ2C0OcPZOwA06w]
[2021-04-14T11:37:29,793][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [w8Lr3GhVTCiw7gpfp4MVHg]
[2021-04-14T11:37:30,533][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][1] marking unavailable shards as stale: [Zx_A3ppAR7qMzIp2mhy1DQ]
[2021-04-14T11:37:30,534][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [1urqHjtcRXm_ST10DE_-sg]
[2021-04-14T11:37:31,003][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [8QjNFbcVR_uR8QG-ZyKSug]
[2021-04-14T11:37:32,071][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [QKAVXM0oSyaOYH1mZbdOGg]
[2021-04-14T11:37:32,384][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [EMgrniGwSS2gTY2X-rDECw]
[2021-04-14T11:37:32,991][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [0PNwXJgiTDa3ufvVfIpZKA]
[2021-04-14T11:37:32,992][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [KfciNSgPSfyqt-fast7iSg]
[2021-04-14T11:37:33,692][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [lKq1GR98SmWoFhFG3W9uIA]
[2021-04-14T11:37:35,026][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [FF7E5zp8SyeZBbzK_v0yfA]
[2021-04-14T11:37:35,500][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [rV7ZwCc2RUGUXsmkvmmw8g]
[2021-04-14T11:37:36,395][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [H4JuDktoSUeooPzPO5CCQA]
[2021-04-14T11:37:36,395][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][1] marking unavailable shards as stale: [wAV0xrB9Q_WIvCVXgzDyOg]
[2021-04-14T11:37:36,685][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [J12VTDwzRxCcBnPXUPrjRw]
[2021-04-14T11:37:39,082][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [hh68m5uERpytUPwFEFDN1Q]
[2021-04-14T11:37:39,882][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [ldkgIGSUSn60x4_L6rCZrQ]
[2021-04-14T11:37:41,548][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [Qq2I0V2gT9y3-0eP-FbgLg]
[2021-04-14T11:37:41,890][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][3]]]).
[2021-04-14T11:38:24,081][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 527, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:38:34,085][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [527] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:38:44,268][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19224ms] ago, timed out [9210ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [18420]
[2021-04-14T11:38:49,755][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 527, reason: Publication{term=2, version=527}
[2021-04-14T11:38:59,765][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [528] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T11:39:58,169][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [557] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:40:15,775][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T11:40:18,176][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [557] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:40:28,191][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [558] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:40:48,196][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [558] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:40:58,206][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [559] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:41:18,217][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [559] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:41:23,067][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11418ms] ago, timed out [1401ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [19570]
[2021-04-14T11:41:28,222][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [560] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:41:48,177][WARN ][o.e.c.c.LagDetector ] [dev-sdnrdb-master-0] node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}] is lagging at cluster state version [556], although publication of cluster state version [557] completed [1.5m] ago
[2021-04-14T11:41:48,238][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [560] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T11:41:48,244][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: lagging], term: 2, version: 561, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:41:49,057][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 561, reason: Publication{term=2, version=561}
[2021-04-14T11:41:49,114][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T11:41:49,187][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T11:41:49,200][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [59s] (19 delayed shards)
[2021-04-14T11:41:49,203][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T11:41:49,289][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T11:41:49,600][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [MHh-_a1pR6SSS8wkjWDNjA]
[2021-04-14T11:41:49,891][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [omp79AQRQlqyzbJF1Q-3SQ]
[2021-04-14T11:42:48,509][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [DiODOEJaRD21oj9uzeW2NA]
[2021-04-14T11:42:48,785][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [mRfQ6Ru9RHCwPWKp5RPxhg]
[2021-04-14T11:42:49,418][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [plcNa2kKTB2kg6SDkdi22g]
[2021-04-14T11:42:49,418][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [g1RrGgX3QHmyPKFElTgSSw]
[2021-04-14T11:42:51,008][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [wHYSf9LcQdmQUobBP10Sew]
[2021-04-14T11:42:51,008][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [TQI-NSv3TUK1O0UxDWZTcA]
[2021-04-14T11:42:51,315][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [pT7iwlKBTpuqkZhNhRCBRw]
[2021-04-14T11:42:52,385][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [FxcbJNeXT5m_FvDudn0vvQ]
[2021-04-14T11:42:52,626][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [O7yoCG0JS_KNegpx-ZzLNw]
[2021-04-14T11:42:54,555][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [MMOWB3mRSsiYPKsEGTRE8A]
[2021-04-14T11:42:54,556][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [0EEeNE8JSXGS-_anjKxFng]
[2021-04-14T11:42:55,022][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [zNeXrubsSp2bUXh80XlK8Q]
[2021-04-14T11:42:56,903][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [lVHLn8a0TtutR87vW_HInw]
[2021-04-14T11:42:57,762][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [t2Dzox_RQaq_Q6AbEM75eQ]
[2021-04-14T11:42:59,185][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [xKTWmhG9S4GgX2C7JjGnhQ]
[2021-04-14T11:43:01,284][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [MMJ7DtuJQduAmWpZnB0j_Q]
[2021-04-14T11:43:01,664][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [XF2f-1FKQ7aF3OGle_HxAg]
[2021-04-14T11:43:02,564][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][4]]]).
[2021-04-14T11:48:16,633][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 596, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:48:26,641][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [596] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:48:46,642][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 596, reason: Publication{term=2, version=596}
[2021-04-14T11:48:46,653][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [596] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:48:56,683][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [597] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:49:16,707][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [597] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:49:16,715][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 598, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:49:16,858][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 598, reason: Publication{term=2, version=598}
[2021-04-14T11:53:10,929][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 599, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:53:20,935][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [599] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:53:36,844][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18420ms] ago, timed out [8413ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [23060]
[2021-04-14T11:53:40,935][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 599, reason: Publication{term=2, version=599}
[2021-04-14T11:53:40,938][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [599] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T11:53:50,939][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [600] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:53:58,337][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17446ms] ago, timed out [2402ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [23154]
[2021-04-14T11:53:58,408][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16626ms] ago, timed out [6606ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [23166]
[2021-04-14T11:54:01,423][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T11:54:03,456][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17017ms] ago, timed out [2001ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [23189]
[2021-04-14T11:54:10,947][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [600] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T11:54:20,958][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [601] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:54:33,029][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [27435ms] ago, timed out [17419ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [23281]
[2021-04-14T11:54:33,031][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16418ms] ago, timed out [6408ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [23329]
[2021-04-14T11:54:40,982][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [601] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T11:55:06,038][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 602, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T11:55:06,153][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 602, reason: Publication{term=2, version=602}
[2021-04-14T12:03:53,253][WARN ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] failed to validate incoming join request from node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-1][[fd00:100::52c]:9300][internal:cluster/coordination/join/validate] request_id [25522] timed out after [60070ms]
	at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-14T12:04:38,564][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [105329ms] ago, timed out [45259ms] ago, action [internal:cluster/coordination/join/validate], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [25522]
[2021-04-14T12:07:09,157][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 603, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:07:19,165][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [603] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-master-2}{y-7BEwEpSeOr9O1O2w03Rw}{Ye72TLwpQumEMWHlPufznQ}{fd00:100:0:0:0:0:0:c69e}{[fd00:100::c69e]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-hp5gh}{Q833XYo9Tk2UYlX9_IUP5Q}{TVef-0IOSKazonUnR-Qohw}{fd00:100:0:0:0:0:0:e4b0}{[fd00:100::e4b0]:9300}{r} [SENT_APPLY_COMMIT]
[2021-04-14T12:07:39,164][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 603, reason: Publication{term=2, version=603}
[2021-04-14T12:07:39,170][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [603] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-master-2}{y-7BEwEpSeOr9O1O2w03Rw}{Ye72TLwpQumEMWHlPufznQ}{fd00:100:0:0:0:0:0:c69e}{[fd00:100::c69e]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-hp5gh}{Q833XYo9Tk2UYlX9_IUP5Q}{TVef-0IOSKazonUnR-Qohw}{fd00:100:0:0:0:0:0:e4b0}{[fd00:100::e4b0]:9300}{r} [SENT_APPLY_COMMIT]
[2021-04-14T12:07:49,176][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [604] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:07:51,360][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [36637ms] ago, timed out [26629ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [26624]
[2021-04-14T12:08:09,198][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [604] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:08:09,202][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 605, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:08:09,335][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 605, reason: Publication{term=2, version=605}
[2021-04-14T12:25:17,152][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 606, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:25:22,235][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 606, reason: Publication{term=2, version=606}
[2021-04-14T12:25:32,294][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [607] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T12:28:12,894][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10809ms] ago, timed out [801ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [32640]
[2021-04-14T12:28:58,434][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T12:28:59,119][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15824ms] ago, timed out [800ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [32817]
[2021-04-14T12:29:25,720][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21430ms] ago, timed out [11412ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [32907]
[2021-04-14T12:29:25,723][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10411ms] ago, timed out [400ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [32948]
[2021-04-14T12:32:56,060][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19015ms] ago, timed out [9006ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [33829]
[2021-04-14T12:33:31,119][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T12:33:41,686][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 665, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:33:43,430][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [33793ms] ago, timed out [23775ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [33951]
[2021-04-14T12:33:43,432][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22775ms] ago, timed out [12760ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [33994]
[2021-04-14T12:33:43,434][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11760ms] ago, timed out [1752ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [34035]
[2021-04-14T12:33:43,637][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [42603ms] ago, timed out [27589ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [33921]
[2021-04-14T12:33:43,762][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 665, reason: Publication{term=2, version=665}
[2021-04-14T12:33:43,880][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T12:33:43,891][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T12:33:43,987][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T12:33:44,081][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T12:33:44,180][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T12:33:44,198][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T12:33:44,201][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T12:33:44,205][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57.4s] (36 delayed shards)
[2021-04-14T12:33:44,382][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T12:33:44,395][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T12:34:26,383][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 666, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:34:36,389][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [666] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:34:56,394][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 666, reason: Publication{term=2, version=666}
[2021-04-14T12:34:56,398][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [666] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:35:06,490][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [667] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:35:20,593][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19428ms] ago, timed out [4406ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [34543]
[2021-04-14T12:35:20,595][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [24293ms] ago, timed out [9212ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [34481]
[2021-04-14T12:35:26,494][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [667] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:35:26,499][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 668, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:35:26,669][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 668, reason: Publication{term=2, version=668}
[2021-04-14T12:35:26,781][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [je4cuY-yQNSobl9p9MJJwA]
[2021-04-14T12:35:27,595][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [uVR9puVmTqeFLWxNeXQNRA]
[2021-04-14T12:35:27,596][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [wyx3Ra3_StC_BAL4sMyI7Q]
[2021-04-14T12:35:27,596][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [9yMS5PZWRVu_7nuP9AOCtA]
[2021-04-14T12:35:29,304][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [jsF43BxATeaohQhrRneMrw]
[2021-04-14T12:35:29,522][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [0bcY4HKMQnuFVhjGzUPCug]
[2021-04-14T12:35:29,522][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [-lrPlHFJRSW7St6maqTdPw]
[2021-04-14T12:35:29,881][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [Uj_YKM25Qbmu7z9AnjOrOg]
[2021-04-14T12:35:31,498][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [tXYXWo5aRn6JGIwXZQO1zw]
[2021-04-14T12:35:31,696][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [kcS5IyKlR-i-nd4dHK5DeQ]
[2021-04-14T12:35:31,697][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [WwkNtv-iT1Cp3eVUAdCVjQ]
[2021-04-14T12:35:32,899][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [ptGZoW2rQEmdSwVfRrwDZQ]
[2021-04-14T12:35:33,983][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [k-gZfXcqTiKNk-N-IHb8vA]
[2021-04-14T12:35:37,384][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [2CWeLnSQRX-NypP7Ff0rKA]
[2021-04-14T12:35:37,385][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [fFSVbes6SzmMc2HyVicOdQ]
[2021-04-14T12:35:38,381][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [Gd-_n2dxRo-1aSgxkIMnBA]
[2021-04-14T12:35:39,003][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [L41tInSeQzajTuV-81AA1Q]
[2021-04-14T12:35:39,879][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [hTfqYTDJSR-_jZfXCO27Aw]
[2021-04-14T12:35:39,880][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [O4E4pwgPSlm_DjVDY1VYHg]
[2021-04-14T12:35:40,093][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [jitL-C7NSeWzOoFdBJdDqg]
[2021-04-14T12:35:41,710][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [Cyf4DN59TkS79GXdSA03kQ]
[2021-04-14T12:35:42,183][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [ts05O6R3T9y5E27VFjiasg]
[2021-04-14T12:35:43,484][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][1] marking unavailable shards as stale: [elYSDBSXSD2UMfmAFCpm3g]
[2021-04-14T12:35:43,485][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [Az4v-2PHRZ2caBn-r-co0A]
[2021-04-14T12:35:43,702][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [xJ0PVZFCSzeeMZD8jpgSfg]
[2021-04-14T12:35:46,288][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [KQwwCTGtSZqUM-_lYdbwTA]
[2021-04-14T12:35:46,506][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [AnGEjdLKRaqOS1-lWMfAfg]
[2021-04-14T12:35:47,348][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [1UWTXfYtSNWB8arayUtddQ]
[2021-04-14T12:35:47,349][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [Iymsec9PS5ui4xhhjaVzYg]
[2021-04-14T12:35:47,612][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [FKaK5_v_QSKD4xUcOIr_xg]
[2021-04-14T12:35:47,890][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [hTK4LFleTWKawY8jnASTUw]
[2021-04-14T12:35:49,113][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][1] marking unavailable shards as stale: [TriK_Xq0Rj6pSf4Wy2LcQQ]
[2021-04-14T12:35:49,278][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [zKAEZGI3TcSkIbAOvhc9ZA]
[2021-04-14T12:35:50,282][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [JUh2t6McQBi0gKrea3nLsA]
[2021-04-14T12:35:50,282][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [k2tln6MURPWyYuN5EGMPTA]
[2021-04-14T12:35:51,736][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [xwBp3p8_R9-Hh1fr5D0RXw]
[2021-04-14T12:35:52,136][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][4]]]).
[2021-04-14T12:38:18,427][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 726, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:38:28,431][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [726] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:38:40,974][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19440ms] ago, timed out [9429ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [35937]
[2021-04-14T12:38:41,865][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 726, reason: Publication{term=2, version=726}
[2021-04-14T12:39:30,886][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [730] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T12:39:50,909][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [730] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T12:40:52,782][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [732] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T12:41:11,920][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [734] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:41:23,921][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T12:41:31,923][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [734] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:41:41,926][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [735] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:42:01,935][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [735] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:42:11,942][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [736] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:42:40,904][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [744] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:46:28,504][WARN ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][6847] overhead, spent [821ms] collecting in the last [1.1s]
[2021-04-14T12:46:35,538][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12997ms] ago, timed out [3005ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [38938]
[2021-04-14T12:48:03,411][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T12:48:07,174][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18630ms] ago, timed out [3804ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [39302]
[2021-04-14T12:48:48,418][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T12:48:52,237][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [26837ms] ago, timed out [16816ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [39457]
[2021-04-14T12:48:52,238][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15816ms] ago, timed out [5805ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [39498]
[2021-04-14T12:48:53,040][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19621ms] ago, timed out [4607ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [39479]
[2021-04-14T12:49:37,246][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T12:49:39,925][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17621ms] ago, timed out [2602ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [39683]
[2021-04-14T12:50:35,431][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15420ms] ago, timed out [5606ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [39930]
[2021-04-14T12:50:49,969][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10413ms] ago, timed out [400ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40001]
[2021-04-14T12:51:11,008][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T12:51:20,760][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19223ms] ago, timed out [9209ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40087]
[2021-04-14T12:51:48,749][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [27047ms] ago, timed out [17025ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40157]
[2021-04-14T12:51:48,750][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16023ms] ago, timed out [6006ms] ago, action [internal:coordination/fault_detection/follower_check], node 
[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40194] [2021-04-14T12:51:49,064][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [68096ms] ago, timed out [53075ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40012] [2021-04-14T12:52:32,947][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [13415ms] ago, timed out [3405ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40378] [2021-04-14T12:52:34,591][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15216ms] ago, timed out [201ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40379] [2021-04-14T12:53:54,455][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19225ms] ago, timed out [9218ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40708] [2021-04-14T12:54:44,021][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17827ms] ago, timed out [2804ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [40922] [2021-04-14T12:55:28,508][INFO 
][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 786, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}} [2021-04-14T12:55:29,081][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-14T12:55:30,358][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 786, reason: Publication{term=2, version=786} [2021-04-14T12:55:30,480][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][4] primary-replica resync completed with 0 operations [2021-04-14T12:55:30,493][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations [2021-04-14T12:55:30,584][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][4] primary-replica resync completed with 0 operations [2021-04-14T12:55:30,593][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][3] primary-replica resync completed with 0 operations [2021-04-14T12:55:30,788][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-04-14T12:55:30,795][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][4] primary-replica resync completed with 0 operations [2021-04-14T12:55:30,887][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][2] primary-replica resync completed with 0 operations [2021-04-14T12:55:30,902][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] 
primary-replica resync completed with 0 operations [2021-04-14T12:55:31,087][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57.4s] (37 delayed shards) [2021-04-14T12:55:31,184][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][3] primary-replica resync completed with 0 operations [2021-04-14T12:55:31,185][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][2] primary-replica resync completed with 0 operations [2021-04-14T12:55:31,190][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] primary-replica resync completed with 0 operations [2021-04-14T12:56:28,974][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [_5SX1LKsQH-kIANoBl-eww] [2021-04-14T12:56:29,203][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [E2lV1-ayQa6spuGBCBUEfA] [2021-04-14T12:56:29,203][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [JjND3WkiRl2vdKYJYa15kQ] [2021-04-14T12:56:30,513][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [cbynWDv1T_mjMLig7sSPvA] [2021-04-14T12:56:30,795][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [nOSG1GiKQCe5Ox6fJ3qVbA] [2021-04-14T12:56:31,566][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [S83HGCRqQv2E_ZD_5YDpUg] [2021-04-14T12:56:31,567][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [-4HJQtHwRJ6Ml3Y79XGkXg] [2021-04-14T12:56:32,096][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as 
stale: [R55Gbn5eRCyCnUyNWZ7rHg] [2021-04-14T12:56:32,666][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [3knjcO6lRyi19LpirDtWUw] [2021-04-14T12:56:32,957][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [h097urxtSfWg8qfQJDvQyw] [2021-04-14T12:56:33,482][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [N5hMZzvbSwSseHPhagSwKQ] [2021-04-14T12:56:33,483][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [9KQLpUssTdyUFxOsDklLAw] [2021-04-14T12:56:36,090][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [nPE59QymQyChSXLhV-lBcw] [2021-04-14T12:56:36,801][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [4O7K9SjhRp6hxoK9VjoRCQ] [2021-04-14T12:56:38,462][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [B9jF-aIGRJmtAXlIgg0sYQ] [2021-04-14T12:56:38,462][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [iYOvb6KZRiu1E6h5BwrJbQ] [2021-04-14T12:56:38,798][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [cwE6uGwpQueCINmSRN4h6A] [2021-04-14T12:56:39,383][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [SawtfIBJQ0uuxXbhT5jyfQ] [2021-04-14T12:56:40,106][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [Gg0jl5aTQ3KWz9TomDJpug] [2021-04-14T12:56:40,106][WARN 
][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [FtHp397pRMStO6TcNLGx8g] [2021-04-14T12:56:40,495][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [qGrqObysSX6dCnbB2ewtsw] [2021-04-14T12:56:41,458][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [cNfStns1R3SUCPpT-3R9Pg] [2021-04-14T12:56:41,702][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [5FulRs3xRU-5gj0Wx87Myg] [2021-04-14T12:56:43,281][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [IoTgsIV2TGy_z_vyG5yK1w] [2021-04-14T12:56:43,282][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [1HtGjz1uSpuITNw1gm-yFg] [2021-04-14T12:56:43,687][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [5kakrJktQGWdciQB6vLNNQ] [2021-04-14T12:56:44,533][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [d1UjzSiUQ8upxkBv9O6J7Q] [2021-04-14T12:56:45,404][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][1] marking unavailable shards as stale: [B5weRoztT9eE1Rp9KzvvWQ] [2021-04-14T12:56:46,723][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [qNcGfkF6TdW5dquFPfSDOg] [2021-04-14T12:56:46,723][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [JIfE7hc9SNiEGGwBFMn0fw] [2021-04-14T12:56:47,255][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [11w_mm2qTg6Q0q747FmJ-g] 
[2021-04-14T12:56:48,283][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [ryjnqgPJQCeDEVeovOFPdg] [2021-04-14T12:56:48,790][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [95DSCXBBQwGcDjxFVZFQEQ] [2021-04-14T12:56:49,077][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [Z8mGpu3oSgChbXXsr5HvXw] [2021-04-14T12:56:49,078][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][1] marking unavailable shards as stale: [uBLRcYGVTxambsDv9QbKVw] [2021-04-14T12:56:50,003][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [KJctgiPVRaCZvvk4DsxQFQ] [2021-04-14T12:56:50,257][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [RWiKxtJjQlqi8aClNsQexw] [2021-04-14T12:56:50,997][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][3]]]). 
[2021-04-14T12:58:48,784][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 848, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:58:58,790][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [848] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:59:18,790][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 848, reason: Publication{term=2, version=848}
[2021-04-14T12:59:18,793][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [848] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:59:28,798][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [849] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:59:46,253][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T12:59:48,831][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [849] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T12:59:48,834][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 850, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T12:59:48,899][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 850, reason: Publication{term=2, version=850}
[2021-04-14T13:17:53,831][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 851, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:17:59,352][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 851, reason: Publication{term=2, version=851}
[2021-04-14T13:23:17,169][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T13:23:20,183][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 910, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:23:21,861][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 910, reason: Publication{term=2, version=910}
[2021-04-14T13:23:21,912][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [maintenancemode-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T13:23:21,986][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [faultcurrent-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T13:23:21,997][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T13:23:22,092][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T13:23:22,184][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T13:23:22,193][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [mediator-server-v5][1] primary-replica resync completed with 0 operations
[2021-04-14T13:23:22,204][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57.9s] (37 delayed shards)
[2021-04-14T13:23:22,295][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [eventlog-v5][4] primary-replica resync completed with 0 operations
[2021-04-14T13:23:22,304][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T13:24:20,602][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [gw7VIpB7RVmQ-03zMz0WyA]
[2021-04-14T13:24:20,894][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [k20uR7rfQYmGa3FWcy79-A]
[2021-04-14T13:24:20,895][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [7ep6CLYwQ0ydSD_9qGa9aQ]
[2021-04-14T13:24:20,895][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [Bhb9njiOTJmlkDISJWDKxg]
[2021-04-14T13:24:23,005][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [BgZZp8ZkQPS1ntT7HDLu9A]
[2021-04-14T13:24:23,209][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [-Su7ny9rSsm5lIRAClIOLA]
[2021-04-14T13:24:24,114][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [R4UE-KSoSnajJYxGTcoBUg]
[2021-04-14T13:24:24,114][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [Hgr8bLN_T62bfDeMHhavTA]
[2021-04-14T13:24:24,675][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [xw890hTASI65R1BKlar6Bw]
[2021-04-14T13:24:25,582][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [Xwh23WZHSYaqo8PnA7f_MA]
[2021-04-14T13:24:27,308][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [4JGofcRaQpG-tttxboqAMA]
[2021-04-14T13:24:27,309][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [Gca3S65RT8uufj0SMD4Ifw]
[2021-04-14T13:24:27,309][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [NoW0_VvMTdGJeqh-NWUdHw]
[2021-04-14T13:24:27,654][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking unavailable shards as stale: [elnucXHkTeuMOXj-n6PBHg]
[2021-04-14T13:24:29,714][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [J3D6uvOETdK-cAL-349AaA]
[2021-04-14T13:24:30,683][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 934, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:24:32,596][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 934, reason: Publication{term=2, version=934}
[2021-04-14T13:24:34,220][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [dUJ5U58TQQG4OT6oYv96FQ]
[2021-04-14T13:24:34,221][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [Ozfb5hdpR-yeZYbL8WqKgw]
[2021-04-14T13:24:34,221][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [HGfBrKUWTBW3gDmGSpYb4w]
[2021-04-14T13:24:34,932][WARN ][o.e.i.c.IndicesClusterStateService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [guicutthrough-v5][2]: Recovery failed from {dev-sdnrdb-master-2}{y-7BEwEpSeOr9O1O2w03Rw}{Ye72TLwpQumEMWHlPufznQ}{fd00:100:0:0:0:0:0:c69e}{[fd00:100::c69e]:9300}{dmr} into {dev-sdnrdb-master-0}{ZxsDM5oETU2XXRgTLIIDtA}{9JT9cB29Rcyo13oOG9Jx1Q}{fd00:100:0:0:0:0:0:3e94}{[fd00:100::3e94]:9300}{dmr}
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.onException(PeerRecoveryTargetService.java:653) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.handleException(PeerRecoveryTargetService.java:587) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:235) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-2][[fd00:100::c69e]:9300][internal:index/shard/recovery/start_recovery]
Caused by: java.lang.IllegalStateException: no local checkpoint tracking information available
	at org.elasticsearch.index.seqno.ReplicationTracker.initiateTracking(ReplicationTracker.java:1158) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.index.shard.IndexShard.initiateTracking(IndexShard.java:2299) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$13(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$runUnderPrimaryPermit$19(RecoverySourceHandler.java:385) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:89) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.runUnderPrimaryPermit(RecoverySourceHandler.java:363) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$14(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:112) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:144) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:127) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.StepListener.innerOnResponse(StepListener.java:62) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:40) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$prepareTargetForTranslog$30(RecoverySourceHandler.java:648) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.support.RetryableAction$RetryingListener.onResponse(RetryableAction.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:54) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1162) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:213) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2021-04-14T13:24:48,714][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][3]]]).
[2021-04-14T13:26:00,735][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11612ms] ago, timed out [1604ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [51407]
[2021-04-14T13:26:30,131][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12014ms] ago, timed out [2002ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [51528]
[2021-04-14T13:26:48,734][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T13:27:17,535][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19835ms] ago, timed out [9825ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [51701]
[2021-04-14T13:27:33,738][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T13:27:46,029][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [27437ms] ago, timed out [17426ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [51779]
[2021-04-14T13:27:46,035][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16425ms] ago, timed out [6406ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [51832]
[2021-04-14T13:27:50,653][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [32041ms] ago, timed out [17022ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [51780]
[2021-04-14T13:28:41,283][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 995, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:28:42,258][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 995, reason: Publication{term=2, version=995}
[2021-04-14T13:28:42,313][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [faultcurrent-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T13:28:42,318][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [58.9s] (36 delayed shards)
[2021-04-14T13:28:42,384][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [mediator-server-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T13:28:42,393][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [mediator-server-v5][2] primary-replica resync completed with 0 operations
[2021-04-14T13:28:42,480][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] primary-replica resync completed with 0 operations
[2021-04-14T13:29:42,173][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [fYiDMUGqQdCdlMQnXmphkg]
[2021-04-14T13:29:43,014][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [PQoDs-U0RXuporYgz-vh7A]
[2021-04-14T13:29:43,015][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [nRj-YWUJSYm0ro_q3ultqg]
[2021-04-14T13:29:43,015][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [Kb8d0V0AT0OJbhdo5EgraQ]
[2021-04-14T13:29:45,056][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [BuTYXedJRxG701uKOn1H7A]
[2021-04-14T13:29:45,228][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [KqGv1rRiRt2csSCbtD3fTQ]
[2021-04-14T13:29:45,909][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [jZY36F-4RwCBy7q-QFPr4A]
[2021-04-14T13:29:45,909][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [guKwMBI3RouiEaRRqmSKdA]
[2021-04-14T13:29:46,914][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [C51SsjHwQjyuNZxNtwH6Jg]
[2021-04-14T13:29:47,185][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [QuB492XsRfawGs75ZEr4Gw]
[2021-04-14T13:29:48,028][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [xk5PPRbtQde-38ziAoZqRg]
[2021-04-14T13:29:48,029][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [iY3ISDqaTXGpdx_DfGhy-w]
[2021-04-14T13:29:48,281][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [jEYDCD2PSQODzlyzXZTV_Q]
[2021-04-14T13:29:49,723][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [umNpTypZSC6xUKnIm95TUw]
[2021-04-14T13:29:49,994][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [hLw_0LiXQUaKJxMXtg5Tlg]
[2021-04-14T13:29:50,381][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [g_BD9oaaQ-6GlDJJzVvW3Q]
[2021-04-14T13:29:51,008][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [PLbeIkkhQMGga6KpQayJvA]
[2021-04-14T13:29:51,008][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [frAXmyopS92CHzxK2m_b2A]
[2021-04-14T13:29:51,494][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [EGmhLU21RKKYUePY2NiHcg]
[2021-04-14T13:29:52,311][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][1] marking unavailable shards as stale: [Tw6bnqoLRUOKA7GD47Ov2w]
[2021-04-14T13:29:52,580][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [YItU7YlyR2mSF0FKkWa9bg]
[2021-04-14T13:29:53,197][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [NeeLWz8oRl-i4bcpIqlAkw]
[2021-04-14T13:29:53,197][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [2Q32ScW7TrK8EDyv4_es_A]
[2021-04-14T13:29:53,692][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [sOMSy65OQcyc5S9Ds86_0g]
[2021-04-14T13:29:54,921][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [joz1kTJdQXCYg38pRQ4e5A]
[2021-04-14T13:29:55,182][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [Ps5ZYZ-3Sr-7waLFrSSW3w]
[2021-04-14T13:29:56,618][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [yswLzhuwREun6uFT2zqCgw]
[2021-04-14T13:29:56,684][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [-8W40kRZQkO-zd_Fv2PVxQ]
[2021-04-14T13:29:57,958][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [NWBVxy6gS72SWSg70MtTUw]
[2021-04-14T13:29:58,683][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [VP6tE8hUSqu-zcZPPGidRg]
[2021-04-14T13:29:59,686][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [ZCY7O_EET1WDJEv8p-iX1w]
[2021-04-14T13:29:59,687][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [_eSj4sU_SFC7NdzVfNU_Lg]
[2021-04-14T13:30:00,092][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [Se-2JQqVTgWtg4IPX_2DOw]
[2021-04-14T13:30:00,787][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [UdP4aOuqS3S3mgFSeVSr_A]
[2021-04-14T13:30:01,804][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [hAMEvxoRRNqYlVnREx6FhA]
[2021-04-14T13:30:02,114][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [WisF9dUFQtie7cUy4uqCAg]
[2021-04-14T13:30:02,784][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][2]]]).
[2021-04-14T13:33:56,037][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 1059, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:34:06,046][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1059] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T13:34:26,047][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 1059, reason: Publication{term=2, version=1059}
[2021-04-14T13:34:26,054][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1059] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T13:34:36,060][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1060] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T13:34:41,054][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-04-14T13:34:45,034][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19036ms] ago, timed out [4004ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [54206]
[2021-04-14T13:34:56,087][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1060] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T13:34:56,096][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 1061, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:34:56,354][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 1061, reason: Publication{term=2, version=1061}
[2021-04-14T13:35:29,440][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 1062, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:35:39,443][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1062] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T13:35:59,444][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 1062, reason: Publication{term=2, version=1062}
[2021-04-14T13:35:59,449][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1062] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T13:36:09,455][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1063] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T13:36:29,475][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1063] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-14T13:36:36,302][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 1064, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:36:36,461][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 1064, reason: Publication{term=2, version=1064}
[2021-04-14T13:43:47,140][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 1065, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:43:57,148][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1065] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T13:44:17,147][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 1065, reason: Publication{term=2, version=1065}
[2021-04-14T13:44:17,151][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [1065] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T13:44:27,158][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [1066] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T13:44:47,188][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [1066] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T13:45:00,738][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} reason: followers check retry count exceeded], term: 2, version: 1067, delta: removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T13:45:00,860][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 1067, reason: Publication{term=2, version=1067}
[2021-04-14T14:05:27,738][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} join existing leader], term: 2, version: 1068, delta: added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}
[2021-04-14T14:05:33,826][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}}, term: 2, version: 1068, reason: Publication{term=2, version=1068}
[2021-04-14T14:05:43,835][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1069] is still waiting for {dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-14T14:07:51,290][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11815ms] ago, timed out [1802ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{GxlOGSMxRDyO9ww8Zdsfag}{FojDL4ZOSUO-dtDRcxMTBA}{fd00:100:0:0:0:0:0:52c}{[fd00:100::52c]:9300}{dmr}], id [63379]