By type
22:49:25.71
22:49:25.79 Welcome to the Bitnami elasticsearch container
22:49:25.80 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
22:49:25.88 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
22:49:25.89
22:49:25.91 INFO  ==> ** Starting Elasticsearch setup **
22:49:26.40 INFO  ==> Configuring/Initializing Elasticsearch...
22:49:26.82 INFO  ==> Setting default configuration
22:49:26.99 INFO  ==> Configuring Elasticsearch cluster settings...
22:49:27.19 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-0: fd00:100::9161 10.242.145.97, will use fd00:100::9161
22:49:27.48 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-0: fd00:100::9161 10.242.145.97, will use fd00:100::9161
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
22:49:45.69 INFO  ==> ** Elasticsearch setup finished! **
22:49:45.91 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-04-25T22:50:24,489][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/4.15.0-117-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2021-04-25T22:50:24,589][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] JVM home [/opt/bitnami/java]
[2021-04-25T22:50:24,590][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-16899588818375626158, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-04-25T22:50:44,789][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [aggs-matrix-stats]
[2021-04-25T22:50:44,791][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [analysis-common]
[2021-04-25T22:50:44,792][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [geo]
[2021-04-25T22:50:44,792][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-common]
[2021-04-25T22:50:44,793][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-geoip]
[2021-04-25T22:50:44,793][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-user-agent]
[2021-04-25T22:50:44,794][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [kibana]
[2021-04-25T22:50:44,794][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-expression]
[2021-04-25T22:50:44,794][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-mustache]
[2021-04-25T22:50:44,795][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-painless]
[2021-04-25T22:50:44,795][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [mapper-extras]
[2021-04-25T22:50:44,796][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [parent-join]
[2021-04-25T22:50:44,796][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [percolator]
[2021-04-25T22:50:44,797][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [rank-eval]
[2021-04-25T22:50:44,797][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [reindex]
[2021-04-25T22:50:44,797][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [repository-url]
[2021-04-25T22:50:44,798][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [tasks]
[2021-04-25T22:50:44,798][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [transport-netty4]
[2021-04-25T22:50:44,799][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded plugin [repository-s3]
[2021-04-25T22:50:45,892][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-0] using [1] data paths, mounts [[/bitnami/elasticsearch/data (172.16.10.118:/dockerdata-nfs/dev/elastic-master-0)]], net usable_space [179.1gb], net total_space [195.8gb], types [nfs4]
[2021-04-25T22:50:45,894][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-0] heap size [123.7mb], compressed ordinary object pointers [true]
[2021-04-25T22:50:46,995][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] node name [dev-sdnrdb-master-0], node ID [Ze-vtMQ9QYeaWYqHv_0q7Q], cluster name [sdnrdb-cluster]
[2021-04-25T22:51:41,695][INFO ][o.e.t.NettyAllocator ] [dev-sdnrdb-master-0] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2021-04-25T22:51:43,398][INFO ][o.e.d.DiscoveryModule ] [dev-sdnrdb-master-0] using discovery type [zen] and seed hosts providers [settings]
[2021-04-25T22:51:48,792][WARN ][o.e.g.DanglingIndicesState] [dev-sdnrdb-master-0] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-04-25T22:51:51,789][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] initialized
[2021-04-25T22:51:51,792][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] starting ...
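The startup entries above show the node running with a very small heap (-Xms128m/-Xmx128m, reported as 123.7mb) on the deprecated CMS collector, with zen discovery reading seed hosts from settings. For reference, the same values can be read back from a running node through the nodes info API; a minimal Python sketch, assuming the node's HTTP port 9200 is reachable and unauthenticated (as in this OSS build), with localhost as a placeholder for the pod address:

    # Sketch only: read the JVM heap and GC settings back from the running node.
    # Host/port are illustrative assumptions; adjust to the actual pod or service address.
    import requests

    resp = requests.get("http://localhost:9200/_nodes/_local/jvm", timeout=10)
    resp.raise_for_status()
    node = next(iter(resp.json()["nodes"].values()))

    print("heap max bytes:", node["jvm"]["mem"]["heap_max_in_bytes"])  # ~128 MB, per -Xmx128m above
    print("gc collectors :", node["jvm"]["gc_collectors"])             # CMS on this 7.9.3 build
    print("jvm arguments :", node["jvm"]["input_arguments"])           # the -X... list logged above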
[2021-04-25T22:51:54,092][INFO ][o.e.t.TransportService ] [dev-sdnrdb-master-0] publish_address {[fd00:100::9161]:9300}, bound_addresses {[::]:9300} [2021-04-25T22:51:55,901][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.145.97:9300, remoteAddress=/10.242.93.168:35938}], closing connection java.lang.IllegalStateException: transport not ready yet to handle incoming requests at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final] at java.lang.Thread.run(Thread.java:834) [?:?] [2021-04-25T22:51:56,689][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.145.97:9300, remoteAddress=/10.242.93.168:35994}], closing connection java.lang.IllegalStateException: transport not ready yet to handle incoming requests at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) 
[netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2021-04-25T22:51:57,692][WARN ][o.e.t.TcpTransport ] [dev-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.145.97:9300, remoteAddress=/10.242.93.168:36020}], closing connection java.lang.IllegalStateException: transport not ready yet to handle incoming requests at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final] at java.lang.Thread.run(Thread.java:834) [?:?] [2021-04-25T22:51:58,298][INFO ][o.e.b.BootstrapChecks ] [dev-sdnrdb-master-0] bound or publishing to a non-loopback address, enforcing bootstrap checks [2021-04-25T22:52:08,497][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}]; discovery will continue using [10.242.145.97:9300, 10.242.93.168:9300, 10.242.36.217:9300, 10.242.93.170:9300] from hosts providers and [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-04-25T22:52:18,501][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}]; discovery will continue using [10.242.145.97:9300, 10.242.93.168:9300, 10.242.36.217:9300, 10.242.93.170:9300] from hosts providers and [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-04-25T22:52:28,505][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}]; discovery will continue using [10.242.145.97:9300, 10.242.93.168:9300, 10.242.36.217:9300, 10.242.93.170:9300] from hosts providers and 
[{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-04-25T22:52:38,509][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}]; discovery will continue using [10.242.145.97:9300, 10.242.93.168:9300, 10.242.36.217:9300, 10.242.93.170:9300] from hosts providers and [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-04-25T22:52:46,112][INFO ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] setting initial configuration to VotingConfiguration{Ze-vtMQ9QYeaWYqHv_0q7Q,IeNHB3AjSESvvbc3slnMCg,{bootstrap-placeholder}-dev-sdnrdb-master-2} [2021-04-25T22:52:47,923][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-0] failed to join {dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, targetNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-0][[fd00:100::9161]:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr} at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] [2021-04-25T22:52:48,511][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-0] failed to join {dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, minimumTerm=1, optionalJoin=Optional[Join{term=2, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, targetNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-0][[fd00:100::9161]:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr} at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2021-04-25T22:52:48,511][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-0] last failed join attempt was 516ms ago, failed to join {dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, targetNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-0][[fd00:100::9161]:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr} at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2021-04-25T22:52:48,588][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered or elected yet, an election requires 2 nodes with ids [Ze-vtMQ9QYeaWYqHv_0q7Q, IeNHB3AjSESvvbc3slnMCg], have discovered [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}] which is a quorum; discovery will continue using [10.242.145.97:9300, 10.242.93.168:9300, 10.242.36.217:9300, 10.242.93.170:9300] from hosts providers and [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}] from last-known cluster state; node term 4, last-accepted version 0 in term 0 [2021-04-25T22:52:50,292][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-0] failed to join {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, minimumTerm=2, optionalJoin=Optional[Join{term=3, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}, targetNode={dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-1][[fd00:100::5daa]:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 4 while handling publication at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2021-04-25T22:52:50,497][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr} elect leader, {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 4, version: 1, delta: master node changed {previous [], current [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}]}, added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T22:52:53,000][INFO ][o.e.c.c.CoordinationState] [dev-sdnrdb-master-0] cluster UUID set to [OTZ83KoGRVSrDcfSJG_3pw] [2021-04-25T22:52:53,892][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] master node changed {previous [], current [{dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}]}, added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1, reason: Publication{term=4, version=1} [2021-04-25T22:52:54,289][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader, {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 2, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-25T22:52:54,692][INFO ][o.e.h.AbstractHttpServerTransport] [dev-sdnrdb-master-0] publish_address {[fd00:100::9161]:9200}, bound_addresses {[::]:9200} [2021-04-25T22:52:54,790][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] started [2021-04-25T22:52:55,089][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 2, reason: Publication{term=4, version=2} [2021-04-25T22:52:55,896][INFO ][o.e.g.GatewayService ] [dev-sdnrdb-master-0] recovered [0] indices into cluster_state [2021-04-25T22:52:57,311][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][65] overhead, spent [507ms] collecting in the last [1.1s] [2021-04-25T22:53:02,997][INFO ][o.e.c.s.ClusterSettings ] [dev-sdnrdb-master-0] updating [action.auto_create_index] from [true] to [false] [2021-04-25T22:53:06,588][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [faultcurrent-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-25T22:53:19,695][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [guicutthrough-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-25T22:53:28,298][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [inventoryequipment-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-04-25T22:53:36,292][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [networkelement-connection-v5] creating index, cause [api], templates [], shards [5]/[1] 
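Once dev-sdnrdb-master-0 wins the election for term 4 and reports started, the membership seen in the node-join entries above can be cross-checked over the HTTP port just published on 9200. A minimal Python sketch of that check, with localhost standing in for the published address [fd00:100::9161]:9200:

    # Sketch only: list cluster members and the elected master once the node is up.
    # "localhost" is a placeholder for the node's published HTTP address.
    import requests

    BASE = "http://localhost:9200"

    nodes = requests.get(f"{BASE}/_cat/nodes",
                         params={"format": "json", "h": "name,ip,node.role,master"},
                         timeout=10).json()
    for n in nodes:
        marker = "elected master" if n["master"] == "*" else ""
        print(n["name"], n["ip"], n["node.role"], marker)

    master = requests.get(f"{BASE}/_cat/master", params={"format": "json"}, timeout=10).json()
    print("elected master:", master[0]["node"])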
[2021-04-25T22:53:42,399][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [historicalperformance15min-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-04-25T22:53:48,995][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [maintenancemode-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-04-25T22:53:54,194][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [connectionlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-04-25T22:53:59,788][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [eventlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-04-25T22:54:06,791][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [mediator-server-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-04-25T22:54:11,495][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [faultlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-04-25T22:54:17,691][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} join existing leader], term: 4, version: 61, delta: added {{dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr}}
[2021-04-25T22:54:20,196][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr}}, term: 4, version: 61, reason: Publication{term=4, version=61}
[2021-04-25T22:54:20,505][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [historicalperformance24h-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-04-25T22:54:50,382][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][4]]]).
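With the indices above created at [5] primaries and [1] replica each and the health back at GREEN, the per-index status can be verified as well; another small sketch against the same placeholder endpoint:

    # Sketch only: confirm the YELLOW -> GREEN transition and per-index health
    # after the bulk of index creation (placeholder endpoint as before).
    import requests

    BASE = "http://localhost:9200"

    health = requests.get(f"{BASE}/_cluster/health",
                          params={"wait_for_status": "green", "timeout": "60s"},
                          timeout=70).json()
    print("cluster:", health["status"], "unassigned shards:", health["unassigned_shards"])

    indices = requests.get(f"{BASE}/_cat/indices",
                           params={"format": "json", "h": "index,health,pri,rep"},
                           timeout=10).json()
    for idx in sorted(indices, key=lambda i: i["index"]):
        print(f'{idx["index"]:32} {idx["health"]:7} pri={idx["pri"]} rep={idx["rep"]}')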
[2021-04-25T23:06:33,600][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [31640ms] ago, timed out [21629ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [4945] [2021-04-25T23:06:33,608][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [20628ms] ago, timed out [10611ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [4987] [2021-04-25T23:06:44,814][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [29636ms] ago, timed out [14615ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [5000] [2021-04-25T23:07:07,419][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11418ms] ago, timed out [1406ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [5146] [2021-04-25T23:07:38,257][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19623ms] ago, timed out [9615ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [5235] [2021-04-25T23:07:48,997][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 123, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:07:59,091][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.9s] publication of cluster state version [123] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-25T23:08:03,559][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [23812ms] ago, timed out [13817ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [5297] [2021-04-25T23:08:04,297][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [47439ms] ago, timed out [37630ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [5232] [2021-04-25T23:08:04,299][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a 
request that has timed out, sent [36629ms] ago, timed out [26619ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [5257] [2021-04-25T23:08:04,300][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [25614ms] ago, timed out [15604ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [5294] [2021-04-25T23:08:04,521][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [26214ms] ago, timed out [11214ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [5288] [2021-04-25T23:08:07,401][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16624ms] ago, timed out [6610ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [5405] [2021-04-25T23:08:10,978][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 123, reason: Publication{term=4, version=123} [2021-04-25T23:08:11,497][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][1] primary-replica resync completed with 0 operations [2021-04-25T23:08:11,589][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][0] primary-replica resync completed with 0 operations [2021-04-25T23:08:11,598][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][1] primary-replica resync completed with 0 operations [2021-04-25T23:08:11,698][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] primary-replica resync completed with 0 operations [2021-04-25T23:08:11,795][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][0] primary-replica resync completed with 0 operations [2021-04-25T23:08:11,896][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] primary-replica resync completed with 0 operations [2021-04-25T23:08:12,004][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][0] primary-replica resync completed with 0 operations [2021-04-25T23:08:12,106][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][0] primary-replica resync completed with 0 operations [2021-04-25T23:08:12,195][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [36.4s] (37 delayed shards) [2021-04-25T23:08:12,688][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] primary-replica resync completed with 0 operations [2021-04-25T23:08:12,793][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][0] primary-replica resync completed with 0 operations [2021-04-25T23:08:50,892][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: 
[XLHCpv_HQA6alDfoPPReFA] [2021-04-25T23:08:51,794][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [cJofLBITS7Cbi8k_7xSinQ] [2021-04-25T23:08:51,795][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [AxdZc50QTyyQgtmpEgmDxQ] [2021-04-25T23:08:51,795][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][0] marking unavailable shards as stale: [PXCTCVoiSKuGwiwqTd7VTw] [2021-04-25T23:08:55,017][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [6_k2rgnPQ5WIcpe7s78hvQ] [2021-04-25T23:08:55,316][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [yCjnTZkhThOSB-goTsaH2Q] [2021-04-25T23:08:56,210][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][0] marking unavailable shards as stale: [Y-TWNkmXRouusfvr1OhIbA] [2021-04-25T23:08:56,211][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][1] marking unavailable shards as stale: [o1zrNHrRSoyl97QHIt-jqQ] [2021-04-25T23:08:58,897][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [S6zCZDSKTnurhec1p_VGLQ] [2021-04-25T23:08:58,898][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [R25XBJK9RPGXx99jVTpyoQ] [2021-04-25T23:09:00,301][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][0] marking unavailable shards as stale: [MwZcSZQYRzyDrc3kyLlmQQ] [2021-04-25T23:09:06,736][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [wGUlToC-RiiZ6v6yBLdVng] [2021-04-25T23:09:07,320][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [httQd7RBR3-8WvrpYOAJ_g] [2021-04-25T23:09:11,816][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [8aHBatRNR_y8d8tz6jRzjg] [2021-04-25T23:09:11,817][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [aFViLKEORka6rooCgzLFfg] [2021-04-25T23:09:12,910][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [kkbuCyWmS2CfQyJR-E_mSw] [2021-04-25T23:09:13,810][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 147, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:09:23,814][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [147] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:09:43,815][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 147, reason: Publication{term=4, version=147} 
[2021-04-25T23:09:43,829][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [147] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:09:53,891][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [148] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:10:06,189][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [51394ms] ago, timed out [41385ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [5979] [2021-04-25T23:10:06,808][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [40985ms] ago, timed out [30966ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [6013] [2021-04-25T23:10:06,809][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [29965ms] ago, timed out [19892ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [6054] [2021-04-25T23:10:06,810][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22958ms] ago, timed out [7878ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [6101] [2021-04-25T23:10:13,901][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.9s] publication of cluster state version [148] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:10:13,996][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 149, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:10:14,487][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 149, reason: Publication{term=4, version=149} [2021-04-25T23:10:14,893][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][0] marking unavailable shards as stale: [yQUkz5KqTwS62MYKFk2LaA] [2021-04-25T23:10:15,550][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][0] marking unavailable shards as stale: [cObgJTXuTX6_-Bx2dkc3iA] [2021-04-25T23:10:15,551][WARN ][o.e.c.r.a.AllocationService] 
[dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [L6TBd0y6RimHogGCg2JTqw] [2021-04-25T23:10:15,551][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [byRbHo7pSN2jctg2wQhfbw] [2021-04-25T23:10:21,389][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [PE5a-jlbQ5yM8d9PDkRpDA] [2021-04-25T23:10:22,290][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][1] marking unavailable shards as stale: [OJ_ECkDVTgy0t_DVF85RTQ] [2021-04-25T23:10:22,290][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [pkW4jmPdRIubUBAhWOxMjQ] [2021-04-25T23:10:22,291][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][0] marking unavailable shards as stale: [S7R_gRXfR8y9ipneZlEz-w] [2021-04-25T23:10:24,888][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [SKmpFtWxRMyJRvWDD0LQ6w] [2021-04-25T23:10:25,796][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][0] marking unavailable shards as stale: [pGWSOelUSPqXpHE5X6MR5A] [2021-04-25T23:10:25,797][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [2E0_b1BsR6WkyTcPwcwAOQ] [2021-04-25T23:10:25,798][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking unavailable shards as stale: [EoBa28qARZ68giYAP7cZkg] [2021-04-25T23:10:28,407][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][1] marking unavailable shards as stale: [prVCHo7eRe2wmT-TjDoX_A] [2021-04-25T23:10:28,667][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [S7er9l3mRFWQDxYCYKPaxw] [2021-04-25T23:10:28,688][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [OZU3XGi4R6y2o9Zp7k1M8Q] [2021-04-25T23:10:31,395][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][0] marking unavailable shards as stale: [xnBNLcvnQaGnGETK2sPR3A] [2021-04-25T23:10:31,694][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [fe5Sg_WIR4W-yS7rH8Zn3w] [2021-04-25T23:10:32,560][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 174, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:10:42,562][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.9s] publication of cluster state version [174] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:11:02,563][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 174, reason: 
Publication{term=4, version=174} [2021-04-25T23:11:02,566][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.9s] publication of cluster state version [174] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:11:12,572][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [175] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:11:25,584][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18840ms] ago, timed out [3808ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [6706] [2021-04-25T23:11:29,940][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [30513ms] ago, timed out [20442ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [6662] [2021-04-25T23:11:29,995][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [41723ms] ago, timed out [31714ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [6623] [2021-04-25T23:11:30,033][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [52746ms] ago, timed out [42728ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [6568] [2021-04-25T23:11:30,892][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 176, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:11:31,282][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 176, reason: Publication{term=4, version=176} [2021-04-25T23:11:31,288][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 177, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:11:31,506][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 177, reason: Publication{term=4, version=177} 
[2021-04-25T23:11:31,610][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: disconnected], term: 4, version: 178, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:11:31,888][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 178, reason: Publication{term=4, version=178} [2021-04-25T23:11:31,896][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][0] marking unavailable shards as stale: [VvrmWakARwiMrXyNtK8YFA] [2021-04-25T23:11:31,896][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [l4OTClyOR6G-spW9o6WuQw] [2021-04-25T23:11:33,995][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][0] marking unavailable shards as stale: [VF1W8VKZRMGhv8TgkUgq-w] [2021-04-25T23:11:35,502][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 182, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:11:36,328][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 182, reason: Publication{term=4, version=182} [2021-04-25T23:11:41,065][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][1]]]). 
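Throughout the window above, dev-sdnrdb-master-0 repeatedly removes dev-sdnrdb-master-1 with reason "followers check retry count exceeded" (or "disconnected") and re-adds it moments later, and each removal is followed by a burst of "marking unavailable shards as stale" warnings until the cluster returns to GREEN. When trying to reproduce or observe this churn from outside the pod, it can help to poll the cluster health API while the log is being captured. The snippet below is only a minimal sketch: the endpoint http://localhost:9200, the use of the requests library, and the poll interval are placeholders, not something taken from this deployment.

import time
import requests  # plain HTTP client; the stock Elasticsearch REST API is queried directly

ES_URL = "http://localhost:9200"  # placeholder -- substitute the actual sdnrdb service address

def poll_cluster_health(interval_s=10.0, iterations=30):
    """Print status, node count and shard counters so that node-left/node-join
    churn like the one in the log above becomes visible over time."""
    for _ in range(iterations):
        health = requests.get(ES_URL + "/_cluster/health", timeout=5).json()
        print("{status:>6} nodes={number_of_nodes} unassigned={unassigned_shards} "
              "initializing={initializing_shards}".format(**health))
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_cluster_health()

Each GREEN-to-YELLOW-and-back transition in that output should line up with one node-left/node-join pair in the master log above.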
[2021-04-25T23:12:44,777][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [226] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:13:06,861][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} join existing leader], term: 4, version: 228, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r}} [2021-04-25T23:13:07,704][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r}}, term: 4, version: 228, reason: Publication{term=4, version=228} [2021-04-25T23:13:28,711][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [229] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:13:52,110][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [230] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:14:10,777][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11814ms] ago, timed out [1801ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [8178] [2021-04-25T23:14:12,132][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [230] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:14:22,140][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [231] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:14:42,158][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [231] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:14:52,194][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [232] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:15:07,125][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [20628ms] ago, timed out [10609ms] ago, action 
[internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [8455] [2021-04-25T23:15:07,235][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17225ms] ago, timed out [2202ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [8473] [2021-04-25T23:16:54,930][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [20226ms] ago, timed out [10213ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [9244] [2021-04-25T23:17:25,838][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18431ms] ago, timed out [8417ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [9411] [2021-04-25T23:17:46,376][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19423ms] ago, timed out [9412ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [9512] [2021-04-25T23:18:03,892][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [36047ms] ago, timed out [21030ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [9518] [2021-04-25T23:18:30,510][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [13817ms] ago, timed out [3803ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [9775] [2021-04-25T23:18:49,214][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-25T23:18:52,189][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 247, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:18:53,366][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [33301ms] ago, timed out [23289ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [9788] [2021-04-25T23:18:53,367][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has 
timed out, sent [22288ms] ago, timed out [12280ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [9830] [2021-04-25T23:18:53,367][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11279ms] ago, timed out [1265ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [9882] [2021-04-25T23:18:53,786][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 247, reason: Publication{term=4, version=247} [2021-04-25T23:18:53,911][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][2] primary-replica resync completed with 0 operations [2021-04-25T23:18:53,989][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations [2021-04-25T23:18:54,006][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][2] primary-replica resync completed with 0 operations [2021-04-25T23:18:54,189][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] primary-replica resync completed with 0 operations [2021-04-25T23:18:54,205][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][4] primary-replica resync completed with 0 operations [2021-04-25T23:18:54,297][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57.7s] (36 delayed shards) [2021-04-25T23:18:54,300][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-04-25T23:18:54,399][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][3] primary-replica resync completed with 0 operations [2021-04-25T23:18:54,501][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][3] primary-replica resync completed with 0 operations [2021-04-25T23:19:55,708][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [lFTztR4LS-yrNS7RMYECIg] [2021-04-25T23:19:56,824][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [EOCIg2acQI-QyFerECUarw] [2021-04-25T23:19:56,825][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [duF_rXwDQiq3ZufnpjoTcw] [2021-04-25T23:19:56,825][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [AXTazd2RQWih1TlXrxH6iA] [2021-04-25T23:20:01,016][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [8ylZJWXbS0uSDtR-qBjEfQ] [2021-04-25T23:20:01,328][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [PeGncM2cRtOR6VTcBb_WfQ] [2021-04-25T23:20:01,328][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [dYDw5os_QQqOq9cXCemOAQ] [2021-04-25T23:20:01,895][WARN ][o.e.c.r.a.AllocationService] 
[dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [_i78LOn_QbSifpkxX60P0A] [2021-04-25T23:20:04,727][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [Hg1GVctgQsO07YVIFJuOHg] [2021-04-25T23:20:05,092][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [gEVp__l2QlaeZ2F1XcNOmw] [2021-04-25T23:20:05,889][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [yBmKlODWQqi34-dRkNt9MQ] [2021-04-25T23:20:05,890][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [pOH9khTdTBmGHcmKRjDQ2Q] [2021-04-25T23:20:08,514][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [VkairwN8R5yfCuK48I18zg] [2021-04-25T23:20:09,345][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [x4IvFvS6RE64L3f-hjtskQ] [2021-04-25T23:20:09,347][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [3e_YSP27TRSyJxRUnddctA] [2021-04-25T23:20:09,347][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [KMYDSsr2SCOCrb2pzJB4eA] [2021-04-25T23:20:12,010][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [rlidN0u4QYOz8_e5RbOViA] [2021-04-25T23:20:22,014][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [275] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-25T23:20:33,710][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [20643ms] ago, timed out [10634ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [10770] [2021-04-25T23:20:34,086][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 276, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:20:44,089][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [276] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] 
[2021-04-25T23:20:50,681][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16019ms] ago, timed out [6011ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [10873] [2021-04-25T23:21:04,090][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 276, reason: Publication{term=4, version=276} [2021-04-25T23:21:04,096][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [276] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-25T23:21:14,101][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [277] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-25T23:21:34,105][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [48850ms] ago, timed out [38835ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [10927] [2021-04-25T23:21:34,110][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [277] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:21:34,194][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 278, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:21:34,289][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-25T23:21:34,486][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 278, reason: Publication{term=4, version=278} 
[2021-04-25T23:21:34,518][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][1] marking unavailable shards as stale: [zpEGXKBtSYyLG5sYyqocmg] [2021-04-25T23:21:34,518][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [tKclpf9FTCiqEfLaiOA2Kw] [2021-04-25T23:21:34,519][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [hoU-5SKjQKupYH12WZ64vA] [2021-04-25T23:21:35,381][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [RXfWu7ycRrqO6L9Yr6JzJw] [2021-04-25T23:21:38,937][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [iFZyYF9XSte0xWMrzR9ajw] [2021-04-25T23:21:39,252][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [qfgq7ULsSEirWp8WTnLBHw] [2021-04-25T23:21:40,728][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [a9Ams98iT2ePSPvAVen9rA] [2021-04-25T23:21:40,729][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [QXanBM2wRmmNHyi2s-fwcg] [2021-04-25T23:21:41,830][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [ZDJCMQfCTI-5FJ7xKtQsug] [2021-04-25T23:21:43,211][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [TDzjf7CLQlmNwZ8ZAmhy5w] [2021-04-25T23:21:43,904][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][1] marking unavailable shards as stale: [td2q05tFTT6HgvT54wvMGQ] [2021-04-25T23:21:44,298][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [REEeeKYuS-iPiVips_3afA] [2021-04-25T23:21:45,517][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [QKUxMyrtQ22nrw4VWbHNaQ] [2021-04-25T23:21:46,526][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [7Q5Xqvk9S5OT-i8-zkKrYg] [2021-04-25T23:21:48,489][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [R9D8_OwDSbqhxUSzXQL9jg] [2021-04-25T23:21:48,490][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [6KhLRH0FSc-m0k_R2teaFQ] [2021-04-25T23:21:49,553][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [4VR9pQ-vQmC3M65cBkc2dw] [2021-04-25T23:21:52,662][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [QEdMr5gJQlCRUIdTK5Eksw] [2021-04-25T23:21:55,596][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [0ch_kAhJSbu4hm_r_EJcIw] [2021-04-25T23:21:57,017][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][3]]]). 
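The removals above are all triggered by the same mechanism: the elected master periodically sends follower checks to every node, and a node that fails cluster.fault_detection.follower_check.retry_count consecutive checks (each bounded by cluster.fault_detection.follower_check.timeout, 10s by default in this Elasticsearch version) is dropped with "followers check retry count exceeded", which matches the roughly 10s and 30s delays reported by the TransportService "request that has timed out" warnings. These fault-detection settings are static, i.e. read from elasticsearch.yml rather than changed at runtime, but their effective values can be confirmed through the cluster settings API. A small sketch follows, again with a placeholder endpoint and the requests library assumed to be available.

import requests

ES_URL = "http://localhost:9200"  # placeholder for the sdnrdb service address

# include_defaults=true also exposes settings that were never overridden;
# flat_settings=true returns them as dotted keys, which makes filtering easy.
resp = requests.get(
    ES_URL + "/_cluster/settings",
    params={"include_defaults": "true", "flat_settings": "true"},
    timeout=5,
).json()

for scope in ("persistent", "transient", "defaults"):
    for key, value in resp.get(scope, {}).items():
        if "fault_detection" in key:
            print("{:>10} {} = {}".format(scope, key, value))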
[2021-04-25T23:25:01,669][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 310, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:25:11,677][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [310] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:25:31,679][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 310, reason: Publication{term=4, version=310} [2021-04-25T23:25:31,688][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [310] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:25:31,693][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 311, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:25:32,895][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 311, reason: Publication{term=4, version=311} [2021-04-25T23:27:04,308][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 312, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:27:14,314][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [312] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:27:25,089][WARN ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][2128] overhead, spent [902ms] collecting in the last [1.4s] [2021-04-25T23:27:34,315][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 312, reason: Publication{term=4, version=312} [2021-04-25T23:27:34,322][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30.1s] publication of cluster state version [312] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:27:44,389][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] 
publication of cluster state version [313] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:29:14,813][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [363] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:29:21,652][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-25T23:33:50,497][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 375, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:34:00,693][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [375] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-25T23:34:02,904][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-25T23:34:09,820][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17415ms] ago, timed out [7406ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [16566] [2021-04-25T23:34:09,993][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 375, reason: Publication{term=4, version=375} [2021-04-25T23:34:10,190][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,194][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][4] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,294][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][2] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,303][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][3] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,394][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][4] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,489][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][2] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,589][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,604][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations 
[2021-04-25T23:34:10,703][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,711][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [39.7s] (37 delayed shards) [2021-04-25T23:34:10,889][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,893][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations [2021-04-25T23:34:10,994][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] primary-replica resync completed with 0 operations [2021-04-25T23:34:45,965][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15418ms] ago, timed out [5410ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [16763] [2021-04-25T23:34:51,340][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [fY9JkUp5QCKsiv-GX_GQfQ] [2021-04-25T23:34:51,916][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [AlNqbk3xS1qtUi2XIbEM9g] [2021-04-25T23:34:51,917][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [gf900LxuQLWov8xj93nDHQ] [2021-04-25T23:34:51,917][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [Rd6PInNkT1GW5kBqezIAsw] [2021-04-25T23:34:54,534][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [lWbdk593SuakeupBeqfxaA] [2021-04-25T23:34:54,810][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [FobMleWARTuJyI_ySmvPUg] [2021-04-25T23:34:54,810][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [PkdxvxU8SfSMphZ8nFBpvw] [2021-04-25T23:34:56,085][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [5Mk2fJODRuCSkC_w1RD25g] [2021-04-25T23:34:56,815][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [QJmogmS4SS6uLXrnNtlgWg] [2021-04-25T23:34:58,562][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [KUEKiLBuTiCpdVs9Zx1wLA] [2021-04-25T23:34:58,563][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [J6k1MfbnQqebLqB4wcSkOg] [2021-04-25T23:34:58,890][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [4jNPZF6dQfepMin_LJhW1Q] [2021-04-25T23:35:00,017][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [COdtq0XLS9656pYUbHzhgg] [2021-04-25T23:35:01,310][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [muii04R0RgOr8OxpKWGWFA] 
[2021-04-25T23:35:02,538][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [dm6nw8riQGSILjKATLCtYQ] [2021-04-25T23:35:02,539][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [tP3RxD-YQW-2lGuxtOGRNQ] [2021-04-25T23:35:03,193][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [9mtRaWt0SMCHFUTyZcGt8w] [2021-04-25T23:35:04,389][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [3INsUGfEQeOUUA6UEZEymA] [2021-04-25T23:35:05,751][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [LAkrpEQ7TziinCTvoSzIbQ] [2021-04-25T23:35:05,751][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [_hwMyQwZQSq7mZ3SjODxRw] [2021-04-25T23:35:06,284][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][1] marking unavailable shards as stale: [-0aQKG1RSS-koIiwJGscOQ] [2021-04-25T23:35:08,091][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [-pJALOr2Qhme4dz4yIy1gg] [2021-04-25T23:35:09,189][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [xGCfqEoxTWmMp8x-dTLX0g] [2021-04-25T23:35:09,190][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [O4Rt9XCJQVmXoHaMNglZWw] [2021-04-25T23:35:10,892][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking unavailable shards as stale: [9jeLFpajSmmtSs_jDPspbw] [2021-04-25T23:35:12,161][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [_1AFr-cjQjSqEkyDJBEo4Q] [2021-04-25T23:35:13,492][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [UNQJJuXpQ2KJLi1I9pPskw] [2021-04-25T23:35:16,598][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [M_WSGVmTR5Sqi9iy9rFvIQ] [2021-04-25T23:35:16,599][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [y-1OuqIyRySOwKhq8wYKRQ] [2021-04-25T23:35:17,984][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [1-zWYzMKThe7P1gy_FmKyA] [2021-04-25T23:35:27,967][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][1] marking unavailable shards as stale: [XM92ICYqQ5SjoCxdxCd6zA] [2021-04-25T23:35:28,339][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [UGOsol1ETwOeTsCBhemVJA] [2021-04-25T23:35:29,234][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [qcm0Rrj-QGWzpcpFSqLwRw] [2021-04-25T23:35:29,288][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [8Tts5fXaQESgteneFHpYCw] [2021-04-25T23:35:30,201][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] 
[faultcurrent-v5][3] marking unavailable shards as stale: [jbFK0QueS5mgBJSxYmsUpw] [2021-04-25T23:35:30,691][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [zyWGBe85QDmKda--2NiU7g] [2021-04-25T23:35:31,234][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [KNxZaYhdSueVQusjf1VX3A] [2021-04-25T23:35:31,895][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][2]]]). [2021-04-25T23:37:19,001][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 435, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:37:29,009][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [435] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:37:49,009][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 435, reason: Publication{term=4, version=435} [2021-04-25T23:37:49,015][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [435] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:37:58,185][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [29630ms] ago, timed out [19616ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [18387] [2021-04-25T23:37:58,186][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18616ms] ago, timed out [8607ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [18447] [2021-04-25T23:37:59,029][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [436] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:38:04,014][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-25T23:38:05,599][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16614ms] ago, timed out [1601ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [18490] 
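Besides the transport-level symptoms, the log also records a JVM pressure signal on the master itself: the JvmGcMonitorService warning above ("[gc][2128] overhead, spent [902ms] collecting in the last [1.4s]") shows the node spending most of a collection window in garbage collection, which can delay both cluster-state publication handling and the follower-check traffic that keeps timing out. When correlating such warnings with the node-left events, per-node heap usage can be sampled from the nodes stats API; the snippet below is a sketch under the same placeholder-endpoint assumption as the earlier ones.

import requests

ES_URL = "http://localhost:9200"  # placeholder for the sdnrdb service address

# /_nodes/stats/jvm returns heap and GC statistics for every node in the cluster.
stats = requests.get(ES_URL + "/_nodes/stats/jvm", timeout=5).json()

for node_id, node in stats["nodes"].items():
    mem = node["jvm"]["mem"]
    print("{:<28} heap {}% ({} / {} bytes)".format(
        node["name"],
        mem["heap_used_percent"],
        mem["heap_used_in_bytes"],
        mem["heap_max_in_bytes"],
    ))

Sustained high heap_used_percent on dev-sdnrdb-master-1 around the timestamps of its removals would point at GC pauses on that node, rather than the network, as the reason its follower checks keep exceeding the timeout.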
[2021-04-25T23:38:19,060][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [436] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:38:19,065][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 437, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:38:20,767][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 437, reason: Publication{term=4, version=437} [2021-04-25T23:38:55,015][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 438, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:39:01,573][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 438, reason: Publication{term=4, version=438} [2021-04-25T23:39:11,591][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [439] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:39:31,610][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [439] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:40:44,863][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [440] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:41:04,880][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [440] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:41:14,890][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [441] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:41:34,893][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [441] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] 
[2021-04-25T23:41:44,899][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [442] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-25T23:41:49,009][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17815ms] ago, timed out [7806ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [19876] [2021-04-25T23:41:54,789][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [24822ms] ago, timed out [14811ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [19872] [2021-04-25T23:41:54,789][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [13811ms] ago, timed out [3803ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [19929] [2021-04-25T23:42:04,923][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [442] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:42:29,147][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18844ms] ago, timed out [8816ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [20082] [2021-04-25T23:42:33,179][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12022ms] ago, timed out [2001ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [20141] [2021-04-25T23:42:35,099][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22047ms] ago, timed out [7209ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [20098] [2021-04-25T23:43:07,214][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21418ms] ago, timed out [11410ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [20279] [2021-04-25T23:43:07,215][WARN ][o.e.t.TransportService ] 
[dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10409ms] ago, timed out [400ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [20320] [2021-04-25T23:44:23,932][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17418ms] ago, timed out [7405ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [20743] [2021-04-25T23:44:34,341][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 443, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:44:40,014][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-25T23:44:44,392][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [443] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-25T23:44:46,806][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [44441ms] ago, timed out [34434ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [20724] [2021-04-25T23:44:46,895][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [33433ms] ago, timed out [23420ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [20789] [2021-04-25T23:44:46,895][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22419ms] ago, timed out [12411ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [20831] [2021-04-25T23:44:48,030][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 443, reason: Publication{term=4, version=443} [2021-04-25T23:44:48,106][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [46.2s] (2 delayed shards) [2021-04-25T23:45:44,336][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [444] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} 
[SENT_PUBLISH_REQUEST] [2021-04-25T23:45:47,783][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [25852ms] ago, timed out [15830ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [21126] [2021-04-25T23:45:47,787][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [14829ms] ago, timed out [5005ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [21177] [2021-04-25T23:45:48,526][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [fq93CQYiTI6gmv0LovUtQA] [2021-04-25T23:45:48,921][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [ynge2KZqSB6uBpZg0tNajg] [2021-04-25T23:45:49,598][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[historicalperformance15min-v5][4]]]). [2021-04-25T23:46:09,820][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [14017ms] ago, timed out [4004ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [21329] [2021-04-25T23:47:22,142][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 449, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:47:32,148][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [449] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:47:52,149][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 449, reason: Publication{term=4, version=449} [2021-04-25T23:47:52,153][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [449] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:48:02,158][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [450] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:48:21,934][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent 
[47646ms] ago, timed out [37638ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [21841] [2021-04-25T23:48:21,935][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [36637ms] ago, timed out [26628ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [21911] [2021-04-25T23:48:21,951][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [58655ms] ago, timed out [48647ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [21786] [2021-04-25T23:48:22,180][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [450] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:48:22,184][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 451, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:48:25,776][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 451, reason: Publication{term=4, version=451} [2021-04-25T23:49:33,753][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 452, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:49:43,758][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [452] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:49:47,112][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12410ms] ago, timed out [2402ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [22518] [2021-04-25T23:49:50,209][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 452, reason: Publication{term=4, version=452} [2021-04-25T23:50:00,222][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [453] is still 
waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-25T23:51:11,536][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12611ms] ago, timed out [2602ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [23033] [2021-04-25T23:51:14,207][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15615ms] ago, timed out [602ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [23029] [2021-04-25T23:51:27,600][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [20023ms] ago, timed out [10008ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [23077] [2021-04-25T23:52:07,930][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [454] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:52:27,951][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [454] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:52:37,958][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [455] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:52:49,427][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15230ms] ago, timed out [5205ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [23617] [2021-04-25T23:52:57,963][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [455] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:53:02,703][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15421ms] ago, timed out [400ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [23690] [2021-04-25T23:53:07,990][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [456] is still waiting for 
{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:53:28,010][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [456] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-25T23:53:45,724][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 457, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:53:48,246][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [34630ms] ago, timed out [24622ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [23873] [2021-04-25T23:53:48,247][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [23621ms] ago, timed out [13613ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [23918] [2021-04-25T23:53:48,294][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12612ms] ago, timed out [2603ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [23978] [2021-04-25T23:53:49,137][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 457, reason: Publication{term=4, version=457} [2021-04-25T23:53:49,169][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [56.5s] (2 delayed shards) [2021-04-25T23:53:49,204][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] primary-replica resync completed with 0 operations [2021-04-25T23:53:50,658][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[inventoryequipment-v5][4]]]). 
[2021-04-25T23:56:08,381][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18015ms] ago, timed out [8007ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [24733] [2021-04-25T23:58:18,920][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 461, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:58:28,930][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [461] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-25T23:58:48,933][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 461, reason: Publication{term=4, version=461} [2021-04-25T23:58:48,989][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [461] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-25T23:58:59,000][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [462] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-25T23:59:13,328][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19216ms] ago, timed out [9207ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [25714] [2021-04-25T23:59:19,034][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [462] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], 
{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-25T23:59:19,038][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 463, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-25T23:59:27,446][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 463, reason: Publication{term=4, version=463} [2021-04-26T00:01:07,236][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 464, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:01:08,186][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 464, reason: Publication{term=4, version=464} [2021-04-26T00:01:38,455][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 465, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:01:38,588][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 465, reason: Publication{term=4, version=465} [2021-04-26T00:02:17,401][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 466, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:02:19,196][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 466, reason: Publication{term=4, version=466} [2021-04-26T00:02:29,204][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [467] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:02:49,233][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [467] is still waiting for 
{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-04-26T00:02:49,290][ERROR][o.e.c.a.s.ShardStateAction] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] unexpected failure while failing shard [shard id [[inventoryequipment-v5][3]], allocation id [qDcMMyMoSKG4dYCJbT31yQ], primary term [3], message [failed to perform indices:admin/seq_no/retention_lease_sync on replica [inventoryequipment-v5][3], node[b2WbhbUSQN2vhY_7wRUxuA], [R], s[STARTED], a[id=qDcMMyMoSKG4dYCJbT31yQ]], failure [RemoteTransportException[[dev-sdnrdb-master-2][[fd00:100::24d9]:9300][indices:admin/seq_no/retention_lease_sync[r]]]; nested: IllegalStateException[[inventoryequipment-v5][3] operation primary term [3] is too old (current [4])]; ], markAsStale [true]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [3] did not match current primary term [4]
	at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:365) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-26T00:03:27,094][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 468, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:03:37,097][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [468] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T00:03:44,948][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 468, reason: Publication{term=4, version=468} [2021-04-26T00:04:23,200][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22825ms] ago, timed out [12817ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [27327] [2021-04-26T00:04:23,202][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11814ms] ago, timed out [1802ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [27374] [2021-04-26T00:05:13,088][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 469, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:05:20,002][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 469, reason: Publication{term=4, version=469} [2021-04-26T00:05:30,010][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [470] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:05:50,036][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [470] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:07:03,716][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10608ms] ago, timed out [601ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [28319] [2021-04-26T00:08:44,481][WARN 
][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10614ms] ago, timed out [601ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [28929] [2021-04-26T00:09:31,315][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [471] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:09:41,924][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18615ms] ago, timed out [8607ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [29223] [2021-04-26T00:09:46,600][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22420ms] ago, timed out [7406ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [29226] [2021-04-26T00:09:51,326][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.9s] publication of cluster state version [471] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:10:01,332][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [472] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:10:21,339][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [472] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:10:21,346][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 473, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:10:23,194][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 473, reason: Publication{term=4, version=473} [2021-04-26T00:10:23,198][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [58.1s] (1 delayed shards) [2021-04-26T00:10:23,615][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [DEwArrqZRHS7_IFNPwocHg] [2021-04-26T00:10:24,459][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started 
[[inventoryequipment-v5][3]]]). [2021-04-26T00:11:41,414][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 477, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:11:51,420][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [477] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:12:11,421][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 477, reason: Publication{term=4, version=477} [2021-04-26T00:12:11,425][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [477] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:12:21,432][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [478] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:12:41,464][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [478] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:12:41,489][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 479, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:12:43,875][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 479, reason: Publication{term=4, version=479} [2021-04-26T00:15:13,646][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22622ms] ago, timed out [12609ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [31018] [2021-04-26T00:15:13,648][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11609ms] ago, timed out [1601ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [31059] [2021-04-26T00:17:47,901][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] 
node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 480, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:17:47,951][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 480, reason: Publication{term=4, version=480} [2021-04-26T00:18:22,909][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 481, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:18:32,913][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [481] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:18:52,914][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 481, reason: Publication{term=4, version=481} [2021-04-26T00:18:52,921][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [481] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:18:52,923][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 482, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:19:02,927][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [482] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:19:22,829][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19829ms] ago, timed out [10020ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [32286] [2021-04-26T00:19:22,927][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 482, reason: Publication{term=4, version=482} [2021-04-26T00:19:22,931][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of 
cluster state version [482] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:19:32,937][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [483] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:19:52,961][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [483] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:21:59,801][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15813ms] ago, timed out [5805ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [33278] [2021-04-26T00:22:30,988][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [13616ms] ago, timed out [3604ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [33461] [2021-04-26T00:22:52,875][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21019ms] ago, timed out [11011ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [33528] [2021-04-26T00:22:54,620][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11674ms] ago, timed out [1664ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [33591] [2021-04-26T00:23:27,554][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [484] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:23:47,563][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [484] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:23:57,567][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [485] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:24:17,570][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [485] is still waiting for 
{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:24:19,048][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21428ms] ago, timed out [11415ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [34043] [2021-04-26T00:24:19,049][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10414ms] ago, timed out [400ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [34113] [2021-04-26T00:24:19,049][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21428ms] ago, timed out [6411ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [34046] [2021-04-26T00:24:27,576][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [486] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:24:27,622][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T00:24:46,527][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [26833ms] ago, timed out [16822ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [34175] [2021-04-26T00:24:47,603][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [486] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:24:47,614][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 487, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:24:49,754][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 487, reason: Publication{term=4, version=487} [2021-04-26T00:24:49,794][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57.8s] (2 delayed shards) [2021-04-26T00:24:49,893][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][4] primary-replica resync completed with 0 operations [2021-04-26T00:24:51,778][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status 
changed from [YELLOW] to [GREEN] (reason: [shards started [[faultlog-v5][4]]]). [2021-04-26T00:25:29,139][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 491, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:25:29,187][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 491, reason: Publication{term=4, version=491} [2021-04-26T00:25:42,608][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 492, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:25:48,132][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 492, reason: Publication{term=4, version=492} [2021-04-26T00:28:04,674][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 493, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:28:14,679][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [493] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T00:28:34,680][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 493, reason: Publication{term=4, version=493} [2021-04-26T00:28:34,686][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [493] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T00:28:44,693][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [494] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} 
[SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T00:28:49,686][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T00:28:53,820][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19218ms] ago, timed out [4204ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [35557] [2021-04-26T00:29:04,719][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [494] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T00:29:04,726][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 495, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:29:14,729][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [495] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T00:29:23,607][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [27636ms] ago, timed out [17619ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [35672] [2021-04-26T00:29:23,608][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16818ms] ago, timed out [6808ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [35729] [2021-04-26T00:29:24,011][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 495, reason: Publication{term=4, version=495} [2021-04-26T00:33:38,876][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 496, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:33:38,993][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed 
{{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 496, reason: Publication{term=4, version=496} [2021-04-26T00:34:58,301][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 497, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:35:08,306][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [497] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T00:35:28,306][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 497, reason: Publication{term=4, version=497} [2021-04-26T00:35:28,312][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [497] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T00:36:03,775][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 498, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:36:03,983][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 498, reason: Publication{term=4, version=498} [2021-04-26T00:36:47,538][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 499, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:36:50,928][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 499, reason: Publication{term=4, version=499} [2021-04-26T00:39:40,088][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [20818ms] ago, timed out [10810ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id 
[38799] [2021-04-26T00:43:26,499][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 500, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:43:28,905][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 500, reason: Publication{term=4, version=500} [2021-04-26T00:43:38,914][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [501] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:43:58,931][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [501] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T00:44:17,064][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11810ms] ago, timed out [1802ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [40346] [2021-04-26T00:45:28,935][WARN ][o.e.c.c.LagDetector ] [dev-sdnrdb-master-0] node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}] is lagging at cluster state version [500], although publication of cluster state version [501] completed [1.5m] ago [2021-04-26T00:45:28,943][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: lagging], term: 4, version: 502, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T00:45:37,290][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T00:45:37,743][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [14819ms] ago, timed out [4804ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [40817] [2021-04-26T00:45:37,798][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 502, reason: Publication{term=4, version=502} [2021-04-26T00:45:37,816][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15420ms] ago, timed out [400ms] ago, action [cluster:monitor/nodes/stats[n]], node 
[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [40813] [2021-04-26T00:56:49,731][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 503, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T00:56:49,805][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 503, reason: Publication{term=4, version=503} [2021-04-26T01:04:30,230][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 504, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:04:31,588][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 504, reason: Publication{term=4, version=504} [2021-04-26T01:04:31,703][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 505, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:04:37,001][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 505, reason: Publication{term=4, version=505} [2021-04-26T01:07:07,322][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10809ms] ago, timed out [800ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [48110] [2021-04-26T01:09:52,614][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T01:09:55,391][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 568, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:09:57,312][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [34065ms] ago, timed out [24052ms] ago, action [internal:coordination/fault_detection/follower_check], 
node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [48908] [2021-04-26T01:09:57,313][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [23051ms] ago, timed out [13043ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [48952] [2021-04-26T01:09:57,313][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12042ms] ago, timed out [2032ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [49003] [2021-04-26T01:09:58,929][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 568, reason: Publication{term=4, version=568} [2021-04-26T01:09:59,005][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][4] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,095][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,194][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][4] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,288][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][4] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,294][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,303][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [36666ms] ago, timed out [21649ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [48900] [2021-04-26T01:09:59,402][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][2] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,412][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,502][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][1] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,512][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,515][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [55.7s] (37 delayed shards) [2021-04-26T01:09:59,603][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][2] primary-replica resync completed with 0 operations [2021-04-26T01:09:59,700][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:10:57,388][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [8C0MwHkjT_Cl40HZjJMWGw] [2021-04-26T01:10:58,547][WARN 
][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [liBpPLI8Qqq-dpXEt9hVTw] [2021-04-26T01:10:58,547][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [1ZqSyh3IRfWSJihIEmMO0g] [2021-04-26T01:10:58,548][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [Zi1elzPVRNWILmJ1IoNeQQ] [2021-04-26T01:11:00,788][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [FECanjCPQtK5AwRzFZqLpQ] [2021-04-26T01:11:02,328][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [63vLOWlKTuaki5B02RBSlQ] [2021-04-26T01:11:02,329][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [iOOJ_6EtTHavkL1pIXruEA] [2021-04-26T01:11:02,329][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [EPGnutVlQK6f0lwGn7taRw] [2021-04-26T01:11:06,205][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [sMC4VSIsSoqEq3dDbJaEMQ] [2021-04-26T01:11:07,290][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [tS2vp1Q7RQa1OA8h4ft0Fw] [2021-04-26T01:11:07,290][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [CkJsiT4ERuSMY6LjSz8Krg] [2021-04-26T01:11:07,290][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [Y0b0cBbsSmCaDtOCJoZxow] [2021-04-26T01:11:09,492][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [ersBZLITRV-5eJIAh_SEwg] [2021-04-26T01:11:09,703][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][1] marking unavailable shards as stale: [_cejIPRWS7GZoo-os_4FLQ] [2021-04-26T01:11:09,704][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [41va53F8RX22bFz1ZVb3Aw] [2021-04-26T01:11:12,585][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [3rKiynQ_QmmcfGNPJPPNnQ] [2021-04-26T01:11:13,510][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [zgdSSYs9RViAsvn2du8lVQ] [2021-04-26T01:11:14,716][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [j_crd9zYQXuGQkvjMDPKuA] [2021-04-26T01:11:14,717][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking unavailable shards as stale: [OVXiWiF9RT-iCp7MBh0kqA] [2021-04-26T01:11:16,397][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [TSKfxTYQSFu_wYWI1_5DQQ] [2021-04-26T01:11:18,319][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [y1deHy6VSmSPG6YH3IoXZw] [2021-04-26T01:11:18,902][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [AN8tG5xRRMi-vf-iNNpD0A] 
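The long run of AllocationService warnings above is the master discarding the replica copies that lived on the removed dev-sdnrdb-master-1; until fresh copies are allocated the affected indices stay yellow. A minimal sketch for listing which shard copies are still unassigned and asking the allocator why, assuming the HTTP endpoint is reachable at http://localhost:9200 with no TLS or auth (the host and the helper name are illustrative only):

```python
# Sketch: list shard copies that are not STARTED and ask the allocator why.
# Assumes the Elasticsearch HTTP endpoint at http://localhost:9200, no auth.
import json
import urllib.error
import urllib.request

ES = "http://localhost:9200"

def get(path):
    with urllib.request.urlopen(ES + path) as resp:
        return resp.read().decode()

# One row per shard copy, including the reason a copy is unassigned.
print(get("/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state"))

# With an empty body the allocation-explain API picks one unassigned shard
# and explains why it cannot be placed (HTTP 400 if nothing is unassigned).
try:
    print(json.dumps(json.loads(get("/_cluster/allocation/explain")), indent=2))
except urllib.error.HTTPError as err:
    print("allocation explain:", err)
```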
[2021-04-26T01:11:19,983][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [1avSh2dhQky2AHxiyd_lKA] [2021-04-26T01:11:19,983][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [cuwlUikhTzCY4ROzh35yqQ] [2021-04-26T01:11:21,360][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [xPZgbo1DR9aIOubEtxSOfg] [2021-04-26T01:11:21,798][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [DQ1o6swTRiO2YjU3uE_zQw] [2021-04-26T01:11:23,086][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [WC4jvVNvRcaLw3grjgbgjA] [2021-04-26T01:11:23,086][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [jx8wUyTlRtKWcT411TyTKw] [2021-04-26T01:11:23,990][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][1] marking unavailable shards as stale: [1zCpy83FQ_CwqMDbR51EvA] [2021-04-26T01:11:24,823][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [rSxc83QpTC-BWJ2z-X9frg] [2021-04-26T01:11:25,810][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [Yj9h8YabTvSUBajaG6B46A] [2021-04-26T01:11:25,811][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [PbpOTaxlQfaF_1pLlhsKBg] [2021-04-26T01:11:28,123][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [HCr16Y_OQGatQHzOSGzk-g] [2021-04-26T01:11:28,605][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [ico4CxXRRUuYHpleeknubw] [2021-04-26T01:11:29,873][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 622, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:11:39,876][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [622] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:11:59,876][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 622, reason: Publication{term=4, version=622} [2021-04-26T01:11:59,879][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [622] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:11:59,884][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [fvHmGkW9QUy3ABwnshrEEA] 
[2021-04-26T01:11:59,885][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [Kx_h2nh-RNeXGZWzG4PJ8w] [2021-04-26T01:12:09,901][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [623] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:12:29,904][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [623] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:12:29,908][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 624, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:12:34,204][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10613ms] ago, timed out [602ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [50540] [2021-04-26T01:12:34,613][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 624, reason: Publication{term=4, version=624} [2021-04-26T01:12:34,999][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [1x00qUNtSImJ889XdNLiug] [2021-04-26T01:12:37,495][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][3]]]). 
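This recovery cycle ends with the YELLOW to GREEN transition logged above once the delayed faultcurrent-v5 shard starts. Instead of polling health in a loop, a watcher can lean on the health API's `wait_for_status` parameter, which blocks server-side until the status is reached or the timeout expires; a sketch under the same localhost:9200 assumption:

```python
# Sketch: block until the cluster reports green (or 60s elapse).
# Assumes http://localhost:9200; the 60s budget is illustrative.
import json
import urllib.request

ES = "http://localhost:9200"
url = ES + "/_cluster/health?wait_for_status=green&timeout=60s"

with urllib.request.urlopen(url) as resp:
    health = json.load(resp)

# timed_out is true if the budget elapsed before the requested status was reached.
print(health["status"],
      "timed_out:", health["timed_out"],
      "unassigned:", health["unassigned_shards"],
      "delayed:", health["delayed_unassigned_shards"])
```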
[2021-04-26T01:19:44,807][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 629, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:19:54,815][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [629] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:20:14,816][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 629, reason: Publication{term=4, version=629} [2021-04-26T01:20:14,822][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [629] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:20:24,830][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [630] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:20:29,822][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T01:20:44,856][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [630] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:20:44,861][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 631, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:20:54,863][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [631] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T01:21:14,864][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 631, reason: Publication{term=4, version=631} [2021-04-26T01:21:14,887][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [631] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T01:21:14,890][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] 
node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 632, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:21:15,096][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 632, reason: Publication{term=4, version=632} [2021-04-26T01:21:39,920][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 633, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:21:43,316][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 633, reason: Publication{term=4, version=633} [2021-04-26T01:26:22,346][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 634, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:26:22,416][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 634, reason: Publication{term=4, version=634} [2021-04-26T01:27:33,882][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 635, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:27:34,818][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 635, reason: Publication{term=4, version=635} [2021-04-26T01:28:30,947][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 636, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:28:40,953][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [636] is still waiting for 
{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T01:29:00,954][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 636, reason: Publication{term=4, version=636} [2021-04-26T01:29:00,959][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.9s] publication of cluster state version [636] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T01:29:10,968][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [637] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:29:30,988][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [637] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:29:32,218][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 638, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:29:33,833][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [33677ms] ago, timed out [23669ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [55687] [2021-04-26T01:29:33,835][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22668ms] ago, timed out [12649ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [55755] [2021-04-26T01:29:33,835][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11648ms] ago, timed out [1640ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [55825] [2021-04-26T01:29:42,220][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [638] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T01:29:47,808][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [27668ms] ago, timed out [12811ms] ago, action [cluster:monitor/nodes/stats[n]], node 
[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [55804] [2021-04-26T01:29:47,808][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [46895ms] ago, timed out [31881ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [55690] [2021-04-26T01:30:02,222][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 638, reason: Publication{term=4, version=638} [2021-04-26T01:30:02,248][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [638] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T01:30:50,937][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [29026ms] ago, timed out [19017ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [56133] [2021-04-26T01:30:50,942][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18015ms] ago, timed out [8007ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [56180] [2021-04-26T01:31:50,884][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [25427ms] ago, timed out [15415ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [56430] [2021-04-26T01:31:50,886][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [14414ms] ago, timed out [4403ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [56475] [2021-04-26T01:32:24,120][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11210ms] ago, timed out [1201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [56658] [2021-04-26T01:34:28,365][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10209ms] ago, timed out [200ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [57241] 
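The "Received response for a request that has timed out" warnings in this stretch show that the removed nodes did answer the leader's follower_check probes, just tens of seconds after the per-check timeout and retry budget had expired, which is why almost every removal in this log carries the reason "followers check retry count exceeded" (and, once, "lagging" from the LagDetector). The effective values of those knobs can be read back from the settings API; a sketch assuming localhost:9200 (these settings are normally static, set via elasticsearch.yml or the chart values, so this only inspects them):

```python
# Sketch: dump the effective fault-detection, lag-detector and publish timeouts.
# Assumes http://localhost:9200; read-only, nothing is changed here.
import json
import urllib.request

ES = "http://localhost:9200"
url = ES + "/_cluster/settings?include_defaults=true&flat_settings=true"

with urllib.request.urlopen(url) as resp:
    settings = json.load(resp)

merged = {**settings["defaults"], **settings["persistent"], **settings["transient"]}
for key in sorted(merged):
    if ("fault_detection" in key
            or "follower_lag" in key
            or key == "cluster.publish.timeout"):
        print(f"{key} = {merged[key]}")
```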
[2021-04-26T01:34:39,605][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 639, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:34:40,826][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 639, reason: Publication{term=4, version=639} [2021-04-26T01:37:52,886][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17414ms] ago, timed out [7406ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [59107] [2021-04-26T01:38:04,517][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T01:38:10,916][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21429ms] ago, timed out [6405ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [59175] [2021-04-26T01:44:50,294][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 701, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:45:00,299][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [701] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T01:45:17,286][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T01:45:20,299][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 701, reason: Publication{term=4, version=701} [2021-04-26T01:45:20,403][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:45:20,492][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][1] primary-replica resync completed with 0 operations [2021-04-26T01:45:20,594][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:45:20,697][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][4] primary-replica resync completed with 0 operations [2021-04-26T01:45:20,710][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][2] primary-replica resync completed with 0 operations 
[2021-04-26T01:45:20,809][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations [2021-04-26T01:45:20,891][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:45:20,912][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][4] primary-replica resync completed with 0 operations [2021-04-26T01:45:20,988][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:45:21,089][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] primary-replica resync completed with 0 operations [2021-04-26T01:45:21,101][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [29.1s] (36 delayed shards) [2021-04-26T01:45:21,102][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30.9s] publication of cluster state version [701] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T01:45:21,196][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:45:21,215][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][3] primary-replica resync completed with 0 operations [2021-04-26T01:45:51,504][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [Q27u3lA6QFi-3ToxvhyhYA] [2021-04-26T01:45:51,803][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [FzJ5DKq7RbyPV_mPU1dTag] [2021-04-26T01:45:51,804][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [KckNbmyuR9GOfcVDsV7Ewg] [2021-04-26T01:45:51,805][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [PLGsCJJ7RDi2AkoI3PymVw] [2021-04-26T01:45:54,547][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [B4g0PpQCTuWABC4DK1emSA] [2021-04-26T01:45:54,769][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [PFaCCrV2THG0LqM5k_Dkmg] [2021-04-26T01:45:55,153][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [YFeyOgKHQ1mQj9C7DDSiFg] [2021-04-26T01:45:55,154][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [TYnjOa4YT9GNIF2XTPTMhw] [2021-04-26T01:45:56,805][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [w5BFsjlfTuira2XenKy8jw] [2021-04-26T01:45:57,170][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [uMH4TnO-RXesZrlM70DieQ] [2021-04-26T01:45:57,171][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [7ZcwM2zIRziz_1AkWTHKbA] [2021-04-26T01:45:57,171][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking 
unavailable shards as stale: [EmTSLMjHRQ6BdCP-ETjfaw] [2021-04-26T01:45:59,429][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [PHmedAbKQeylay2bIVTUqg] [2021-04-26T01:45:59,682][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [Lm8G0gNCRJGLtsFEcczBMQ] [2021-04-26T01:45:59,682][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [IxnOG4rUTcubgBpKF9KomA] [2021-04-26T01:46:01,153][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [jidhIQbpRIKhm1QHOABjMQ] [2021-04-26T01:46:01,725][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [sfvyXjbHQ0yytsoK1efQbA] [2021-04-26T01:46:03,193][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [721ZMdyESQWeIQTK1n93SA] [2021-04-26T01:46:03,194][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [mFZOCOnlQB2GXSXTO9_Qcg] [2021-04-26T01:46:08,439][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [VZuc2ztYTBObb6t_lxKtfg] [2021-04-26T01:46:08,740][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [zF_B72-ET2yqeomXgXCwaw] [2021-04-26T01:46:10,942][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [7d2TpGgGRgKJwO_2kE_rFA] [2021-04-26T01:46:10,943][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [oQx45lYrRVyLmN5nDCZCEw] [2021-04-26T01:46:11,823][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [xssVZd1sR3OK93iW3t7mLQ] [2021-04-26T01:46:12,994][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [n7FQxYaGRZKvteuJmMW7zg] [2021-04-26T01:46:13,232][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [-6fQZiP5RC21fgVMyYplig] [2021-04-26T01:46:14,390][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [BMz7f3UvSFSvDLKrL9Ss0w] [2021-04-26T01:46:14,391][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [Uoe96fuCSiiuIM7jVvDMyA] [2021-04-26T01:46:14,689][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [9577P_l5SFuGydgqJuA9Ag] [2021-04-26T01:46:16,346][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [k8ZYrkcxQcOrFFzEGaQ7cw] [2021-04-26T01:46:16,807][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [s5nZmho-Rx-8ud_cYbZ29A] [2021-04-26T01:46:17,977][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [es_qmQnNSh-12C8lfzWpcA] [2021-04-26T01:46:17,978][WARN 
][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][1] marking unavailable shards as stale: [-0yBN0isTD6C7wzNFY7ymw] [2021-04-26T01:46:18,295][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [BXdc0HfAR_uSu2x5FchOlw] [2021-04-26T01:46:20,608][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [SYahlxvoQQKz9sjTchkEmA] [2021-04-26T01:46:25,894][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][10460] overhead, spent [278ms] collecting in the last [1s] [2021-04-26T01:46:30,611][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.9s] publication of cluster state version [756] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T01:46:50,614][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [756] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T01:46:50,616][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [MZ5zmdsmReS1Qc6vZrnnDQ] [2021-04-26T01:47:00,621][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [757] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T01:47:20,624][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [757] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T01:47:20,628][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 758, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:47:20,704][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 758, reason: Publication{term=4, version=758} [2021-04-26T01:47:21,029][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][2]]]). 
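By this point the same cycle has repeated for hours: node-join, publication stuck in SENT_PUBLISH_REQUEST or SENT_APPLY_COMMIT, node-left with "followers check retry count exceeded", stale-shard cleanup, back to GREEN. When eyeballing a dump like this becomes tedious, the churn can be tallied straight from the log text; a small sketch that counts joins and the reason for each removal, relying only on the message layout visible above and reading the log from stdin:

```python
# Sketch: tally node-join / node-left events (and node-left reasons) from an
# Elasticsearch master log in the format shown in this dump.
# Usage: python tally_churn.py < master.log
import re
import sys
from collections import Counter

join_re = re.compile(r"node-join\[\{(?P<node>[^}]+)\}")
left_re = re.compile(r"node-left\[\{(?P<node>[^}]+)\}.*? reason: (?P<reason>[^\]]+)\]")

text = sys.stdin.read()
joins, leaves = Counter(), Counter()
for m in join_re.finditer(text):
    joins[m.group("node")] += 1
for m in left_re.finditer(text):
    leaves[(m.group("node"), m.group("reason"))] += 1

print("joins:", dict(joins))
for (node, reason), n in leaves.most_common():
    print(f"{n:3d}  node-left  {node}  ({reason})")
```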
[2021-04-26T01:47:38,392][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 761, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:47:48,396][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [761] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T01:48:08,396][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 761, reason: Publication{term=4, version=761} [2021-04-26T01:48:08,400][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [761] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T01:48:10,400][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 762, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:48:10,438][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 762, reason: Publication{term=4, version=762} [2021-04-26T01:48:39,524][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 763, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:48:49,534][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [763] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} 
[SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T01:48:55,140][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [14613ms] ago, timed out [4605ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [63221] [2021-04-26T01:48:55,146][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 763, reason: Publication{term=4, version=763} [2021-04-26T01:52:15,313][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 764, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:52:15,441][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 764, reason: Publication{term=4, version=764} [2021-04-26T01:53:38,125][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 765, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T01:53:39,332][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 765, reason: Publication{term=4, version=765} [2021-04-26T01:55:03,116][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 766, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:55:13,123][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [766] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:55:33,124][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 766, reason: 
Publication{term=4, version=766} [2021-04-26T01:55:33,132][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [766] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:55:43,138][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [767] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:55:48,130][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T01:56:02,819][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T01:56:03,164][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [767] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T01:56:03,170][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 768, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T01:56:13,172][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [768] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T01:56:26,706][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 768, reason: Publication{term=4, version=768} [2021-04-26T02:09:18,107][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 769, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T02:09:28,115][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [769] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] 
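The publication of version 769 above is waiting on four nodes at once before the 30s publish timeout forces the leader to move on. A quick way to see which of the nodes appearing in this log are actually members of the cluster at a given moment, and which one currently holds the elected-master role, is the cat APIs; a sketch under the same localhost:9200 assumption:

```python
# Sketch: show current cluster membership and the elected master.
# Assumes http://localhost:9200.
import urllib.request

ES = "http://localhost:9200"

def cat(path):
    with urllib.request.urlopen(ES + path) as resp:
        return resp.read().decode()

# '*' in the master column marks the elected master; node.role shows d/m/r flags.
print(cat("/_cat/nodes?v&h=name,ip,node.role,master,uptime"))
print(cat("/_cat/master?v"))
```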
[2021-04-26T02:09:48,116][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 769, reason: Publication{term=4, version=769} [2021-04-26T02:09:48,120][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [769] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:09:58,125][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [770] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:10:03,120][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T02:10:18,144][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [770] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T02:10:18,147][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 771, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T02:10:18,289][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T02:10:19,169][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [30972ms] ago, timed out [16091ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [69730] [2021-04-26T02:10:28,149][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [771] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:10:48,151][INFO ][o.e.c.s.ClusterApplierService] 
[dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 771, reason: Publication{term=4, version=771} [2021-04-26T02:10:48,179][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30.1s] publication of cluster state version [771] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:10:51,318][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 772, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T02:10:51,371][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 772, reason: Publication{term=4, version=772} [2021-04-26T02:12:04,449][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 773, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T02:12:04,763][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 773, reason: Publication{term=4, version=773} [2021-04-26T02:19:47,952][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [13026ms] ago, timed out [3009ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [72666] [2021-04-26T02:20:02,290][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 774, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T02:20:03,732][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 774, reason: Publication{term=4, version=774} [2021-04-26T02:22:28,699][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18619ms] ago, timed out [8612ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [74351] 
[2021-04-26T02:22:45,927][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T02:23:00,931][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T02:23:01,794][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 836, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T02:23:11,890][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [836] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:23:16,917][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 836, reason: Publication{term=4, version=836} [2021-04-26T02:23:17,005][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][2] primary-replica resync completed with 0 operations [2021-04-26T02:23:17,094][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][4] primary-replica resync completed with 0 operations [2021-04-26T02:23:17,288][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] primary-replica resync completed with 0 operations [2021-04-26T02:23:17,294][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][1] primary-replica resync completed with 0 operations [2021-04-26T02:23:17,492][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][4] primary-replica resync completed with 0 operations [2021-04-26T02:23:17,603][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations [2021-04-26T02:23:17,689][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] primary-replica resync completed with 0 operations [2021-04-26T02:23:17,693][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [44s] (37 delayed shards) [2021-04-26T02:23:17,800][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] primary-replica resync completed with 0 operations [2021-04-26T02:23:17,804][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][4] primary-replica resync completed with 0 operations [2021-04-26T02:24:11,715][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [837] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T02:24:31,772][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [837] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} 
[SENT_PUBLISH_REQUEST] [2021-04-26T02:24:31,823][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [CSu9fbATR36my8eVyaa0bw] [2021-04-26T02:24:41,889][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [838] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T02:24:48,794][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [45876ms] ago, timed out [35868ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [74980] [2021-04-26T02:24:48,795][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [34867ms] ago, timed out [24848ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [75035] [2021-04-26T02:24:48,796][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [23847ms] ago, timed out [13821ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [75067] [2021-04-26T02:24:48,848][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 839, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T02:24:48,927][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 839, reason: Publication{term=4, version=839} [2021-04-26T02:24:48,930][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [s89TO4jaTz-ZCWd7sfjneg] [2021-04-26T02:24:48,931][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [w17ot83QTqy6iFmXBpkWSA] [2021-04-26T02:24:48,931][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [lCTfB3xSTNG8IzILF9P9og] [2021-04-26T02:24:50,414][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [Vy5BgYywR4SGnABW84mhZw] [2021-04-26T02:24:50,914][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [0qte9nvER_S9K0OLiwdI6A] [2021-04-26T02:24:52,589][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] 
node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader, {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 848, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r},{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}
[2021-04-26T02:24:54,400][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r},{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 848, reason: Publication{term=4, version=848}
[2021-04-26T02:24:54,403][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [j90ZKXRyTve8TBhbobgT3Q]
[2021-04-26T02:24:54,404][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [Qwxe9ud1T8mlUwr6NpcJQg]
[2021-04-26T02:24:57,094][WARN ][o.e.i.c.IndicesClusterStateService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [mediator-server-v5][1]: Recovery failed from {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} into {dev-sdnrdb-master-0}{Ze-vtMQ9QYeaWYqHv_0q7Q}{F4c8Q1WqRqyCd3jpUssufw}{fd00:100:0:0:0:0:0:9161}{[fd00:100::9161]:9300}{dmr}
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.onException(PeerRecoveryTargetService.java:653) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.handleException(PeerRecoveryTargetService.java:587) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:235) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-2][[fd00:100::24d9]:9300][internal:index/shard/recovery/start_recovery]
Caused by: java.lang.IllegalStateException: no local checkpoint tracking information available
    at org.elasticsearch.index.seqno.ReplicationTracker.initiateTracking(ReplicationTracker.java:1158) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.initiateTracking(IndexShard.java:2299) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$13(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$runUnderPrimaryPermit$19(RecoverySourceHandler.java:385) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:89) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.runUnderPrimaryPermit(RecoverySourceHandler.java:363) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$14(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:112) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:144) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:127) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.StepListener.innerOnResponse(StepListener.java:62) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:40) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$prepareTargetForTranslog$30(RecoverySourceHandler.java:648) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.RetryableAction$RetryingListener.onResponse(RetryableAction.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:54) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1162) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:213) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2021-04-26T02:25:11,911][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][4]]]).
[2021-04-26T02:27:22,041][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16224ms] ago, timed out [6208ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [76852] [2021-04-26T02:28:08,678][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 909, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T02:28:18,691][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [909] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:28:28,634][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 909, reason: Publication{term=4, version=909} [2021-04-26T02:28:28,638][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [40s] (37 delayed shards) [2021-04-26T02:29:10,941][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [p-GuqE48TWC7hXHUtLDiPg] [2021-04-26T02:29:11,691][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [P_7ouHC2RnKGd8ux5wwQfQ] [2021-04-26T02:29:11,692][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [ToIwEdqbSU2CK0bKia_yig] [2021-04-26T02:29:11,692][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [yyu_mYEoTdyXMbZFNLQuEw] [2021-04-26T02:29:15,902][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [IwYAtD_dTtyMtyU-FvYKiw] [2021-04-26T02:29:22,515][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [I3eGOogQR0WqpWisHpwl1A] [2021-04-26T02:29:22,517][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [Xc5lcfyzSbaA4rksBIofHA] [2021-04-26T02:29:22,517][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][1] marking unavailable shards as stale: [tmf_oK_xS9Ssk-lDym7Hgg] [2021-04-26T02:29:25,589][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [_j2WZTMxR6qxaKBCZNBPbw] [2021-04-26T02:29:26,280][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [qb6MPOoHTw-fUNNZDVKY-Q] [2021-04-26T02:29:26,281][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [PGie0QHCTP2RNrNzZWiaGQ] [2021-04-26T02:29:26,282][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking 
unavailable shards as stale: [YbJq25KYR4eajAJqQszS1Q] [2021-04-26T02:29:28,687][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [nXHrqYBpSmqQmLI35N2pYQ] [2021-04-26T02:29:29,331][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [lc5quU1fSaS8St8nkCjbDQ] [2021-04-26T02:29:29,332][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][1] marking unavailable shards as stale: [XsdVEiVATm2ui-BfvFJ6Ng] [2021-04-26T02:29:29,332][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [qHHRyYQOS665T1l9fe2C4A] [2021-04-26T02:29:32,417][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [qadcdlJSQTWcBXQubn_wlQ] [2021-04-26T02:29:32,657][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [Uk4fbmnFQ9SQDQEAXGXd9g] [2021-04-26T02:29:32,657][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [RkC9t27wRYKIUaCY9LjECA] [2021-04-26T02:29:36,293][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [1v8JkFTcQLuLk-IZequ2FQ] [2021-04-26T02:29:36,788][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [aAHEaLkESGuNteb_gvEn7Q] [2021-04-26T02:29:37,979][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [ZJoAu7c8TRiEVuNN4awtjw] [2021-04-26T02:29:37,979][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [P6LquUF1Ru2QZz4_teS3oQ] [2021-04-26T02:29:38,228][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [W0vijD4eQbSQy8wnNbubHg] [2021-04-26T02:29:48,648][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [942] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:29:52,764][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11624ms] ago, timed out [1602ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [78116] [2021-04-26T02:29:53,719][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [aNMfBdSgSr6YzPwaupTNuQ] [2021-04-26T02:29:54,812][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [nhAV5jduQOmQ5jtjP7YPmQ] [2021-04-26T02:29:56,120][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [lEY2_HY9Rbq4DtU6fi7zBA] [2021-04-26T02:29:56,121][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: 
[ojREmVFIS9OEz63WP0rArA] [2021-04-26T02:29:56,398][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [07NWA3z9QaaWSh1yCmGCnw] [2021-04-26T02:29:57,304][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [kRzFec5JTdS9rY-4uu5iXA] [2021-04-26T02:29:57,620][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [npEypdmTSvmzjaa9u7N47Q] [2021-04-26T02:30:03,972][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [tyhaETsrSmyJOmou5L14JA] [2021-04-26T02:30:03,973][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][1] marking unavailable shards as stale: [SusLwCEhSwqi5fYp0XpE4w] [2021-04-26T02:30:04,404][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [ZmSeeoTiTcK1B_x5noVCtQ] [2021-04-26T02:30:05,681][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [YETD4P3XRx-GoCRO-2fTzA] [2021-04-26T02:30:06,992][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [OmSXevv1RR6c-D2PQjZ6iQ] [2021-04-26T02:30:09,690][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [iQ-8uPSNSs-CXg2OlZPoLQ] [2021-04-26T02:30:10,801][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][2]]]). [2021-04-26T02:34:02,002][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11210ms] ago, timed out [1201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [79667] [2021-04-26T02:34:14,135][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 966, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T02:34:24,139][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [966] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T02:34:44,141][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 966, reason: Publication{term=4, version=966} [2021-04-26T02:34:44,148][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [966] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T02:34:54,156][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication 
of cluster state version [967] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-26T02:35:14,180][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [967] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-04-26T02:35:14,188][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 968, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}
[2021-04-26T02:35:14,423][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 968, reason: Publication{term=4, version=968}
[2021-04-26T02:37:32,008][WARN ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] failed to validate incoming join request from node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-1][[fd00:100::5daa]:9300][internal:cluster/coordination/join/validate] request_id [80548] timed out after [60096ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-26T02:38:39,018][WARN ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] failed to validate incoming join request from node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-1][[fd00:100::5daa]:9300][internal:cluster/coordination/join/validate] request_id [80906] timed out after [60054ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-26T02:42:57,745][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [385841ms] ago, timed out [325745ms] ago, action [internal:cluster/coordination/join/validate], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [80548] [2021-04-26T02:42:57,749][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [318740ms] ago, timed out [258686ms] ago, action [internal:cluster/coordination/join/validate], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [80906] [2021-04-26T02:43:23,017][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 969, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T02:43:33,024][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [969] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:43:53,025][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 969, reason: Publication{term=4, version=969} [2021-04-26T02:43:53,032][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [969] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:44:03,039][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [970] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:44:08,030][WARN 
][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T02:44:22,898][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T02:44:23,060][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [970] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T02:44:23,073][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 971, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T02:44:33,075][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [971] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:44:53,075][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 971, reason: Publication{term=4, version=971} [2021-04-26T02:44:53,100][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [971] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:45:20,832][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 972, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T02:45:20,883][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 972, reason: Publication{term=4, version=972} [2021-04-26T02:46:38,740][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 973, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T02:46:48,744][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [973] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] 
[2021-04-26T02:46:57,773][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18022ms] ago, timed out [8008ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [83671] [2021-04-26T02:47:08,746][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 973, reason: Publication{term=4, version=973} [2021-04-26T02:47:08,752][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [973] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T02:47:12,412][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [13627ms] ago, timed out [3617ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [83754] [2021-04-26T02:48:33,346][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [30634ms] ago, timed out [20626ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [84083] [2021-04-26T02:48:33,348][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19625ms] ago, timed out [9609ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [84135] [2021-04-26T02:55:40,596][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10208ms] ago, timed out [201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [86301] [2021-04-26T02:56:27,450][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 974, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T02:56:27,503][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 974, reason: Publication{term=4, version=974} [2021-04-26T03:04:20,790][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] 
node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 975, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T03:04:21,189][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 975, reason: Publication{term=4, version=975} [2021-04-26T03:04:21,194][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 976, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:04:25,921][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 976, reason: Publication{term=4, version=976} [2021-04-26T03:06:50,652][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [23235ms] ago, timed out [13227ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [90223] [2021-04-26T03:06:50,655][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12226ms] ago, timed out [2202ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [90275] [2021-04-26T03:07:06,898][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [40255ms] ago, timed out [25243ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [90209] [2021-04-26T03:08:26,909][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [25821ms] ago, timed out [15812ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [90738] [2021-04-26T03:08:26,912][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [14811ms] ago, timed out [4803ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [90787] [2021-04-26T03:09:34,025][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [14421ms] ago, timed out [4406ms] ago, action [internal:coordination/fault_detection/follower_check], node 
[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [91162] [2021-04-26T03:13:35,001][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12611ms] ago, timed out [2802ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [92539] [2021-04-26T03:13:36,902][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T03:13:38,339][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16413ms] ago, timed out [1401ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [92532] [2021-04-26T03:18:14,510][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T03:18:21,736][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22225ms] ago, timed out [7213ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [94111] [2021-04-26T03:22:39,387][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11821ms] ago, timed out [1804ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [95610] [2021-04-26T03:22:54,517][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T03:23:11,808][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [47260ms] ago, timed out [32437ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [95586] [2021-04-26T03:23:32,124][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12410ms] ago, timed out [2402ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [95845] [2021-04-26T03:24:17,490][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 1039, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:24:27,495][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1039] is still waiting for 
{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T03:24:32,796][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T03:24:47,497][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1039, reason: Publication{term=4, version=1039} [2021-04-26T03:24:47,894][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:24:47,989][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-04-26T03:24:48,095][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] primary-replica resync completed with 0 operations [2021-04-26T03:24:48,394][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] primary-replica resync completed with 0 operations [2021-04-26T03:24:48,793][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] primary-replica resync completed with 0 operations [2021-04-26T03:24:48,898][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [28.4s] (37 delayed shards) [2021-04-26T03:24:48,900][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [31.4s] publication of cluster state version [1039] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T03:24:49,007][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:24:49,109][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:25:07,116][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11418ms] ago, timed out [1402ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [96408] [2021-04-26T03:25:18,003][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [OpbE7-cxTtSvXxMtQTX9NA] [2021-04-26T03:25:18,978][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [CX9A7zchSmSUWzoA-ojnuQ] [2021-04-26T03:25:18,978][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [1s1QI3jMSKmJwFa0pKal1Q] [2021-04-26T03:25:18,978][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [9an_C1NGQzC_Oz7tsh6Y-g] [2021-04-26T03:25:20,491][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][1] marking unavailable shards as stale: [X-lJTjy9TOymt18PaCo_LA] [2021-04-26T03:25:20,764][WARN ][o.e.c.r.a.AllocationService] 
[dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [I12GvWjYSM6sb7ZAwF_nFQ] [2021-04-26T03:25:20,764][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [rjY0HsdrTEShvitjnjB8sA] [2021-04-26T03:25:22,217][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [-ebtETyzT9m48XYtWL2Mhw] [2021-04-26T03:25:23,188][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [ZPdhrk5YRzSmXSNM7yz-2g] [2021-04-26T03:25:24,287][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [Jn8pt1iuSw-1O2sR4iFzzw] [2021-04-26T03:25:24,288][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [oVzvfvDdSSqwBmTIlOUUpg] [2021-04-26T03:25:27,690][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [Nd3mgVfPRF2fvix9lMTrGg] [2021-04-26T03:25:27,691][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [82pBvepQR7CdSm8mH9MtVw] [2021-04-26T03:25:28,688][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [1qZd7rnjTkS1IBje6cNDYA] [2021-04-26T03:25:30,310][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 1060, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:25:40,313][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1060] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T03:26:00,313][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1060, reason: Publication{term=4, version=1060} [2021-04-26T03:26:00,317][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1060] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T03:26:10,322][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1061] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T03:26:26,300][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [23420ms] ago, timed out [8607ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [96998] [2021-04-26T03:26:30,327][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1061] is 
still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T03:26:30,336][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 1062, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:26:32,008][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1062, reason: Publication{term=4, version=1062} [2021-04-26T03:26:32,087][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [1N5lcHwyQpW6ziPzPspvqA] [2021-04-26T03:26:32,456][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [7eKSgIyUSkuniok-4ZzZYg] [2021-04-26T03:26:32,457][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [BYs2O71aQsinQq18lCZJ-A] [2021-04-26T03:26:34,677][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [DYPL2PGFR2O2WE8vMJ989w] [2021-04-26T03:26:35,733][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking unavailable shards as stale: [62QDhGtRTlOFjzlW4moO9A] [2021-04-26T03:26:36,315][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [p-LMZU7NQiGQ_ozYZWCioQ] [2021-04-26T03:26:36,315][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [_CmoFO9RQKixUD8GpR8HHw] [2021-04-26T03:26:37,210][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [82jmam4GT-GOfNOQWyMlLg] [2021-04-26T03:26:37,604][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [RjzrAjJ7RhC8WdPU7hOCwA] [2021-04-26T03:26:38,606][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [_EamR5q1SFi5jy45tMsb3A] [2021-04-26T03:26:38,607][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [jRHVDnPISzyjbajLgvfrEQ] [2021-04-26T03:26:40,698][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [xH3XHSGfTgCM7aAEcUegOw] [2021-04-26T03:26:41,289][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [lPrnxshZSI-BKCBF0RSSfw] [2021-04-26T03:26:42,012][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [5pKaWQfCS-OQKQwqW33Wiw] [2021-04-26T03:26:42,013][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [pQbOzvn_RM-8CBeIxF5Ofg] [2021-04-26T03:26:43,241][WARN ][o.e.c.r.a.AllocationService] 
[dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [aQQN1j24RFOTL8BFjHSgYg] [2021-04-26T03:26:43,589][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [gnmJaw2JReqcOco-uR6DEg] [2021-04-26T03:26:44,221][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [G86WxBt6RnegW03H6qB8RA] [2021-04-26T03:26:44,222][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [choxsmTKSjS8e-PHZ2ZuOQ] [2021-04-26T03:26:45,144][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [FveIv0PRRhyuqA8aDyTepQ] [2021-04-26T03:26:45,682][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [MT7kD7NyScaT4llB57VZJQ] [2021-04-26T03:26:46,554][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [8nO-QLp7QSyC4f8dyblZmA] [2021-04-26T03:26:46,779][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [xhVh853eQqWwTniQ-UTzLQ] [2021-04-26T03:26:47,792][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][3]]]). [2021-04-26T03:27:35,688][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 1101, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:27:37,324][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1101, reason: Publication{term=4, version=1101} [2021-04-26T03:31:36,707][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10608ms] ago, timed out [600ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [100081] [2021-04-26T03:33:11,797][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21419ms] ago, timed out [11410ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [100548] [2021-04-26T03:33:11,800][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10409ms] ago, timed out [400ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [100600] [2021-04-26T03:35:54,894][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T03:36:11,697][WARN ][o.e.t.TransportService ] 
[dev-sdnrdb-master-0] Received response for a request that has timed out, sent [31841ms] ago, timed out [16826ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [101475] [2021-04-26T03:36:55,089][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11814ms] ago, timed out [1801ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [101820] [2021-04-26T03:39:25,151][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T03:39:51,389][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 1165, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:40:01,392][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1165] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T03:40:08,632][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1165, reason: Publication{term=4, version=1165} [2021-04-26T03:40:08,788][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][2] primary-replica resync completed with 0 operations [2021-04-26T03:40:08,805][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:40:08,891][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations [2021-04-26T03:40:08,909][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations [2021-04-26T03:40:08,998][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][1] primary-replica resync completed with 0 operations [2021-04-26T03:40:09,002][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [42.3s] (37 delayed shards) [2021-04-26T03:40:09,098][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:40:09,202][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:40:42,999][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 1166, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:40:50,051][INFO 
][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1166, reason: Publication{term=4, version=1166} [2021-04-26T03:41:10,913][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1168] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T03:41:30,915][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1168] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T03:42:49,502][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [23020ms] ago, timed out [8009ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [103815] [2021-04-26T03:42:49,598][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19818ms] ago, timed out [9810ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [103841] [2021-04-26T03:43:42,813][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 1169, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:43:49,504][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T03:43:52,816][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1169] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T03:43:57,451][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1169, reason: Publication{term=4, version=1169} [2021-04-26T03:43:57,524][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [45.2s] (2 delayed shards) [2021-04-26T03:43:57,691][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [NsuK6KKVRxeBbpWFE1UrpA] [2021-04-26T03:43:58,347][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [qhCzLBngSNqYvCpLS0ASRA] [2021-04-26T03:43:58,347][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [oxllMYYXTXGb2QUknBN1DQ] [2021-04-26T03:43:58,347][WARN 
][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [OADOWys0T2m0Yit9IyDGGw] [2021-04-26T03:43:59,994][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [-zRbLwi0RwKW7RnHI4t8Ow] [2021-04-26T03:44:00,688][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [YOR4spg3Twe1cTyWaa5PEA] [2021-04-26T03:44:00,689][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [phOeku2DSsCxDWlQUNSD5A] [2021-04-26T03:44:00,689][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [4TkSjVfgQZKZWGJRyDs-RA] [2021-04-26T03:44:02,288][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [x9v1ZlEqT8qRvHPqfooMDw] [2021-04-26T03:44:02,690][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [cuGjT8m4RdWOTYFYwpbn3w] [2021-04-26T03:44:02,691][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [1_c8qqVZRp2k5JdR3C_CuA] [2021-04-26T03:44:03,805][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [BH_A-K0yTJOe5yxrHcyalQ] [2021-04-26T03:44:04,684][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [9cQXgV2qSMiCfw1-BEA94g] [2021-04-26T03:44:06,096][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [miIloJadQgydrC-NAJEDXA] [2021-04-26T03:44:06,097][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [l9MESDq5RZK2EIGEMaBvTg] [2021-04-26T03:44:06,993][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [ZNswFsvyTlijr2DeZuKdzQ] [2021-04-26T03:44:08,382][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [q0uQ_kH3R_aaF9JW8KiXbA] [2021-04-26T03:44:09,566][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [9ni-qPGKR8O5p9yDBs8mDQ] [2021-04-26T03:44:09,567][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [8MLyZYC9T-yF55jhYZcF0w] [2021-04-26T03:44:10,845][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [9tHg3Z5ERryUT8GQaZg4UQ] [2021-04-26T03:44:10,845][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [_hjwO1DpQ3a6UTpYAwWELg] [2021-04-26T03:44:11,388][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [VPwE6hWbS5OX1FFN-ZSnmg] [2021-04-26T03:44:11,913][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [P95_mzShQcqrsc-fU525Pg] [2021-04-26T03:44:12,826][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: 
[DLz8sASwRhi08OqjyWxBcQ] [2021-04-26T03:44:12,826][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [gIbo6Uy0S8Gwv65x_AcfFg] [2021-04-26T03:44:13,222][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [AmwOUKoTS2mcEy2RDxTSDw] [2021-04-26T03:44:14,526][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [hn0tapHBRJqiQzagIduX2g] [2021-04-26T03:44:14,790][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][1] marking unavailable shards as stale: [t5j-2i_vQ4aDJE6qQBQXYg] [2021-04-26T03:44:15,605][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [8fh6hpdgSQuwbYQIVlBJFw] [2021-04-26T03:44:15,606][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [U1APWtgMRJCsgpDKMv2GgA] [2021-04-26T03:44:16,804][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [-kqRd6tuSY-y-xNNYYdJUg] [2021-04-26T03:44:17,092][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [jGIPUIrKQimpaS5gUuYB4g] [2021-04-26T03:44:18,093][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [Wdi6v_9mRm-B6FcQAv6yeg] [2021-04-26T03:44:18,094][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [h9-cqviAT_KXc4eRizkCeQ] [2021-04-26T03:44:19,436][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [dBAavPujRIWSy0s0octbkg] [2021-04-26T03:44:52,817][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1223] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T03:44:57,464][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19226ms] ago, timed out [9412ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [105099] [2021-04-26T03:44:57,578][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [K_lF4XcOSeuN3c4jz2LnUA] [2021-04-26T03:44:57,750][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [-YHgjAtvQbm4mvwwDFCySg] [2021-04-26T03:44:59,206][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[historicalperformance24h-v5][2]]]). 
[2021-04-26T03:45:05,116][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][17574] overhead, spent [290ms] collecting in the last [1s] [2021-04-26T03:45:42,444][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 1228, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:45:52,448][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1228] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T03:46:12,448][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1228, reason: Publication{term=4, version=1228} [2021-04-26T03:46:12,455][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1228] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T03:46:22,462][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [1229] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T03:46:42,486][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [1229] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T03:46:42,491][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 1230, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:46:42,903][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1230, reason: Publication{term=4, version=1230} [2021-04-26T03:47:53,215][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 1231, 
delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:47:53,767][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1231, reason: Publication{term=4, version=1231} [2021-04-26T03:49:55,567][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21421ms] ago, timed out [11412ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [107617] [2021-04-26T03:49:55,570][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10411ms] ago, timed out [400ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [107668] [2021-04-26T03:51:08,309][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12610ms] ago, timed out [2602ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [108041] [2021-04-26T03:52:20,373][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12210ms] ago, timed out [2201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [108414] [2021-04-26T03:54:45,217][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T03:55:03,391][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 1297, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T03:55:04,688][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1297, reason: Publication{term=4, version=1297} [2021-04-26T03:55:04,803][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] primary-replica resync completed with 0 operations [2021-04-26T03:55:04,810][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [33422ms] ago, timed out [23414ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [109193] [2021-04-26T03:55:04,888][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22414ms] ago, timed 
out [12405ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [109237] [2021-04-26T03:55:04,889][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11602ms] ago, timed out [1584ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [109292] [2021-04-26T03:55:04,898][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:55:04,988][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][4] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,006][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,095][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][1] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,110][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,208][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][2] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,303][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,321][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][4] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,396][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,491][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][4] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,588][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57.7s] (37 delayed shards) [2021-04-26T03:55:05,595][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][1] primary-replica resync completed with 0 operations [2021-04-26T03:55:05,694][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] primary-replica resync completed with 0 operations [2021-04-26T03:56:04,430][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [Z6rccA86SOmiljHJjrmb9g] [2021-04-26T03:56:05,557][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [fsIQzDrCRSSF-FkeRM-VeQ] [2021-04-26T03:56:05,558][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [LuJQVzwgRZuxETkdPcFEpg] [2021-04-26T03:56:05,560][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [dserQr5iRB6Lic6m9prhow] [2021-04-26T03:56:16,572][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1302] is still waiting for 
{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST] [2021-04-26T03:56:25,743][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [tnSRJEJ3Q9W4MpXiRDXk_g] [2021-04-26T03:56:25,969][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [kcJWDFdFQqGeWtzBmpujoA] [2021-04-26T03:56:26,499][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [MFLrv325SuOKVQPtyDyF-g] [2021-04-26T03:56:26,499][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [jOdOzvtMQ0K5oiBP5AbH4g] [2021-04-26T03:56:28,818][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [tvOq73mOQNqP72PtvEKFQw] [2021-04-26T03:56:29,066][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [wisI5oqNQgCenGnUT4KCFA] [2021-04-26T03:56:30,023][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [YaYzPY01RrijCSBAFWPn5g] [2021-04-26T03:56:30,023][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [k5WHcdfbSzqRIB1k19iJmw] [2021-04-26T03:56:31,591][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [SLFTtNlcQs2uD7-z-FkVLA] [2021-04-26T03:56:31,891][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [XxICNb_kToKUfz4gWF7ogw] [2021-04-26T03:56:32,391][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [5tPpEyPnRUK7Rk_2jcGbIQ] [2021-04-26T03:56:33,421][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [kqN6lSGmR4OGZxA1-5etAg] [2021-04-26T03:56:33,721][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [7Okc4f5fSg6MVubQ_YN32w] [2021-04-26T03:56:34,926][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [bLO9TqxBREWnvw6O7BMsLA] [2021-04-26T03:56:34,927][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [SiTWJ-k3RKmGDIJts819ZA] [2021-04-26T03:56:35,381][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [Fr6jfSApTgmPDxDRgd63TQ] [2021-04-26T03:56:36,122][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [T_1FP_a0TNCikFVQ7KZ9eA] [2021-04-26T03:56:36,397][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [_PTgCGCXRrOzxGBQJii5xQ] [2021-04-26T03:56:37,393][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [nzpNuVpPTBWs4e8B-yjL4A] [2021-04-26T03:56:37,394][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: 
[pYVzqHL1SsCVMY5G54Ivbw] [2021-04-26T03:56:38,205][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [5TYWJTZnQIWM8flNaKiibg] [2021-04-26T03:56:38,918][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [Thy8NgMyTfSODl87nI14FQ] [2021-04-26T03:56:39,216][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [AIQ9wCl3TXa5lDI0JRePaA] [2021-04-26T03:56:40,296][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [lkr_WIa0QAyKzZcC9wYmiw] [2021-04-26T03:56:40,296][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [roHTIxihRuqUlHgmcTlseg] [2021-04-26T03:56:41,297][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [gp7SVAU1QzmTBXwIjqRg8w] [2021-04-26T03:56:42,754][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [UP97KIi2SZqM2GtyPE22kA] [2021-04-26T03:56:43,103][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [Nw4vjDGrTGGWqh-o9aAlAw] [2021-04-26T03:56:43,894][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [iLLQvTp3SLaEtmjdf4dXug] [2021-04-26T03:56:43,895][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [c22VcTENTwyMbhmETr1q_A] [2021-04-26T03:56:44,673][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [T9r7Z9xqSaOqdGKYWg0jrA] [2021-04-26T03:56:44,902][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [GqaYQuCKRGSWc4I-HJThKQ] [2021-04-26T03:56:46,089][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [oP7P0kpqTR6deiLZFWVY5Q] [2021-04-26T03:56:46,488][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][3]]]). 
[2021-04-26T03:57:46,172][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22618ms] ago, timed out [12611ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [110857] [2021-04-26T03:57:46,173][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11608ms] ago, timed out [1601ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [110904] [2021-04-26T04:02:22,584][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12212ms] ago, timed out [2201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [112334] [2021-04-26T04:04:19,360][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10410ms] ago, timed out [401ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [112925] [2021-04-26T04:05:04,469][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10009ms] ago, timed out [0ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [113135] [2021-04-26T04:05:26,436][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 1359, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:05:26,505][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1359, reason: Publication{term=4, version=1359} [2021-04-26T04:11:54,847][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 1360, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:12:04,852][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [1360] is still waiting for 
{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T04:12:24,852][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1360, reason: Publication{term=4, version=1360} [2021-04-26T04:12:24,858][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [1360] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T04:13:42,564][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [27429ms] ago, timed out [17422ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [115376] [2021-04-26T04:13:42,567][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16421ms] ago, timed out [6405ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [115415] [2021-04-26T04:15:57,909][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 1361, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:15:58,081][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1361, reason: Publication{term=4, version=1361} [2021-04-26T04:18:56,842][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 1362, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:19:06,847][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1362] is still waiting for {dev-sdnrdb-master-2}{b2WbhbUSQN2vhY_7wRUxuA}{v4j-X7ECQ12w4rU7LQJ87A}{fd00:100:0:0:0:0:0:24d9}{[fd00:100::24d9]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-cdwcb}{nd_ZdWVISA-ASGCoxLpUYA}{gkNhrWK4Sais9PJMlMNkUg}{fd00:100:0:0:0:0:0:206e}{[fd00:100::206e]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T04:19:13,037][WARN ][o.e.t.TransportService ] 
[dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15216ms] ago, timed out [5204ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}], id [116961] [2021-04-26T04:19:13,115][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1362, reason: Publication{term=4, version=1362} [2021-04-26T04:26:52,005][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 1363, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:26:52,107][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1363, reason: Publication{term=4, version=1363} [2021-04-26T04:31:55,292][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 1364, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:32:05,298][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [1364] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} [SENT_APPLY_COMMIT] [2021-04-26T04:32:09,900][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1364, reason: Publication{term=4, version=1364} [2021-04-26T04:33:10,145][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 1365, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:33:10,380][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1365, reason: Publication{term=4, version=1365} [2021-04-26T04:39:02,794][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing 
leader], term: 4, version: 1366, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:39:04,964][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1366, reason: Publication{term=4, version=1366} [2021-04-26T04:54:38,694][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 1367, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T04:54:38,918][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1367, reason: Publication{term=4, version=1367} [2021-04-26T05:09:58,091][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r} join existing leader], term: 4, version: 1368, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}} [2021-04-26T05:09:58,516][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-kdkwd}{W-ycVbsFSIitXRFv1MN0Dw}{3n80UzgWT5q-62u9eaxn8w}{fd00:100:0:0:0:0:0:5da8}{[fd00:100::5da8]:9300}{r}}, term: 4, version: 1368, reason: Publication{term=4, version=1368} [2021-04-26T05:09:59,800][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 1369, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T05:10:06,906][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1369, reason: Publication{term=4, version=1369} [2021-04-26T05:14:17,347][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T05:14:42,490][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 1432, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T05:14:47,154][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1432, 
reason: Publication{term=4, version=1432} [2021-04-26T05:14:47,206][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][4] primary-replica resync completed with 0 operations [2021-04-26T05:14:47,306][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][3] primary-replica resync completed with 0 operations [2021-04-26T05:14:47,388][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] primary-replica resync completed with 0 operations [2021-04-26T05:14:47,491][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] primary-replica resync completed with 0 operations [2021-04-26T05:14:47,594][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][2] primary-replica resync completed with 0 operations [2021-04-26T05:14:47,795][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][3] primary-replica resync completed with 0 operations [2021-04-26T05:14:47,888][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations [2021-04-26T05:14:47,988][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] primary-replica resync completed with 0 operations [2021-04-26T05:14:48,089][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations [2021-04-26T05:14:48,092][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [54.3s] (36 delayed shards) [2021-04-26T05:14:48,099][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] primary-replica resync completed with 0 operations [2021-04-26T05:14:48,388][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations [2021-04-26T05:15:42,925][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [VwvbUXQrQGGDmLLVg89hKA] [2021-04-26T05:15:43,202][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [QwkU9_VkS4mpbE8AOtRrqA] [2021-04-26T05:15:43,590][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [vCpODUu6S9q0mQPCpraJYg] [2021-04-26T05:15:43,592][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [sfMLDUVKR-6kU4qwmZ7zGQ] [2021-04-26T05:15:45,588][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [mFLZTxaoRXuNYJLkhH8_fg] [2021-04-26T05:15:45,957][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [QQz17rJUTS65nrwbb8D43g] [2021-04-26T05:15:45,988][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [BRITO7BKRYSWm1JUNj5E4A] [2021-04-26T05:15:45,988][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [pZ0s63ksR0yiH2f4d8MGvA] [2021-04-26T05:15:47,520][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [pCaRZEHcRCyYSP_Yuv0IFw] [2021-04-26T05:15:47,842][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable 
shards as stale: [V2b7aPO0TSevnr8zGAatGw] [2021-04-26T05:15:47,842][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [OjYgNAbiRe2GETKGdaDxaQ] [2021-04-26T05:15:48,489][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [NzbubuxwRwSu4LUOetusOw] [2021-04-26T05:15:49,420][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [AtIHF2FdR2OF79t9IjvgoQ] [2021-04-26T05:15:49,706][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [s264E72zTFeMSl-EGy1jvw] [2021-04-26T05:15:50,862][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [YY-SesbeSsCluPVFRuDyuw] [2021-04-26T05:15:50,888][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [tDRWV5ZiQWGcwQ3vEzQXfA] [2021-04-26T05:15:51,388][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [bo3AYhNtRYmF1NMHe9HszQ] [2021-04-26T05:15:52,290][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [DQUSrZkFQWy9EsViLIweJA] [2021-04-26T05:15:52,622][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][1] marking unavailable shards as stale: [v3DLvfJkRV2QCgHXRHD92w] [2021-04-26T05:15:54,029][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [Zo36c44mSuyrFd2xmK324A] [2021-04-26T05:15:54,029][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [SLDRoCbOTtKznAY4i6furA] [2021-04-26T05:15:54,226][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [1NuF3R2HRdSzWVMzoQIPug] [2021-04-26T05:15:55,342][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [crivf7MtQj6Mc6ZiDltfQg] [2021-04-26T05:15:56,220][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [k_BBEWNLSGC-3CbmBSvuuQ] [2021-04-26T05:15:56,220][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [WBgU_iTwSxm0YlaFZHjSsA] [2021-04-26T05:15:56,707][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [UMWAfAV8T9WA4hNbn3BK6w] [2021-04-26T05:15:58,064][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [38_wTZEfRiizJc2tbaKWiw] [2021-04-26T05:15:58,318][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [TZm0CI1GTn2S6BZTRIdd8w] [2021-04-26T05:15:59,484][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [s5QXVTKVRrmEJKPuO_LObg] [2021-04-26T05:15:59,484][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [bCRz6ACER760oXRfXE_-vg] [2021-04-26T05:16:00,788][WARN 
][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][1] marking unavailable shards as stale: [8a-chuEqSVyOAs7mo95YxA] [2021-04-26T05:16:09,743][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [3ySp3Pe8QSeWqYfVdNrwZA] [2021-04-26T05:16:10,201][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [xLBymgEeR7SS8uOrhf9yzg] [2021-04-26T05:16:11,001][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [BatL5-5RQmqKm_kfeQUG6g] [2021-04-26T05:16:11,002][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [LvRY_XryQS2-F4QyQjOyog] [2021-04-26T05:16:11,861][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [fFPTA5i0ScipiCdc67v5uA] [2021-04-26T05:16:12,169][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][4]]]). [2021-04-26T05:16:15,516][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 1492, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T05:16:25,519][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1492] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T05:16:45,520][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1492, reason: Publication{term=4, version=1492} [2021-04-26T05:16:45,523][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1492] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T05:16:55,527][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1493] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T05:17:03,317][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-26T05:17:14,899][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [34078ms] ago, timed out [24026ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [135174] [2021-04-26T05:17:14,922][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [45287ms] ago, timed out [35279ms] ago, action 
[internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [135109] [2021-04-26T05:17:14,922][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [56297ms] ago, timed out [46288ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [135057] [2021-04-26T05:17:14,940][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [26629ms] ago, timed out [11616ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [135233] [2021-04-26T05:17:14,940][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [29475ms] ago, timed out [14419ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [135210] [2021-04-26T05:17:15,549][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1493] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-04-26T05:17:15,552][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 1494, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T05:17:15,629][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1494, reason: Publication{term=4, version=1494} [2021-04-26T05:17:15,645][WARN ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] failed to validate incoming join request from node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}] org.elasticsearch.transport.NodeDisconnectedException: [dev-sdnrdb-master-1][[fd00:100::5daa]:9300][internal:cluster/coordination/join/validate] disconnected [2021-04-26T05:17:21,337][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} join existing leader], term: 4, version: 1495, delta: added {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T05:17:31,339][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1495] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T05:17:38,917][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added 
{{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1495, reason: Publication{term=4, version=1495} [2021-04-26T05:17:48,925][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [1496] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T05:18:08,950][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [1496] is still waiting for {dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-26T05:19:33,210][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15813ms] ago, timed out [5804ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}], id [136172] [2021-04-26T05:19:38,950][WARN ][o.e.c.c.LagDetector ] [dev-sdnrdb-master-0] node [{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}] is lagging at cluster state version [1495], although publication of cluster state version [1496] completed [1.5m] ago [2021-04-26T05:19:38,956][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr} reason: lagging], term: 4, version: 1497, delta: removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}} [2021-04-26T05:19:39,558][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-1}{IeNHB3AjSESvvbc3slnMCg}{ivncIJVeShmgo2v0UlU9Sg}{fd00:100:0:0:0:0:0:5daa}{[fd00:100::5daa]:9300}{dmr}}, term: 4, version: 1497, reason: Publication{term=4, version=1497}