Results

           10:55:35.68 
 10:55:35.69 Welcome to the Bitnami elasticsearch container
 10:55:35.78 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
 10:55:35.79 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
 10:55:35.80 
 10:55:35.88 INFO  ==> ** Starting Elasticsearch setup **
 10:55:36.29 INFO  ==> Configuring/Initializing Elasticsearch...
 10:55:36.79 INFO  ==> Setting default configuration
 10:55:36.89 INFO  ==> Configuring Elasticsearch cluster settings...
 10:55:37.18 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-1: fd00:100::520f 10.242.82.15, will use fd00:100::520f
 10:55:37.44 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-1: fd00:100::520f 10.242.82.15, will use fd00:100::520f
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
 10:55:58.50 INFO  ==> ** Elasticsearch setup finished! **
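The repeated "Found more than one IP address associated to hostname" warnings above come from the Bitnami setup script on this dual-stack cluster: the pod hostname resolves to both an IPv6 and an IPv4 address, and the script picks the IPv6 one. A minimal Python sketch of the same kind of lookup, assuming it is run inside the pod (the hostname and port are taken from the log; the use of Python's socket module is not):

    import socket

    # Resolve the pod hostname as seen from inside the container. On a dual-stack
    # cluster this returns both an AAAA record (fd00:100::520f) and an A record
    # (10.242.82.15), which is exactly the situation the setup script warns about.
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
            "dev-sdnrdb-master-1", 9300, proto=socket.IPPROTO_TCP):
        print(socket.AddressFamily(family).name, sockaddr[0])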

 10:55:58.78 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-05-07T10:56:38,481][INFO ][o.e.n.Node               ] [dev-sdnrdb-master-1] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/4.15.0-117-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2021-05-07T10:56:38,485][INFO ][o.e.n.Node               ] [dev-sdnrdb-master-1] JVM home [/opt/bitnami/java]
[2021-05-07T10:56:38,486][INFO ][o.e.n.Node               ] [dev-sdnrdb-master-1] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-10431963527390529339, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-05-07T10:56:54,784][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [aggs-matrix-stats]
[2021-05-07T10:56:54,786][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [analysis-common]
[2021-05-07T10:56:54,786][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [geo]
[2021-05-07T10:56:54,787][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [ingest-common]
[2021-05-07T10:56:54,787][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [ingest-geoip]
[2021-05-07T10:56:54,787][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [ingest-user-agent]
[2021-05-07T10:56:54,788][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [kibana]
[2021-05-07T10:56:54,788][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [lang-expression]
[2021-05-07T10:56:54,788][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [lang-mustache]
[2021-05-07T10:56:54,789][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [lang-painless]
[2021-05-07T10:56:54,789][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [mapper-extras]
[2021-05-07T10:56:54,790][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [parent-join]
[2021-05-07T10:56:54,790][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [percolator]
[2021-05-07T10:56:54,790][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [rank-eval]
[2021-05-07T10:56:54,790][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [reindex]
[2021-05-07T10:56:54,791][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [repository-url]
[2021-05-07T10:56:55,833][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [tasks]
[2021-05-07T10:56:55,834][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded module [transport-netty4]
[2021-05-07T10:56:55,835][INFO ][o.e.p.PluginsService     ] [dev-sdnrdb-master-1] loaded plugin [repository-s3]
[2021-05-07T10:56:56,384][INFO ][o.e.e.NodeEnvironment    ] [dev-sdnrdb-master-1] using [1] data paths, mounts [[/bitnami/elasticsearch/data (172.16.10.203:/dockerdata-nfs/dev/elastic-master-1)]], net usable_space [179.4gb], net total_space [195.8gb], types [nfs4]
[2021-05-07T10:56:56,385][INFO ][o.e.e.NodeEnvironment    ] [dev-sdnrdb-master-1] heap size [123.7mb], compressed ordinary object pointers [true]
[2021-05-07T10:56:56,984][INFO ][o.e.n.Node               ] [dev-sdnrdb-master-1] node name [dev-sdnrdb-master-1], node ID [p1mQuCjwQb6rg6KjOXT2Mw], cluster name [sdnrdb-cluster]
[2021-05-07T10:57:44,582][INFO ][o.e.t.NettyAllocator     ] [dev-sdnrdb-master-1] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2021-05-07T10:57:45,487][INFO ][o.e.d.DiscoveryModule    ] [dev-sdnrdb-master-1] using discovery type [zen] and seed hosts providers [settings]
[2021-05-07T10:57:49,787][WARN ][o.e.g.DanglingIndicesState] [dev-sdnrdb-master-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-05-07T10:57:52,183][INFO ][o.e.n.Node               ] [dev-sdnrdb-master-1] initialized
[2021-05-07T10:57:52,183][INFO ][o.e.n.Node               ] [dev-sdnrdb-master-1] starting ...
[2021-05-07T10:57:53,590][INFO ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] publish_address {[fd00:100::520f]:9300}, bound_addresses {[::]:9300}
[2021-05-07T10:57:55,090][WARN ][o.e.t.TcpTransport       ] [dev-sdnrdb-master-1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.82.15:9300, remoteAddress=/10.242.208.82:54400}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T10:57:55,480][WARN ][o.e.t.TcpTransport       ] [dev-sdnrdb-master-1] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.242.82.15:9300, remoteAddress=/10.242.208.82:54412}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T10:57:55,991][INFO ][o.e.b.BootstrapChecks    ] [dev-sdnrdb-master-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-05-07T10:57:58,186][INFO ][o.e.c.c.Coordinator      ] [dev-sdnrdb-master-1] setting initial configuration to VotingConfiguration{3Xx5SbxeTmKeLhGAO6eL_w,p1mQuCjwQb6rg6KjOXT2Mw,{bootstrap-placeholder}-dev-sdnrdb-master-2}
[2021-05-07T10:58:00,382][INFO ][o.e.c.c.JoinHelper       ] [dev-sdnrdb-master-1] failed to join {dev-sdnrdb-master-0}{3Xx5SbxeTmKeLhGAO6eL_w}{4J8n6sUMQjGsiN3l3Dlq1Q}{fd00:100:0:0:0:0:0:d052}{[fd00:100::d052]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}, targetNode={dev-sdnrdb-master-0}{3Xx5SbxeTmKeLhGAO6eL_w}{4J8n6sUMQjGsiN3l3Dlq1Q}{fd00:100:0:0:0:0:0:d052}{[fd00:100::d052]:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-0][[fd00:100::d052]:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 1 does not match current term 2
	at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:225) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:1013) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
	at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T10:58:01,281][INFO ][o.e.c.c.JoinHelper       ] [dev-sdnrdb-master-1] failed to join {dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}, minimumTerm=1, optionalJoin=Optional[Join{term=2, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}, targetNode={dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-1][[fd00:100::520f]:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}
	at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T10:58:01,391][INFO ][o.e.c.c.JoinHelper       ] [dev-sdnrdb-master-1] failed to join {dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}, minimumTerm=2, optionalJoin=Optional[Join{term=3, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}, targetNode={dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-1][[fd00:100::520f]:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}
	at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T10:58:01,981][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-0}{3Xx5SbxeTmKeLhGAO6eL_w}{4J8n6sUMQjGsiN3l3Dlq1Q}{fd00:100:0:0:0:0:0:d052}{[fd00:100::d052]:9300}{dmr} elect leader, {dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 4, version: 1, delta: master node changed {previous [], current [{dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}]}, added {{dev-sdnrdb-master-0}{3Xx5SbxeTmKeLhGAO6eL_w}{4J8n6sUMQjGsiN3l3Dlq1Q}{fd00:100:0:0:0:0:0:d052}{[fd00:100::d052]:9300}{dmr}}
[2021-05-07T10:58:03,580][INFO ][o.e.c.c.CoordinationState] [dev-sdnrdb-master-1] cluster UUID set to [oTecjjroR8ikJTGrM22Mpw]
[2021-05-07T10:58:03,690][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-1] [gc][11] overhead, spent [320ms] collecting in the last [1s]
[2021-05-07T10:58:04,094][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] master node changed {previous [], current [{dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}]}, added {{dev-sdnrdb-master-0}{3Xx5SbxeTmKeLhGAO6eL_w}{4J8n6sUMQjGsiN3l3Dlq1Q}{fd00:100:0:0:0:0:0:d052}{[fd00:100::d052]:9300}{dmr}}, term: 4, version: 1, reason: Publication{term=4, version=1}
[2021-05-07T10:58:04,283][INFO ][o.e.h.AbstractHttpServerTransport] [dev-sdnrdb-master-1] publish_address {[fd00:100::520f]:9200}, bound_addresses {[::]:9200}
[2021-05-07T10:58:04,284][INFO ][o.e.n.Node               ] [dev-sdnrdb-master-1] started
[2021-05-07T10:58:04,793][INFO ][o.e.g.GatewayService     ] [dev-sdnrdb-master-1] recovered [0] indices into cluster_state
[2021-05-07T10:58:09,691][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 3, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T10:58:10,088][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 3, reason: Publication{term=4, version=3}
[2021-05-07T11:00:19,707][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 4, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:00:21,505][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 4, reason: Publication{term=4, version=4}
[2021-05-07T11:00:42,809][INFO ][o.e.c.s.ClusterSettings  ] [dev-sdnrdb-master-1] updating [action.auto_create_index] from [true] to [false]
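The ClusterSettings entry above records action.auto_create_index being switched from true to false at runtime. A minimal sketch of the kind of dynamic-settings request that produces such a line, assuming the HTTP endpoint published on port 9200 is reachable as localhost and using the Python requests library (both assumptions, not taken from the log; whether the original change was persistent or transient is also not visible here):

    import requests

    # Disable automatic index creation cluster-wide; on the elected master this
    # shows up as an "updating [action.auto_create_index] from [true] to [false]" line.
    resp = requests.put(
        "http://localhost:9200/_cluster/settings",
        json={"persistent": {"action.auto_create_index": "false"}},
    )
    resp.raise_for_status()
    print(resp.json())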
[2021-05-07T11:00:45,381][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [historicalperformance24h-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:00:59,282][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [connectionlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:04,288][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [faultcurrent-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:10,016][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [mediator-server-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:14,088][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [maintenancemode-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:20,804][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[maintenancemode-v5][4], [maintenancemode-v5][3], [maintenancemode-v5][0], [maintenancemode-v5][1], [maintenancemode-v5][2]]]).
[2021-05-07T11:01:21,195][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [inventoryequipment-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:25,810][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [historicalperformance15min-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:29,915][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[historicalperformance15min-v5][1], [historicalperformance15min-v5][2]]]).
[2021-05-07T11:01:30,307][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [networkelement-connection-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:34,783][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [guicutthrough-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:38,355][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [eventlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:42,885][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-1] [faultlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-05-07T11:01:47,244][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultlog-v5][4], [faultlog-v5][1], [faultlog-v5][0], [faultlog-v5][3]]]).
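Each MetadataCreateIndexService entry above creates an SDNR index with 5 primary shards and 1 replica ([5]/[1]), and AllocationService reports the cluster returning to GREEN once the shards start. A minimal sketch of an equivalent create-and-wait sequence, with a hypothetical index name and an assumed localhost endpoint:

    import requests

    ES = "http://localhost:9200"  # assumed endpoint, not taken from the log

    # Create an index with the same layout the log reports: 5 primaries, 1 replica.
    requests.put(
        f"{ES}/example-v5",  # hypothetical index name
        json={"settings": {"number_of_shards": 5, "number_of_replicas": 1}},
    ).raise_for_status()

    # Wait for the index to go green, mirroring the YELLOW -> GREEN transitions
    # logged by AllocationService once the replica shards are assigned.
    health = requests.get(
        f"{ES}/_cluster/health/example-v5",
        params={"wait_for_status": "green", "timeout": "30s"},
    ).json()
    print(health["status"])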
[2021-05-07T11:13:19,419][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T11:13:21,625][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10211ms] ago, timed out [201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [4386]
[2021-05-07T11:13:31,056][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [20620ms] ago, timed out [10608ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [4380]
[2021-05-07T11:13:43,980][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 74, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:13:53,986][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [74] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:14:12,586][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [68135ms] ago, timed out [53114ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [4353]
[2021-05-07T11:14:13,988][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 74, reason: Publication{term=4, version=74}
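At this point the master has removed dev-sdnrdb-master-2 after the follower checks kept failing. A minimal sketch of how one might confirm the remaining membership from any reachable node (endpoint assumed, not from the log):

    import requests

    # List the nodes the elected master currently knows about; right after the
    # node-left event above, dev-sdnrdb-master-2 would be absent from this output.
    print(requests.get(
        "http://localhost:9200/_cat/nodes",
        params={"v": "true", "h": "name,ip,node.role,master"},
    ).text)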
[2021-05-07T11:14:14,384][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [guicutthrough-v5][0] primary-replica resync completed with 0 operations
[2021-05-07T11:14:14,386][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [guicutthrough-v5][3] primary-replica resync completed with 0 operations
[2021-05-07T11:14:14,485][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [inventoryequipment-v5][3] primary-replica resync completed with 0 operations
[2021-05-07T11:14:14,505][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [inventoryequipment-v5][0] primary-replica resync completed with 0 operations
[2021-05-07T11:14:14,605][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [connectionlog-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T11:14:14,693][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [connectionlog-v5][1] primary-replica resync completed with 0 operations
[2021-05-07T11:14:14,705][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [faultlog-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T11:14:14,889][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [faultlog-v5][1] primary-replica resync completed with 0 operations
[2021-05-07T11:14:14,993][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T11:14:15,081][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [networkelement-connection-v5][1] primary-replica resync completed with 0 operations
[2021-05-07T11:14:15,085][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [faultcurrent-v5][3] primary-replica resync completed with 0 operations
[2021-05-07T11:14:15,188][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [faultcurrent-v5][0] primary-replica resync completed with 0 operations
[2021-05-07T11:14:15,199][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-1] scheduling reroute for delayed shards in [28.3s] (36 delayed shards)
[2021-05-07T11:14:15,284][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [maintenancemode-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T11:14:15,285][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [31.2s] publication of cluster state version [74] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T11:14:15,480][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [maintenancemode-v5][1] primary-replica resync completed with 0 operations
[2021-05-07T11:14:25,959][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [25885ms] ago, timed out [15875ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [4609]
[2021-05-07T11:14:25,962][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [14874ms] ago, timed out [4804ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [4637]
[2021-05-07T11:14:44,435][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][2] marking unavailable shards as stale: [BxlA3vucTOeSUjnMONXkGw]
[2021-05-07T11:14:45,644][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][1] marking unavailable shards as stale: [8po1g5EKSx2WC-oaR-h_7w]
[2021-05-07T11:14:45,645][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][4] marking unavailable shards as stale: [rMrEqseNQS2_KOHoqUZApQ]
[2021-05-07T11:14:45,682][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][0] marking unavailable shards as stale: [NfTUAIGMTWKnFqbvM7sUGQ]
[2021-05-07T11:14:48,188][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][1] marking unavailable shards as stale: [tzAJILUWQPicLGSdEqp4BA]
[2021-05-07T11:14:48,842][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][2] marking unavailable shards as stale: [-bgWX-YaT3iDSYkL3ys0kQ]
[2021-05-07T11:14:48,843][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][0] marking unavailable shards as stale: [7xCtYkcpRUyuwOV9zsQ6IA]
[2021-05-07T11:14:48,844][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][1] marking unavailable shards as stale: [cMCScbDyRgmTdlpvinP_fg]
[2021-05-07T11:14:51,823][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][2] marking unavailable shards as stale: [rbgf9xCCSgaNebz5jFPoqg]
[2021-05-07T11:14:52,204][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][4] marking unavailable shards as stale: [4T6bjd9fTI29SlRNgs5RZw]
[2021-05-07T11:14:52,205][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][1] marking unavailable shards as stale: [C_V-2ExgTuKAauVVpTzMww]
[2021-05-07T11:14:53,700][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][3] marking unavailable shards as stale: [DFaz7ZPRQuy4ZuUwcB34xg]
[2021-05-07T11:14:54,690][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][0] marking unavailable shards as stale: [GqkYz89nRa-XuCAoiULWMw]
[2021-05-07T11:14:56,753][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][2] marking unavailable shards as stale: [6qJ442lGTw-pK9MnZPpAtg]
[2021-05-07T11:14:56,753][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][4] marking unavailable shards as stale: [vXpoeelhSoubTAJ4FofKSQ]
[2021-05-07T11:14:58,516][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][1] marking unavailable shards as stale: [Kf7s-vSwQs66h-vgV31azw]
[2021-05-07T11:14:59,583][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][1] marking unavailable shards as stale: [wzBlbMhuQfanPxTU-fw-YA]
[2021-05-07T11:15:00,635][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][0] marking unavailable shards as stale: [g5IF2TyjTkOAOPecaGivoQ]
[2021-05-07T11:15:00,635][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][4] marking unavailable shards as stale: [SUfWBdhpRGiRd4wll9OWtw]
[2021-05-07T11:15:02,121][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][2] marking unavailable shards as stale: [-a3NnYrcQ1CX6YpwuN-VCA]
[2021-05-07T11:15:04,475][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][3] marking unavailable shards as stale: [6rsNwJhTShKVXy48SBp7FQ]
[2021-05-07T11:15:05,205][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][1] marking unavailable shards as stale: [CNxexfqiSTWDn1BXeKUPTQ]
[2021-05-07T11:15:06,326][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][2] marking unavailable shards as stale: [7CqtW1J7Twu89uIa-elpbw]
[2021-05-07T11:15:06,326][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][0] marking unavailable shards as stale: [cU-wNb4TTGWAywrwq1ZS1w]
[2021-05-07T11:15:07,635][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][4] marking unavailable shards as stale: [367J6g6xQCCGHAdD75A0Xg]
[2021-05-07T11:15:08,505][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][4] marking unavailable shards as stale: [DUVD6gT4T-u7wfBq-6Q6_g]
[2021-05-07T11:15:11,234][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][1] marking unavailable shards as stale: [VM764zGsTQqxXNCX398uuA]
[2021-05-07T11:15:11,234][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][1] marking unavailable shards as stale: [6Dbp5azlQPqBvA-pv6mabQ]
[2021-05-07T11:15:16,480][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][2] marking unavailable shards as stale: [7jrUFMjqSl6ORsyLrHRi5Q]
[2021-05-07T11:15:17,732][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][0] marking unavailable shards as stale: [Dkuj0sP9SwOxR7VquD9gxg]
[2021-05-07T11:15:18,491][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 120, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:15:28,495][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [120] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T11:15:38,600][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [18012ms] ago, timed out [8004ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [5391]
[2021-05-07T11:15:48,496][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 120, reason: Publication{term=4, version=120}
[2021-05-07T11:15:48,502][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [120] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:15:58,582][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [121] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:16:18,586][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [121] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:16:18,588][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][2] marking unavailable shards as stale: [OObexzOpSfaL-eUeOrrdEw]
[2021-05-07T11:16:18,588][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][0] marking unavailable shards as stale: [s8i3IlZLTay1SGVL0aDn1Q]
[2021-05-07T11:16:18,589][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][3] marking unavailable shards as stale: [ADirmmtsTfK-mnSIf_3YAQ]
[2021-05-07T11:16:28,594][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [122] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:16:48,596][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [122] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:16:49,283][WARN ][o.e.i.c.IndicesClusterStateService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][0] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [historicalperformance24h-v5][0]: Recovery failed from {dev-sdnrdb-master-0}{3Xx5SbxeTmKeLhGAO6eL_w}{4J8n6sUMQjGsiN3l3Dlq1Q}{fd00:100:0:0:0:0:0:d052}{[fd00:100::d052]:9300}{dmr} into {dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.onException(PeerRecoveryTargetService.java:653) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.handleException(PeerRecoveryTargetService.java:587) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:235) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-0][[fd00:100::d052]:9300][internal:index/shard/recovery/start_recovery]
Caused by: java.lang.IllegalStateException: no local checkpoint tracking information available
	at org.elasticsearch.index.seqno.ReplicationTracker.initiateTracking(ReplicationTracker.java:1158) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.index.shard.IndexShard.initiateTracking(IndexShard.java:2299) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$13(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$runUnderPrimaryPermit$19(RecoverySourceHandler.java:385) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:89) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.runUnderPrimaryPermit(RecoverySourceHandler.java:363) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$14(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:112) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:144) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:127) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.StepListener.innerOnResponse(StepListener.java:62) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:40) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$prepareTargetForTranslog$30(RecoverySourceHandler.java:648) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.support.RetryableAction$RetryingListener.onResponse(RetryableAction.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:54) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1162) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:213) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2021-05-07T11:16:49,387][WARN ][o.e.i.c.IndicesClusterStateService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][2] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [historicalperformance24h-v5][2]: Recovery failed from {dev-sdnrdb-master-0}{3Xx5SbxeTmKeLhGAO6eL_w}{4J8n6sUMQjGsiN3l3Dlq1Q}{fd00:100:0:0:0:0:0:d052}{[fd00:100::d052]:9300}{dmr} into {dev-sdnrdb-master-1}{p1mQuCjwQb6rg6KjOXT2Mw}{Qso9ZSQySjWaf56U2Y48jg}{fd00:100:0:0:0:0:0:520f}{[fd00:100::520f]:9300}{dmr}
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.onException(PeerRecoveryTargetService.java:653) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.handleException(PeerRecoveryTargetService.java:587) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:235) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-0][[fd00:100::d052]:9300][internal:index/shard/recovery/start_recovery]
Caused by: java.lang.IllegalStateException: no local checkpoint tracking information available
	at org.elasticsearch.index.seqno.ReplicationTracker.initiateTracking(ReplicationTracker.java:1158) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.index.shard.IndexShard.initiateTracking(IndexShard.java:2299) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$13(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$runUnderPrimaryPermit$19(RecoverySourceHandler.java:385) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:89) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.runUnderPrimaryPermit(RecoverySourceHandler.java:363) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$recoverToTarget$14(RecoverySourceHandler.java:310) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:112) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:144) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:127) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.StepListener.innerOnResponse(StepListener.java:62) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:40) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$prepareTargetForTranslog$30(RecoverySourceHandler.java:648) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.support.RetryableAction$RetryingListener.onResponse(RetryableAction.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:54) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1162) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:213) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2021-05-07T11:16:58,680][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [123] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:17:18,684][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [123] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:17:48,587][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [120], although publication of cluster state version [121] completed [1.5m] ago
[2021-05-07T11:17:48,595][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 124, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:17:49,498][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 124, reason: Publication{term=4, version=124}
[2021-05-07T11:17:49,883][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-1] scheduling reroute for delayed shards in [58.7s] (2 delayed shards)
[2021-05-07T11:17:51,453][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][1] marking unavailable shards as stale: [LvV_y1lpSP2W3_I29wwXrA]
[2021-05-07T11:18:58,602][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [129] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T11:19:17,671][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [19417ms] ago, timed out [9408ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [6523]
[2021-05-07T11:19:18,233][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][1] marking unavailable shards as stale: [77tKPNhmQHOptij_Sqs4tg]
[2021-05-07T11:19:19,004][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][4] marking unavailable shards as stale: [iGkZiSfpS2SAXgLcFn6yCA]
[2021-05-07T11:19:20,139][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[connectionlog-v5][4]]]).
[2021-05-07T11:19:36,209][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 134, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:19:46,214][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [134] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:20:06,214][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 134, reason: Publication{term=4, version=134}
[2021-05-07T11:20:06,217][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [134] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:20:16,230][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [135] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:20:32,544][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [44051ms] ago, timed out [34040ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [6778]
[2021-05-07T11:20:32,547][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [33039ms] ago, timed out [23033ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [6819]
[2021-05-07T11:20:32,547][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [55059ms] ago, timed out [45052ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [6731]
[2021-05-07T11:20:36,264][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [135] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:20:36,270][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 136, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:20:37,751][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 136, reason: Publication{term=4, version=136}
[2021-05-07T11:20:37,779][WARN ][o.e.c.c.Coordinator      ] [dev-sdnrdb-master-1] failed to validate incoming join request from node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}]
org.elasticsearch.transport.NodeDisconnectedException: [dev-sdnrdb-master-2][[fd00:100::29af]:9300][internal:cluster/coordination/join/validate] disconnected
[2021-05-07T11:21:20,411][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 137, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:21:30,415][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [137] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:21:50,417][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 137, reason: Publication{term=4, version=137}
[2021-05-07T11:21:50,429][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [137] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:22:00,496][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [138] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:22:07,510][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [17090ms] ago, timed out [2001ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [7335]
[2021-05-07T11:22:20,522][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30.1s] publication of cluster state version [138] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:22:20,527][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 139, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:22:21,026][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 139, reason: Publication{term=4, version=139}
[2021-05-07T11:22:47,002][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 140, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:22:57,006][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [140] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:22:59,904][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 140, reason: Publication{term=4, version=140}
[2021-05-07T11:23:49,376][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [171] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:24:09,384][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [171] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:24:19,391][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [172] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:24:38,212][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T11:24:39,424][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [172] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:24:39,495][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 173, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:24:39,585][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [46266ms] ago, timed out [36257ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [8334]
[2021-05-07T11:24:39,586][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [35257ms] ago, timed out [25420ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [8371]
[2021-05-07T11:24:39,587][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [24420ms] ago, timed out [14412ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [8437]
[2021-05-07T11:24:41,089][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 173, reason: Publication{term=4, version=173}
[2021-05-07T11:24:41,197][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [eventlog-v5][3] primary-replica resync completed with 0 operations
[2021-05-07T11:24:41,284][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-1] scheduling reroute for delayed shards in [58.1s] (20 delayed shards)
[2021-05-07T11:24:41,289][WARN ][o.e.i.s.RetentionLeaseSyncAction] [dev-sdnrdb-master-1] [[mediator-server-v5][4]] failed to perform indices:admin/seq_no/retention_lease_sync on replica [mediator-server-v5][4], node[7ZVQCWTJRt6RtmOTGcarLA], [R], s[STARTED], a[id=9KqtvqN3RyG-RSeNeFPmBg]
org.elasticsearch.transport.NodeDisconnectedException: [dev-sdnrdb-master-2][[fd00:100::29af]:9300][indices:admin/seq_no/retention_lease_sync[r]] disconnected
[2021-05-07T11:24:41,381][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations
[2021-05-07T11:24:41,381][WARN ][o.e.i.s.RetentionLeaseSyncAction] [dev-sdnrdb-master-1] [[historicalperformance24h-v5][4]] failed to perform indices:admin/seq_no/retention_lease_sync on replica [historicalperformance24h-v5][4], node[7ZVQCWTJRt6RtmOTGcarLA], [R], s[STARTED], a[id=kZeZZWikQh2VhVlL8F8KCg]
org.elasticsearch.client.transport.NoNodeAvailableException: unknown node [7ZVQCWTJRt6RtmOTGcarLA]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicasProxy.performOn(TransportReplicationAction.java:1084) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.support.replication.ReplicationOperation$3.tryAction(ReplicationOperation.java:244) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.support.RetryableAction$1.doRun(RetryableAction.java:99) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
	Suppressed: org.elasticsearch.transport.NodeDisconnectedException: [dev-sdnrdb-master-2][[fd00:100::29af]:9300][indices:admin/seq_no/retention_lease_sync[r]] disconnected
[2021-05-07T11:24:41,480][WARN ][o.e.i.s.RetentionLeaseSyncAction] [dev-sdnrdb-master-1] [[faultlog-v5][4]] failed to perform indices:admin/seq_no/retention_lease_sync on replica [faultlog-v5][4], node[7ZVQCWTJRt6RtmOTGcarLA], [R], s[STARTED], a[id=dDpuLeX2QmOx__0pAMsFZQ]
org.elasticsearch.client.transport.NoNodeAvailableException: unknown node [7ZVQCWTJRt6RtmOTGcarLA]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicasProxy.performOn(TransportReplicationAction.java:1084) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.support.replication.ReplicationOperation$3.tryAction(ReplicationOperation.java:244) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.support.RetryableAction$1.doRun(RetryableAction.java:99) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
	Suppressed: org.elasticsearch.transport.NodeDisconnectedException: [dev-sdnrdb-master-2][[fd00:100::29af]:9300][indices:admin/seq_no/retention_lease_sync[r]] disconnected
[2021-05-07T11:24:41,481][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [faultcurrent-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T11:24:41,709][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][4] marking unavailable shards as stale: [dDpuLeX2QmOx__0pAMsFZQ]
[2021-05-07T11:24:41,710][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][4] marking unavailable shards as stale: [9KqtvqN3RyG-RSeNeFPmBg]
[2021-05-07T11:24:41,710][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][3] marking unavailable shards as stale: [-EL-HaFaStmkOZFiNqeI0w]
[2021-05-07T11:24:41,711][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][4] marking unavailable shards as stale: [kZeZZWikQh2VhVlL8F8KCg]
[2021-05-07T11:25:42,009][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][4] marking unavailable shards as stale: [UKEyeZ5uSCC3VhfndTIvvA]
[2021-05-07T11:25:42,539][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][4] marking unavailable shards as stale: [6NEYOrR5T6mRfCLM1_BVDg]
[2021-05-07T11:25:42,539][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][3] marking unavailable shards as stale: [YjhcoGm0Ry-m0TOPgABcHg]
[2021-05-07T11:25:45,073][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][4] marking unavailable shards as stale: [-MtlOxSLRDyrkvU-2-_YeQ]
[2021-05-07T11:25:45,310][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][3] marking unavailable shards as stale: [SfNc4PpxS1uD3EtM0Qjpmg]
[2021-05-07T11:25:47,498][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][4] marking unavailable shards as stale: [xr1JLHNXT9CcVAyrvoicEg]
[2021-05-07T11:25:47,499][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][3] marking unavailable shards as stale: [_i5L0J-pSmy8ExyqgNcQ4Q]
[2021-05-07T11:25:48,103][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][4] marking unavailable shards as stale: [eVa6ospsRNGiDY630ynqpw]
[2021-05-07T11:25:53,404][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][3] marking unavailable shards as stale: [RIe7KGIZR5CrAbl56FF_GA]
[2021-05-07T11:25:53,991][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][3] marking unavailable shards as stale: [-BsDCQ67QBKtbLoGahKdrQ]
[2021-05-07T11:25:55,649][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][3] marking unavailable shards as stale: [Q48hto9zSv-owJ3RDkDL6Q]
[2021-05-07T11:25:55,649][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][4] marking unavailable shards as stale: [Bz9x2mGXQ9-0sKdkU3DVmw]
[2021-05-07T11:25:59,680][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][4] marking unavailable shards as stale: [yMG-4F2-Rg26fFL_dTl_bA]
[2021-05-07T11:26:00,113][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][4] marking unavailable shards as stale: [VRdgyU7bTUSO9hEiO3Y4yA]
[2021-05-07T11:26:01,311][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][3] marking unavailable shards as stale: [MjJ8xPXxTYC2A_6Ho75q1w]
[2021-05-07T11:26:02,383][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[historicalperformance24h-v5][4]]]).
[2021-05-07T11:26:03,952][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 205, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:26:13,955][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [205] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:26:33,956][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 205, reason: Publication{term=4, version=205}
[2021-05-07T11:26:33,959][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [205] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:26:43,965][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [206] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:27:01,414][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [23216ms] ago, timed out [8206ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [9350]
[2021-05-07T11:27:03,990][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [206] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:27:03,992][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 207, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:27:04,287][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 207, reason: Publication{term=4, version=207}
[2021-05-07T11:28:14,408][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 208, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:28:24,412][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [208] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:28:28,862][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 208, reason: Publication{term=4, version=208}
[2021-05-07T11:28:38,868][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [209] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:28:58,898][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [209] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:28:58,908][ERROR][o.e.c.a.s.ShardStateAction] [dev-sdnrdb-master-1] [connectionlog-v5][4] unexpected failure while failing shard [shard id [[connectionlog-v5][4]], allocation id [zMGi9fjnSCuDhRzLgoUaaA], primary term [2], message [failed to perform indices:admin/seq_no/retention_lease_sync on replica [connectionlog-v5][4], node[3Xx5SbxeTmKeLhGAO6eL_w], [R], s[STARTED], a[id=zMGi9fjnSCuDhRzLgoUaaA]], failure [RemoteTransportException[[dev-sdnrdb-master-0][[fd00:100::d052]:9300][indices:admin/seq_no/retention_lease_sync[r]]]; nested: IllegalStateException[[connectionlog-v5][4] operation primary term [2] is too old (current [3])]; ], markAsStale [true]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [2] did not match current primary term [3]
	at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:365) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T11:28:58,912][ERROR][o.e.c.a.s.ShardStateAction] [dev-sdnrdb-master-1] [eventlog-v5][4] unexpected failure while failing shard [shard id [[eventlog-v5][4]], allocation id [63zHZSe9S5-rYLEmjygPNg], primary term [1], message [failed to perform indices:admin/seq_no/retention_lease_sync on replica [eventlog-v5][4], node[3Xx5SbxeTmKeLhGAO6eL_w], [R], s[STARTED], a[id=63zHZSe9S5-rYLEmjygPNg]], failure [RemoteTransportException[[dev-sdnrdb-master-0][[fd00:100::d052]:9300][indices:admin/seq_no/retention_lease_sync[r]]]; nested: IllegalStateException[[eventlog-v5][4] operation primary term [1] is too old (current [2])]; ], markAsStale [true]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [1] did not match current primary term [2]
	at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:365) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T11:28:58,913][ERROR][o.e.c.a.s.ShardStateAction] [dev-sdnrdb-master-1] [faultcurrent-v5][4] unexpected failure while failing shard [shard id [[faultcurrent-v5][4]], allocation id [pamsZ_g-TvSNymjnE_WuvA], primary term [1], message [failed to perform indices:admin/seq_no/retention_lease_sync on replica [faultcurrent-v5][4], node[p1mQuCjwQb6rg6KjOXT2Mw], [R], s[STARTED], a[id=pamsZ_g-TvSNymjnE_WuvA]], failure [RemoteTransportException[[dev-sdnrdb-master-1][[fd00:100::520f]:9300][indices:admin/seq_no/retention_lease_sync[r]]]; nested: IllegalStateException[[faultcurrent-v5][4] operation primary term [1] is too old (current [2])]; ], markAsStale [true]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [1] did not match current primary term [2]
	at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:365) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T11:28:58,914][ERROR][o.e.c.a.s.ShardStateAction] [dev-sdnrdb-master-1] [eventlog-v5][3] unexpected failure while failing shard [shard id [[eventlog-v5][3]], allocation id [W1sTC-B3TaeaAM-p8oTqDQ], primary term [1], message [failed to perform indices:admin/seq_no/retention_lease_sync on replica [eventlog-v5][3], node[p1mQuCjwQb6rg6KjOXT2Mw], [R], s[STARTED], a[id=W1sTC-B3TaeaAM-p8oTqDQ]], failure [RemoteTransportException[[dev-sdnrdb-master-1][[fd00:100::520f]:9300][indices:admin/seq_no/retention_lease_sync[r]]]; nested: IllegalStateException[[eventlog-v5][3] operation primary term [1] is too old (current [2])]; ], markAsStale [true]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [1] did not match current primary term [2]
	at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:365) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T11:28:58,915][ERROR][o.e.c.a.s.ShardStateAction] [dev-sdnrdb-master-1] [maintenancemode-v5][4] unexpected failure while failing shard [shard id [[maintenancemode-v5][4]], allocation id [oCHEYXyaS4-CuTNNpHAE5g], primary term [2], message [failed to perform indices:admin/seq_no/retention_lease_sync on replica [maintenancemode-v5][4], node[3Xx5SbxeTmKeLhGAO6eL_w], [R], s[STARTED], a[id=oCHEYXyaS4-CuTNNpHAE5g]], failure [RemoteTransportException[[dev-sdnrdb-master-0][[fd00:100::d052]:9300][indices:admin/seq_no/retention_lease_sync[r]]]; nested: IllegalStateException[[maintenancemode-v5][4] operation primary term [2] is too old (current [3])]; ], markAsStale [true]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [2] did not match current primary term [3]
	at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:365) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T11:28:58,980][ERROR][o.e.c.a.s.ShardStateAction] [dev-sdnrdb-master-1] [historicalperformance15min-v5][3] unexpected failure while failing shard [shard id [[historicalperformance15min-v5][3]], allocation id [nNZ7djbeQ-6p5Z56iiMLPA], primary term [1], message [failed to perform indices:admin/seq_no/retention_lease_sync on replica [historicalperformance15min-v5][3], node[p1mQuCjwQb6rg6KjOXT2Mw], [R], s[STARTED], a[id=nNZ7djbeQ-6p5Z56iiMLPA]], failure [RemoteTransportException[[dev-sdnrdb-master-1][[fd00:100::520f]:9300][indices:admin/seq_no/retention_lease_sync[r]]]; nested: IllegalStateException[[historicalperformance15min-v5][3] operation primary term [1] is too old (current [2])]; ], markAsStale [true]]
org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [1] did not match current primary term [2]
	at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:365) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T11:29:12,200][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [13009ms] ago, timed out [3002ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [9982]
[2021-05-07T11:30:28,900][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [208], although publication of cluster state version [209] completed [1.5m] ago
[2021-05-07T11:30:28,913][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 210, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:30:29,487][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 210, reason: Publication{term=4, version=210}
[2021-05-07T11:31:27,849][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [16652ms] ago, timed out [6619ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [10598]
[2021-05-07T11:32:06,110][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 211, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:32:16,116][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [211] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:32:26,299][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [16627ms] ago, timed out [6609ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [10831]
[2021-05-07T11:32:36,118][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 211, reason: Publication{term=4, version=211}
[2021-05-07T11:32:36,125][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [211] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:32:46,281][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [212] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:33:06,308][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [212] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:33:13,172][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [16413ms] ago, timed out [6406ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [11058]
[2021-05-07T11:34:06,125][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [0], although publication of cluster state version [211] completed [1.5m] ago
[2021-05-07T11:34:06,132][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 213, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:34:13,519][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 213, reason: Publication{term=4, version=213}
[2021-05-07T11:34:43,828][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [18435ms] ago, timed out [8429ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [11469]
[2021-05-07T11:34:45,502][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 214, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:34:48,730][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 214, reason: Publication{term=4, version=214}
[2021-05-07T11:35:25,435][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [216] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:35:29,123][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [15424ms] ago, timed out [403ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [11722]
[2021-05-07T11:35:45,446][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [216] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:35:55,451][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [217] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:36:26,069][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [222] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:36:43,113][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-dsbqc}{ZOO0sM7MRdSRCgZZCbJ91Q}{fnTvhH1wQxGwbtQRgFc1vg}{fd00:100:0:0:0:0:0:ca9e}{[fd00:100::ca9e]:9300}{r} join existing leader], term: 4, version: 223, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-dsbqc}{ZOO0sM7MRdSRCgZZCbJ91Q}{fnTvhH1wQxGwbtQRgFc1vg}{fd00:100:0:0:0:0:0:ca9e}{[fd00:100::ca9e]:9300}{r}}
[2021-05-07T11:36:47,978][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-dsbqc}{ZOO0sM7MRdSRCgZZCbJ91Q}{fnTvhH1wQxGwbtQRgFc1vg}{fd00:100:0:0:0:0:0:ca9e}{[fd00:100::ca9e]:9300}{r}}, term: 4, version: 223, reason: Publication{term=4, version=223}
[2021-05-07T11:36:57,984][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [224] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:37:26,864][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [225] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:37:46,902][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [225] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:38:08,728][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T11:39:45,942][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [15618ms] ago, timed out [5604ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [13267]
[2021-05-07T11:41:09,159][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10612ms] ago, timed out [600ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [13757]
[2021-05-07T11:41:11,530][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [25227ms] ago, timed out [10212ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [13693]
[2021-05-07T11:41:46,236][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T11:42:00,228][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [29031ms] ago, timed out [14012ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [13946]
[2021-05-07T11:42:58,789][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [15810ms] ago, timed out [5803ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [14360]
[2021-05-07T11:43:09,492][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [226] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:43:29,505][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [226] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:43:29,640][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [20015ms] ago, timed out [10009ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [14517]
[2021-05-07T11:43:39,514][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [227] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:43:58,430][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [228] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:44:18,432][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [228] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:44:28,436][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [229] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:44:48,465][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [229] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:45:50,746][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [15894ms] ago, timed out [6006ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [15394]
[2021-05-07T11:46:45,297][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [230] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:47:05,307][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [230] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:47:15,317][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [231] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:47:16,310][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [23032ms] ago, timed out [13023ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [15846]
[2021-05-07T11:47:16,311][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [12023ms] ago, timed out [2001ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [15902]
[2021-05-07T11:47:18,116][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T11:47:20,390][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [17226ms] ago, timed out [2201ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [15893]
[2021-05-07T11:47:35,321][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [231] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:47:45,327][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [232] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:48:05,338][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [232] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:48:15,382][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [233] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:48:33,522][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T11:48:33,615][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [15229ms] ago, timed out [200ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [16372]
[2021-05-07T11:48:35,308][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [229], although publication of cluster state version [230] completed [1.5m] ago
[2021-05-07T11:48:35,414][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [233] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:48:35,482][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 234, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:48:37,707][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 234, reason: Publication{term=4, version=234}
[2021-05-07T11:48:37,729][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-1] scheduling reroute for delayed shards in [57.6s] (9 delayed shards)
[2021-05-07T11:48:37,985][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][4] marking unavailable shards as stale: [-_sqZ54-SaujoC9xWG8PzQ]
[2021-05-07T11:49:45,422][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [239] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T11:50:00,325][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][4] marking unavailable shards as stale: [HUXKpu4FQQeU-KWqNP2imw]
[2021-05-07T11:50:10,119][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][3] marking unavailable shards as stale: [i2kbmcSNSgO-HVUrTRZl9A]
[2021-05-07T11:50:10,120][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][4] marking unavailable shards as stale: [PXnXF_DoQk26thGNg7zA7g]
[2021-05-07T11:50:10,121][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][4] marking unavailable shards as stale: [-XFUKWFTQBqAXj7m41uCiw]
[2021-05-07T11:50:13,601][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][4] marking unavailable shards as stale: [aueGtkiDS-m8gOQRp_9LRA]
[2021-05-07T11:50:13,962][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][4] marking unavailable shards as stale: [e-UeAxfKQOS4g2-agsrZfw]
[2021-05-07T11:50:15,002][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][3] marking unavailable shards as stale: [SDS1a3LyQCeUIsPRFDYcAQ]
[2021-05-07T11:50:15,881][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][3]]]).
[2021-05-07T11:51:22,921][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 251, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:51:32,926][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [251] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T11:51:52,926][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 251, reason: Publication{term=4, version=251}
[2021-05-07T11:51:52,930][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [251] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:52:02,937][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [252] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:52:07,930][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T11:52:08,523][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [32834ms] ago, timed out [22817ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [17556]
[2021-05-07T11:52:08,523][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [21817ms] ago, timed out [11809ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [17623]
[2021-05-07T11:52:08,524][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10808ms] ago, timed out [800ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [17684]
[2021-05-07T11:52:12,202][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [19216ms] ago, timed out [4404ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [17653]
[2021-05-07T11:52:22,962][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [252] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:52:22,966][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 253, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:52:26,127][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 253, reason: Publication{term=4, version=253}
[2021-05-07T11:56:06,101][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 254, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:56:16,105][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [254] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:56:20,819][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 254, reason: Publication{term=4, version=254}
[2021-05-07T11:56:30,885][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [255] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:56:50,918][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [255] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:57:43,107][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 256, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:57:47,420][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T11:57:53,110][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [256] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-05-07T11:58:00,268][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [49243ms] ago, timed out [39234ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [19329]
[2021-05-07T11:58:00,277][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [38233ms] ago, timed out [28227ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [19379]
[2021-05-07T11:58:00,277][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [27226ms] ago, timed out [17216ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [19433]
[2021-05-07T11:58:06,402][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 256, reason: Publication{term=4, version=256}
[2021-05-07T11:59:06,598][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 257, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T11:59:16,624][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [257] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T11:59:36,624][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 257, reason: Publication{term=4, version=257}
[2021-05-07T11:59:36,629][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [257] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T11:59:46,683][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [258] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:00:06,707][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [258] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:00:13,615][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 259, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:00:23,617][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [259] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T12:00:43,618][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 259, reason: Publication{term=4, version=259}
[2021-05-07T12:00:43,651][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [259] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T12:00:57,121][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 260, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:00:57,180][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 260, reason: Publication{term=4, version=260}
[2021-05-07T12:01:48,011][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 261, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:01:58,015][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [261] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:02:17,695][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [27432ms] ago, timed out [17420ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [20706]
[2021-05-07T12:02:17,696][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [16419ms] ago, timed out [6404ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [20755]
[2021-05-07T12:02:18,015][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 261, reason: Publication{term=4, version=261}
[2021-05-07T12:02:18,018][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [261] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:02:18,019][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 262, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:02:28,021][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [262] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:02:48,021][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 262, reason: Publication{term=4, version=262}
[2021-05-07T12:02:48,024][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [262] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:02:58,025][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [263] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:03:19,425][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [264] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:03:30,615][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [15013ms] ago, timed out [0ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [21191]
[2021-05-07T12:03:39,450][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [264] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:04:34,243][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [24023ms] ago, timed out [14013ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [21500]
[2021-05-07T12:04:34,259][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [13012ms] ago, timed out [3004ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [21552]
[2021-05-07T12:04:48,662][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10414ms] ago, timed out [400ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [21645]
[2021-05-07T12:07:09,084][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-1] [gc][4152] overhead, spent [716ms] collecting in the last [1.5s]
[2021-05-07T12:08:24,616][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [20418ms] ago, timed out [10409ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [22826]
[2021-05-07T12:08:34,813][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [265] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:08:50,593][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [12412ms] ago, timed out [2401ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [23022]
[2021-05-07T12:08:54,816][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [265] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:09:04,820][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [266] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:09:24,853][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [266] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:09:34,862][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [267] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:09:47,292][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10811ms] ago, timed out [801ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [23355]
[2021-05-07T12:09:47,635][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [36039ms] ago, timed out [26033ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [23216]
[2021-05-07T12:09:47,636][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [25032ms] ago, timed out [15014ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [23269]
[2021-05-07T12:09:47,637][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [14013ms] ago, timed out [4003ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [23334]
[2021-05-07T12:09:48,632][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [17016ms] ago, timed out [2002ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [23323]
[2021-05-07T12:09:54,874][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [267] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:09:54,879][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 268, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:10:01,030][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 268, reason: Publication{term=4, version=268}
[2021-05-07T12:10:01,053][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-1] scheduling reroute for delayed shards in [53.8s] (2 delayed shards)
[2021-05-07T12:10:01,268][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][4] marking unavailable shards as stale: [nqtMYXeEQfmSxTYOLPahmw]
[2021-05-07T12:10:01,823][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][3] marking unavailable shards as stale: [fPBWPD_-R0KdFA2MyKZgYQ]
[2021-05-07T12:10:02,796][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[historicalperformance15min-v5][3]]]).
[2021-05-07T12:12:25,381][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 274, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:12:25,705][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 274, reason: Publication{term=4, version=274}
[2021-05-07T12:15:41,385][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 275, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:15:45,504][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 275, reason: Publication{term=4, version=275}
[2021-05-07T12:16:18,818][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 276, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:16:19,008][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 276, reason: Publication{term=4, version=276}
[2021-05-07T12:17:04,718][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 277, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:17:05,099][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 277, reason: Publication{term=4, version=277}
[2021-05-07T12:17:05,105][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 278, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:17:15,107][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [278] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:17:35,109][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 278, reason: Publication{term=4, version=278}
[2021-05-07T12:17:35,116][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [278] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:17:45,187][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [279] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:18:05,211][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [279] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:18:05,220][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded, {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 280, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr},{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:18:05,350][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr},{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 280, reason: Publication{term=4, version=280}
[2021-05-07T12:20:38,747][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 281, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:20:48,752][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [281] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T12:21:03,720][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [18015ms] ago, timed out [8005ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [26325]
[2021-05-07T12:21:08,752][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 281, reason: Publication{term=4, version=281}
[2021-05-07T12:21:08,756][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [281] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T12:21:37,741][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 282, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:21:37,811][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 282, reason: Publication{term=4, version=282}
[2021-05-07T12:21:49,447][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 283, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:21:59,450][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [283] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:22:19,451][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 283, reason: Publication{term=4, version=283}
[2021-05-07T12:22:19,454][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [283] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:22:22,475][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 284, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:22:22,710][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 284, reason: Publication{term=4, version=284}
[2021-05-07T12:22:53,721][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 285, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:23:03,724][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [285] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:23:14,676][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 285, reason: Publication{term=4, version=285}
[2021-05-07T12:23:14,680][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 286, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T12:23:24,682][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [286] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:23:44,686][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 286, reason: Publication{term=4, version=286}
[2021-05-07T12:23:44,689][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [286] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:23:54,693][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [287] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:24:01,400][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [13820ms] ago, timed out [3803ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [27194]
[2021-05-07T12:24:14,721][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [287] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:24:47,500][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [21423ms] ago, timed out [11413ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [27397]
[2021-05-07T12:24:47,501][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10409ms] ago, timed out [400ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [27458]
[2021-05-07T12:25:01,083][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [12013ms] ago, timed out [2001ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [27516]
[2021-05-07T12:25:01,659][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [24224ms] ago, timed out [9211ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [27462]
[2021-05-07T12:25:14,690][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [285], although publication of cluster state version [286] completed [1.5m] ago
[2021-05-07T12:25:14,697][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 288, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:25:14,953][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 288, reason: Publication{term=4, version=288}
[2021-05-07T12:25:54,500][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 289, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:26:04,505][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [289] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:26:24,507][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 289, reason: Publication{term=4, version=289}
[2021-05-07T12:26:24,511][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [289] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:26:34,582][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [290] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:26:39,193][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [21614ms] ago, timed out [11607ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [27990]
[2021-05-07T12:26:39,209][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [32640ms] ago, timed out [22615ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [27939]
[2021-05-07T12:26:39,236][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [43653ms] ago, timed out [33640ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [27877]
[2021-05-07T12:26:54,611][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [290] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:26:54,618][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 291, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:26:56,380][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 291, reason: Publication{term=4, version=291}
[2021-05-07T12:26:56,400][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 292, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:26:57,211][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 292, reason: Publication{term=4, version=292}
[2021-05-07T12:26:57,233][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: disconnected], term: 4, version: 293, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:26:57,510][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 293, reason: Publication{term=4, version=293}
[2021-05-07T12:26:59,605][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 294, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:27:09,608][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [294] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:27:29,608][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 294, reason: Publication{term=4, version=294}
[2021-05-07T12:27:29,611][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [294] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:27:39,616][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [295] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:27:59,644][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [295] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:28:23,247][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [26858ms] ago, timed out [11814ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [28598]
[2021-05-07T12:29:29,645][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [294], although publication of cluster state version [295] completed [1.5m] ago
[2021-05-07T12:29:29,652][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 296, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:29:29,950][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 296, reason: Publication{term=4, version=296}
[2021-05-07T12:29:37,001][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 297, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:29:47,004][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [297] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:30:07,004][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 297, reason: Publication{term=4, version=297}
[2021-05-07T12:30:07,006][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [297] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:30:17,009][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [298] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:30:22,006][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T12:30:37,042][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [298] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:30:37,047][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 299, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:30:37,680][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 299, reason: Publication{term=4, version=299}
[2021-05-07T12:31:08,446][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10614ms] ago, timed out [600ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [29639]
[2021-05-07T12:31:52,409][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 300, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:32:02,413][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [300] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:32:22,414][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 300, reason: Publication{term=4, version=300}
[2021-05-07T12:32:22,418][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [300] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:32:32,423][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [301] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:32:52,447][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [301] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:33:15,065][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T12:33:29,903][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [29822ms] ago, timed out [14812ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [30317]
[2021-05-07T12:33:52,419][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [0], although publication of cluster state version [300] completed [1.5m] ago
[2021-05-07T12:33:52,424][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 302, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:33:53,224][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 302, reason: Publication{term=4, version=302}
[2021-05-07T12:33:53,249][WARN ][o.e.c.c.Coordinator      ] [dev-sdnrdb-master-1] failed to validate incoming join request from node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}]
org.elasticsearch.transport.NodeDisconnectedException: [dev-sdnrdb-master-2][[fd00:100::29af]:9300][internal:cluster/coordination/join/validate] disconnected
[2021-05-07T12:38:43,117][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 303, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:38:53,121][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [303] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:39:13,122][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 303, reason: Publication{term=4, version=303}
[2021-05-07T12:39:13,125][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [303] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:39:23,129][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [304] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:39:29,283][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [34229ms] ago, timed out [24222ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [32151]
[2021-05-07T12:39:29,284][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [23221ms] ago, timed out [13209ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [32213]
[2021-05-07T12:39:29,285][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [45236ms] ago, timed out [35230ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [32098]
[2021-05-07T12:39:43,180][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [304] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T12:39:43,280][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 305, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:39:44,672][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 305, reason: Publication{term=4, version=305}
[2021-05-07T12:39:44,709][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 306, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:39:45,604][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 306, reason: Publication{term=4, version=306}
[2021-05-07T12:39:45,784][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: disconnected], term: 4, version: 307, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:39:46,059][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 307, reason: Publication{term=4, version=307}
[2021-05-07T12:39:58,109][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 308, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:40:08,111][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [308] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:40:28,111][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 308, reason: Publication{term=4, version=308}
[2021-05-07T12:40:28,114][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [308] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:40:38,117][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [309] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:40:58,139][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [309] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:40:58,143][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 310, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:40:59,785][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 310, reason: Publication{term=4, version=310}
[2021-05-07T12:41:23,660][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10212ms] ago, timed out [200ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [32930]
[2021-05-07T12:43:55,205][WARN ][o.e.c.c.Coordinator      ] [dev-sdnrdb-master-1] failed to validate incoming join request from node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-2][[fd00:100::29af]:9300][internal:cluster/coordination/join/validate] request_id [33426] timed out after [60067ms]
	at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T12:44:42,002][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 311, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:44:51,080][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [115931ms] ago, timed out [55864ms] ago, action [internal:cluster/coordination/join/validate], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [33426]
[2021-05-07T12:44:52,010][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [311] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:45:12,010][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 311, reason: Publication{term=4, version=311}
[2021-05-07T12:45:12,015][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [311] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:45:22,022][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [312] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:45:27,017][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T12:45:42,050][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [312] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:45:42,053][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 313, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:45:42,370][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 313, reason: Publication{term=4, version=313}
[2021-05-07T12:49:47,616][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 314, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:49:57,621][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [314] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:50:17,622][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 314, reason: Publication{term=4, version=314}
[2021-05-07T12:50:17,628][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [314] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:50:27,682][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [315] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:50:47,703][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [315] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:50:57,704][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [316] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:51:17,709][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [316] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T12:51:48,917][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [33230ms] ago, timed out [18216ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [36079]
[2021-05-07T12:52:15,690][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T12:52:17,703][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [314], although publication of cluster state version [315] completed [1.5m] ago
[2021-05-07T12:52:17,708][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 317, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T12:52:18,219][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 317, reason: Publication{term=4, version=317}
[2021-05-07T12:55:55,429][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [12213ms] ago, timed out [2201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [37472]
[2021-05-07T13:02:33,006][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 318, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T13:02:43,009][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [318] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:03:03,011][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 318, reason: Publication{term=4, version=318}
[2021-05-07T13:03:03,018][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [318] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:03:13,082][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [319] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:03:33,103][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [319] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:03:33,107][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 320, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T13:03:35,769][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 320, reason: Publication{term=4, version=320}
[2021-05-07T13:06:13,143][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [28433ms] ago, timed out [18419ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [40468]
[2021-05-07T13:06:13,146][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [17418ms] ago, timed out [7405ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [40507]
[2021-05-07T13:06:38,636][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [22419ms] ago, timed out [12411ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [40601]
[2021-05-07T13:06:38,638][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [11410ms] ago, timed out [1401ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [40638]
[2021-05-07T13:07:15,808][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [16212ms] ago, timed out [6206ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [40779]
[2021-05-07T13:09:01,706][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 321, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T13:09:11,710][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [321] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T13:09:27,646][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [13222ms] ago, timed out [3412ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [41453]
[2021-05-07T13:09:31,711][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 321, reason: Publication{term=4, version=321}
[2021-05-07T13:09:31,716][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [321] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T13:09:41,724][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [322] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T13:09:46,715][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T13:10:01,746][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [322] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T13:10:01,749][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 323, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T13:10:11,750][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [323] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T13:10:16,003][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T13:10:22,434][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 323, reason: Publication{term=4, version=323}
[2021-05-07T13:18:13,792][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [16011ms] ago, timed out [6004ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [44042]
[2021-05-07T13:23:28,155][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 324, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T13:23:38,159][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [324] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:23:58,160][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 324, reason: Publication{term=4, version=324}
[2021-05-07T13:23:58,168][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [324] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:24:08,184][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [325] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:24:28,210][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [325] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:24:28,214][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 326, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T13:24:31,661][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 326, reason: Publication{term=4, version=326}
[2021-05-07T13:32:57,184][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 327, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T13:33:07,189][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [327] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T13:33:27,190][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 327, reason: Publication{term=4, version=327}
[2021-05-07T13:33:27,196][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [327] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:33:37,202][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [328] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:33:57,225][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [328] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:34:57,198][WARN ][o.e.c.c.LagDetector      ] [dev-sdnrdb-master-1] node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}] is lagging at cluster state version [0], although publication of cluster state version [327] completed [1.5m] ago
[2021-05-07T13:34:57,285][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: lagging], term: 4, version: 329, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T13:35:07,288][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [329] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-05-07T13:35:09,463][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [25025ms] ago, timed out [15011ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [49146]
[2021-05-07T13:35:09,471][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [14010ms] ago, timed out [4002ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [49185]
[2021-05-07T13:35:09,733][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [13415ms] ago, timed out [3407ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [49188]
[2021-05-07T13:35:11,031][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 329, reason: Publication{term=4, version=329}
[2021-05-07T13:36:53,673][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [13624ms] ago, timed out [3613ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [49706]
[2021-05-07T13:40:03,367][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [29227ms] ago, timed out [19218ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [50543]
[2021-05-07T13:40:03,381][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [18217ms] ago, timed out [8206ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [50594]
[2021-05-07T13:46:05,257][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [20617ms] ago, timed out [10607ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [52292]
[2021-05-07T13:57:12,865][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 330, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T13:57:13,007][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 330, reason: Publication{term=4, version=330}
[2021-05-07T13:59:55,588][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 331, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T13:59:56,349][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 331, reason: Publication{term=4, version=331}
[2021-05-07T14:02:14,891][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 332, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:02:24,895][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [332] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T14:02:44,895][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 332, reason: Publication{term=4, version=332}
[2021-05-07T14:02:44,899][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [332] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T14:02:54,904][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [333] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T14:02:59,899][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T14:03:14,930][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [333] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T14:03:14,934][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 334, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:03:24,936][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [334] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T14:03:25,688][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T14:03:44,937][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 334, reason: Publication{term=4, version=334}
[2021-05-07T14:03:44,964][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [334] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T14:04:04,544][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 335, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T14:04:04,625][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 335, reason: Publication{term=4, version=335}
[2021-05-07T14:09:04,918][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 336, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T14:09:14,928][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [336] is still waiting for {dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} [SENT_APPLY_COMMIT]
[2021-05-07T14:09:30,649][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 336, reason: Publication{term=4, version=336}
[2021-05-07T14:10:14,735][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [11412ms] ago, timed out [1401ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}], id [58937]
[2021-05-07T14:16:37,146][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} reason: followers check retry count exceeded], term: 4, version: 337, delta: removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T14:16:37,299][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 337, reason: Publication{term=4, version=337}
[2021-05-07T14:45:07,283][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r} join existing leader], term: 4, version: 338, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}
[2021-05-07T14:45:07,491][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-xr2pv}{SMERYDe9Sk63SJrnwRC7Og}{8GdOTleKTOCfw83sggwAkg}{fd00:100:0:0:0:0:0:29ae}{[fd00:100::29ae]:9300}{r}}, term: 4, version: 338, reason: Publication{term=4, version=338}
[2021-05-07T14:45:08,281][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 339, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:45:18,284][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [339] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T14:45:38,288][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 339, reason: Publication{term=4, version=339}
[2021-05-07T14:45:38,291][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [339] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T14:45:43,528][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [12208ms] ago, timed out [2201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [67713]
[2021-05-07T14:45:43,645][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [23420ms] ago, timed out [13411ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [67649]
[2021-05-07T14:45:43,729][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [34429ms] ago, timed out [24421ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [67598]
[2021-05-07T14:45:46,667][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 341, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:45:46,748][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 341, reason: Publication{term=4, version=341}
[2021-05-07T14:45:46,783][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 342, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:45:46,984][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 342, reason: Publication{term=4, version=342}
[2021-05-07T14:45:46,994][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: disconnected], term: 4, version: 343, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:45:47,223][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 343, reason: Publication{term=4, version=343}
[2021-05-07T14:45:49,213][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 344, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:45:50,644][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 344, reason: Publication{term=4, version=344}
[2021-05-07T14:57:50,850][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T14:58:05,854][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-1] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-05-07T14:58:07,687][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 403, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:58:13,418][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 403, reason: Publication{term=4, version=403}
[2021-05-07T14:58:13,588][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [maintenancemode-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T14:58:13,605][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [faultlog-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T14:58:13,681][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [connectionlog-v5][2] primary-replica resync completed with 0 operations
[2021-05-07T14:58:13,717][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [inventoryequipment-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T14:58:13,789][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [eventlog-v5][2] primary-replica resync completed with 0 operations
[2021-05-07T14:58:13,807][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [historicalperformance24h-v5][3] primary-replica resync completed with 0 operations
[2021-05-07T14:58:13,900][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [guicutthrough-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T14:58:13,992][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T14:58:14,182][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [networkelement-connection-v5][2] primary-replica resync completed with 0 operations
[2021-05-07T14:58:14,198][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [faultcurrent-v5][4] primary-replica resync completed with 0 operations
[2021-05-07T14:58:14,203][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations
[2021-05-07T14:58:14,282][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-1] scheduling reroute for delayed shards in [53.4s] (36 delayed shards)
[2021-05-07T14:58:14,381][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [mediator-server-v5][3] primary-replica resync completed with 0 operations
[2021-05-07T14:58:14,481][INFO ][o.e.i.s.IndexShard       ] [dev-sdnrdb-master-1] [mediator-server-v5][2] primary-replica resync completed with 0 operations
[2021-05-07T14:59:08,481][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][3] marking unavailable shards as stale: [CVeC7avpTM6C2EVKAk-ofA]
[2021-05-07T14:59:08,795][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][2] marking unavailable shards as stale: [FhO3xlDSTE6DmdXCGTYWZQ]
[2021-05-07T14:59:08,796][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultlog-v5][4] marking unavailable shards as stale: [RWD8RSdET5-xeqEChYTUvA]
[2021-05-07T14:59:10,298][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][2] marking unavailable shards as stale: [Cg3b1dUjRvKpvjSB7iWFtA]
[2021-05-07T14:59:10,986][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][4] marking unavailable shards as stale: [fj7XqPvXSR-ProMoW48EFQ]
[2021-05-07T14:59:12,163][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [eventlog-v5][3] marking unavailable shards as stale: [cdNUXtLTToyCcPood3olyg]
[2021-05-07T14:59:12,163][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][2] marking unavailable shards as stale: [CTkVUoS1Q-OamT-PMEd21Q]
[2021-05-07T14:59:12,599][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][4] marking unavailable shards as stale: [wBlDJTu9RZiqI2P8irZbEQ]
[2021-05-07T14:59:13,942][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [guicutthrough-v5][3] marking unavailable shards as stale: [YffYsxIcSGurxi7irA7hyQ]
[2021-05-07T14:59:14,499][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][4] marking unavailable shards as stale: [oBHr77OdSpmosn8Z_MFTMw]
[2021-05-07T14:59:15,578][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][2] marking unavailable shards as stale: [Ei_6X8EdRAeTb5-mfUDVdg]
[2021-05-07T14:59:15,579][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [networkelement-connection-v5][3] marking unavailable shards as stale: [O621Z6OhRI-v0h4oUrGTEg]
[2021-05-07T14:59:16,006][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 422, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T14:59:26,009][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [422] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T14:59:46,010][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 422, reason: Publication{term=4, version=422}
[2021-05-07T14:59:46,016][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [422] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T14:59:56,021][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [423] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T15:00:01,577][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [15413ms] ago, timed out [600ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [73388]
[2021-05-07T15:00:16,023][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [423] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T15:00:16,026][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 424, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T15:00:22,467][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 424, reason: Publication{term=4, version=424}
[2021-05-07T15:00:22,498][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][2] marking unavailable shards as stale: [5smAWCEISbWiFi-OyXs0AQ]
[2021-05-07T15:00:22,498][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][1] marking unavailable shards as stale: [xuMx0_GQTUCm1xUOqn8UQQ]
[2021-05-07T15:00:24,090][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][4] marking unavailable shards as stale: [wEACq8LcTYWs5_admCUplA]
[2021-05-07T15:00:24,091][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance15min-v5][3] marking unavailable shards as stale: [Uc6xVc-PQ5qrAg11U6KqHA]
[2021-05-07T15:00:25,887][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][4] marking unavailable shards as stale: [_mpD3OZHTAmqsXbnyHGyeg]
[2021-05-07T15:00:26,649][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][2] marking unavailable shards as stale: [t_XMXNhxRxWsg6yGzPwF5w]
[2021-05-07T15:00:26,649][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [inventoryequipment-v5][3] marking unavailable shards as stale: [CRlMmCZzTmiP4nsRzqLDzw]
[2021-05-07T15:00:26,650][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][4] marking unavailable shards as stale: [iAbEJF3eRwWheliIETW7dw]
[2021-05-07T15:00:28,483][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][3] marking unavailable shards as stale: [dLi4XYmnQe2J7wcKoo7uzw]
[2021-05-07T15:00:28,708][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][3] marking unavailable shards as stale: [HdMa-NBOTF-TsytfQg54uw]
[2021-05-07T15:00:28,708][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [maintenancemode-v5][2] marking unavailable shards as stale: [zArku6pfS_6s-RKAqBVS5Q]
[2021-05-07T15:00:30,707][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][2] marking unavailable shards as stale: [bywqhjKLS_K0QIX6XgwWwg]
[2021-05-07T15:00:31,418][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][1] marking unavailable shards as stale: [daGWSdL5SxqQkR7cLUEszQ]
[2021-05-07T15:00:32,681][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [mediator-server-v5][4] marking unavailable shards as stale: [GeEDVyDxRje13lux-2w48Q]
[2021-05-07T15:00:32,681][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][2] marking unavailable shards as stale: [YJ7rABQkQ4e8d8-lwnjfFw]
[2021-05-07T15:00:33,095][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][4] marking unavailable shards as stale: [nrMrDzr9SVi_ob5orRMkBQ]
[2021-05-07T15:00:34,183][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [faultcurrent-v5][3] marking unavailable shards as stale: [K2gyqQBOSha1uflrh0LURw]
[2021-05-07T15:00:34,716][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][4] marking unavailable shards as stale: [tDay-lzUS_CgSxWKYxFy1Q]
[2021-05-07T15:00:36,694][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][3] marking unavailable shards as stale: [p2lqSWAIRySJ7VB6ixXPNw]
[2021-05-07T15:00:36,695][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [connectionlog-v5][2] marking unavailable shards as stale: [jyoNlOr9TaSB2KE83tTlag]
[2021-05-07T15:00:37,181][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][4] marking unavailable shards as stale: [HhM2KbEcRf25sC_W-Vb7nA]
[2021-05-07T15:00:38,460][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][2] marking unavailable shards as stale: [SD5Wy9VkRFaC8OvUuE6Y7g]
[2021-05-07T15:00:39,002][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][3] marking unavailable shards as stale: [VxcNdZq6QKONXD8J_F0H4w]
[2021-05-07T15:00:40,384][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] [historicalperformance24h-v5][1] marking unavailable shards as stale: [DBXh1-WyQUSa8fqA9kmIUw]
[2021-05-07T15:00:40,985][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[historicalperformance24h-v5][1]]]).
[2021-05-07T15:01:58,231][WARN ][o.e.c.c.Coordinator      ] [dev-sdnrdb-master-1] failed to validate incoming join request from node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-2][[fd00:100::29af]:9300][internal:cluster/coordination/join/validate] request_id [74180] timed out after [60059ms]
	at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T15:02:08,801][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [70467ms] ago, timed out [10408ms] ago, action [internal:cluster/coordination/join/validate], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [74180]
[2021-05-07T15:02:09,206][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 461, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T15:02:19,209][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [9.8s] publication of cluster state version [461] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T15:02:39,209][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 461, reason: Publication{term=4, version=461}
[2021-05-07T15:02:39,212][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [29.8s] publication of cluster state version [461] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T15:02:49,216][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [462] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T15:03:09,242][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [30s] publication of cluster state version [462] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T15:03:09,250][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-left[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} reason: followers check retry count exceeded], term: 4, version: 463, delta: removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T15:03:10,030][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] removed {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 463, reason: Publication{term=4, version=463}
[2021-05-07T15:04:40,719][WARN ][o.e.c.c.Coordinator      ] [dev-sdnrdb-master-1] failed to validate incoming join request from node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-2][[fd00:100::29af]:9300][internal:cluster/coordination/join/validate] request_id [75068] timed out after [59875ms]
	at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-05-07T15:05:08,007][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [87300ms] ago, timed out [27425ms] ago, action [internal:cluster/coordination/join/validate], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [75068]
[2021-05-07T15:05:08,108][INFO ][o.e.c.s.MasterService    ] [dev-sdnrdb-master-1] node-join[{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} join existing leader], term: 4, version: 464, delta: added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}
[2021-05-07T15:05:18,112][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [464] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T15:05:26,714][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-1] added {{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}}, term: 4, version: 464, reason: Publication{term=4, version=464}
[2021-05-07T15:05:36,726][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [465] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-05-07T15:06:07,406][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-1] after [10s] publication of cluster state version [470] is still waiting for {dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-05-07T15:13:35,527][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [10608ms] ago, timed out [601ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [79179]
[2021-05-07T15:20:46,435][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [16817ms] ago, timed out [6806ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [81511]
[2021-05-07T15:23:23,110][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [27630ms] ago, timed out [17618ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [82320]
[2021-05-07T15:23:23,115][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [16617ms] ago, timed out [6607ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [82376]
[2021-05-07T15:23:23,322][WARN ][o.e.t.TransportService   ] [dev-sdnrdb-master-1] Received response for a request that has timed out, sent [28230ms] ago, timed out [13212ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{7ZVQCWTJRt6RtmOTGcarLA}{anqtX2B6T1-bWmrUHejOxA}{fd00:100:0:0:0:0:0:29af}{[fd00:100::29af]:9300}{dmr}], id [82315]