22:54:43.31
22:54:43.37 Welcome to the Bitnami elasticsearch container
22:54:43.38 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
22:54:43.39 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
22:54:43.39
22:54:43.48 INFO  ==> ** Starting Elasticsearch setup **
22:54:44.04 INFO  ==> Configuring/Initializing Elasticsearch...
22:54:44.48 INFO  ==> Setting default configuration
22:54:44.59 INFO  ==> Configuring Elasticsearch cluster settings...
22:54:44.88 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-2: fd00:100::c1e3 10.242.193.227, will use fd00:100::c1e3
22:54:45.17 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-2: fd00:100::c1e3 10.242.193.227, will use fd00:100::c1e3
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
22:55:05.77 INFO  ==> ** Elasticsearch setup finished! **
22:55:06.07 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-05-03T22:55:45,573][INFO ][o.e.n.Node ] [dev-sdnrdb-master-2] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/4.15.0-117-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2021-05-03T22:55:45,575][INFO ][o.e.n.Node ] [dev-sdnrdb-master-2] JVM home [/opt/bitnami/java]
[2021-05-03T22:55:45,576][INFO ][o.e.n.Node ] [dev-sdnrdb-master-2] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-3995451278267490088, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-05-03T22:56:02,373][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [aggs-matrix-stats]
[2021-05-03T22:56:02,374][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [analysis-common]
[2021-05-03T22:56:02,375][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [geo]
[2021-05-03T22:56:02,375][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [ingest-common]
[2021-05-03T22:56:02,376][INFO
][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [ingest-geoip]
[2021-05-03T22:56:02,376][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [ingest-user-agent]
[2021-05-03T22:56:02,377][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [kibana]
[2021-05-03T22:56:02,377][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [lang-expression]
[2021-05-03T22:56:02,377][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [lang-mustache]
[2021-05-03T22:56:02,378][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [lang-painless]
[2021-05-03T22:56:02,378][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [mapper-extras]
[2021-05-03T22:56:02,379][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [parent-join]
[2021-05-03T22:56:02,379][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [percolator]
[2021-05-03T22:56:02,380][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [rank-eval]
[2021-05-03T22:56:02,380][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [reindex]
[2021-05-03T22:56:02,381][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [repository-url]
[2021-05-03T22:56:02,382][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [tasks]
[2021-05-03T22:56:02,382][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded module [transport-netty4]
[2021-05-03T22:56:02,383][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-2] loaded plugin [repository-s3]
[2021-05-03T22:56:03,090][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-2] using [1] data paths, mounts [[/bitnami/elasticsearch/data (172.16.10.226:/dockerdata-nfs/dev/elastic-master-0)]], net usable_space [179gb], net total_space [195.8gb], types [nfs4]
[2021-05-03T22:56:03,091][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-2] heap size [123.7mb], compressed ordinary object pointers [true]
[2021-05-03T22:56:03,590][INFO ][o.e.n.Node ] [dev-sdnrdb-master-2] node name [dev-sdnrdb-master-2], node ID [GK-vkZRORy2PWdyDEikGTg], cluster name [sdnrdb-cluster]
[2021-05-03T22:56:44,782][INFO ][o.e.t.NettyAllocator ] [dev-sdnrdb-master-2] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2021-05-03T22:56:45,888][INFO ][o.e.d.DiscoveryModule ] [dev-sdnrdb-master-2] using discovery type [zen] and seed hosts providers [settings]
[2021-05-03T22:56:49,282][WARN ][o.e.g.DanglingIndicesState] [dev-sdnrdb-master-2] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-05-03T22:56:51,579][INFO ][o.e.n.Node ] [dev-sdnrdb-master-2] initialized
[2021-05-03T22:56:51,580][INFO ][o.e.n.Node ] [dev-sdnrdb-master-2] starting ...
[2021-05-03T22:56:52,678][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-2] [gc][1] overhead, spent [400ms] collecting in the last [1s] [2021-05-03T22:56:53,874][INFO ][o.e.t.TransportService ] [dev-sdnrdb-master-2] publish_address {[fd00:100::c1e3]:9300}, bound_addresses {[::]:9300} [2021-05-03T22:56:55,892][INFO ][o.e.b.BootstrapChecks ] [dev-sdnrdb-master-2] bound or publishing to a non-loopback address, enforcing bootstrap checks [2021-05-03T22:56:57,677][INFO ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] setting initial configuration to VotingConfiguration{ZY_kNaGnTA6BB4xy93CdRw,DXyNlIdYTHuhjDcRF-thrg,GK-vkZRORy2PWdyDEikGTg} [2021-05-03T22:57:00,692][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}]}, added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-pjjv6}{NLC9Jqm4RXaY3L413qjF3w}{lB0MULn5THOnqzGECWXhOA}{fd00:100:0:0:0:0:0:238c}{[fd00:100::238c]:9300}{r},{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr},{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 3, version: 4, reason: ApplyCommitRequest{term=3, version=4, sourceNode={dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-03T22:57:01,185][INFO ][o.e.h.AbstractHttpServerTransport] [dev-sdnrdb-master-2] publish_address {[fd00:100::c1e3]:9200}, bound_addresses {[::]:9200} [2021-05-03T22:57:01,186][INFO ][o.e.n.Node ] [dev-sdnrdb-master-2] started [2021-05-03T22:57:32,173][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-2] [gc][40] overhead, spent [428ms] collecting in the last [1.2s] [2021-05-03T22:57:32,175][INFO ][o.e.c.s.ClusterSettings ] [dev-sdnrdb-master-2] updating [action.auto_create_index] from [true] to [false] [2021-05-03T22:58:05,887][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-2] [gc][73] overhead, spent [691ms] collecting in the last [1.5s] [2021-05-03T22:58:31,874][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-2] [gc][94] overhead, spent [319ms] collecting in the last [1s] [2021-05-03T23:09:26,101][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [12009ms] ago, timed out [2001ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [1804] [2021-05-03T23:10:15,010][INFO ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] master node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}] failed, restarting discovery org.elasticsearch.ElasticsearchException: node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}] failed [3] consecutive checks at org.elasticsearch.cluster.coordination.LeaderChecker$CheckScheduler$1.handleException(LeaderChecker.java:293) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1073) 
~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-1][[fd00:100::8ad]:9300][internal:coordination/fault_detection/leader_check] request_id [1884] timed out after [10007ms] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) ~[elasticsearch-7.9.3.jar:7.9.3] ... 4 more [2021-05-03T23:10:15,080][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], current []}, term: 3, version: 76, reason: becoming candidate: onLeaderFailure [2021-05-03T23:10:16,372][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 5, version: 77, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-03T23:10:16,378][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]: failed to commit cluster state version [77] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 5 while handling publication at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] [2021-05-03T23:10:16,382][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-2] failed to join {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, minimumTerm=3, optionalJoin=Optional[Join{term=4, lastAcceptedTerm=3, lastAcceptedVersion=76, sourceNode={dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, targetNode={dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [dev-sdnrdb-master-2][[fd00:100::c1e3]:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 5 while handling publication at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2021-05-03T23:10:17,204][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}]}, term: 5, version: 77, reason: ApplyCommitRequest{term=5, version=77, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-03T23:10:26,532][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [43667ms] ago, timed out [33652ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [1854] [2021-05-03T23:10:26,534][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [32652ms] ago, timed out [22644ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [1869] [2021-05-03T23:10:26,553][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [21644ms] ago, timed out [11637ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [1884] [2021-05-03T23:11:16,176][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-2] failed to join {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, minimumTerm=5, optionalJoin=Optional[Join{term=5, lastAcceptedTerm=3, lastAcceptedVersion=76, sourceNode={dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, targetNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}]} org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-0][[fd00:100::cb20]:9300][internal:cluster/coordination/join] request_id [1932] timed out after [59930ms] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2021-05-03T23:11:16,179][INFO ][o.e.c.c.JoinHelper ] [dev-sdnrdb-master-2] failed to join {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} with JoinRequest{sourceNode={dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, minimumTerm=5, optionalJoin=Optional[Join{term=5, lastAcceptedTerm=3, lastAcceptedVersion=76, sourceNode={dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, targetNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}]} org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-0][[fd00:100::cb20]:9300][internal:cluster/coordination/join] request_id [1932] timed out after [59930ms] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] [2021-05-03T23:11:16,590][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [60330ms] ago, timed out [400ms] ago, action [internal:cluster/coordination/join], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [1932] [2021-05-03T23:11:18,040][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 5, version: 79, reason: ApplyCommitRequest{term=5, version=79, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-03T23:11:18,376][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultcurrent-v5][3] primary-replica resync completed with 0 operations [2021-05-03T23:11:18,474][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [inventoryequipment-v5][3] primary-replica resync completed with 0 operations [2021-05-03T23:11:18,590][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [mediator-server-v5][3] primary-replica resync completed with 0 operations [2021-05-03T23:11:18,682][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations [2021-05-03T23:12:44,601][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 5, version: 125, reason: ApplyCommitRequest{term=5, version=125, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-03T23:22:00,845][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 5, version: 190, reason: ApplyCommitRequest{term=5, version=190, 
sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-03T23:22:01,073][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [maintenancemode-v5][4] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,172][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,292][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance24h-v5][4] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,294][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [mediator-server-v5][2] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,381][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,478][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [connectionlog-v5][4] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,574][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [eventlog-v5][4] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,576][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultlog-v5][1] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,692][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultlog-v5][4] primary-replica resync completed with 0 operations [2021-05-03T23:22:01,784][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [networkelement-connection-v5][1] primary-replica resync completed with 0 operations [2021-05-03T23:23:10,284][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 5, version: 212, reason: ApplyCommitRequest{term=5, version=212, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-03T23:41:00,360][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 5, version: 274, reason: ApplyCommitRequest{term=5, version=274, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-03T23:41:00,486][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [eventlog-v5][1] primary-replica resync completed with 0 operations [2021-05-03T23:45:03,393][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 5, version: 337, reason: ApplyCommitRequest{term=5, version=337, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-03T23:46:03,541][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 5, version: 339, reason: ApplyCommitRequest{term=5, version=339, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:04:39,505][INFO ][o.e.c.s.ClusterApplierService] 
[dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 5, version: 340, reason: ApplyCommitRequest{term=5, version=340, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:09:09,114][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [16815ms] ago, timed out [6808ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [11953] [2021-05-04T00:17:00,226][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [18414ms] ago, timed out [8408ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [12863] [2021-05-04T00:17:18,498][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [15210ms] ago, timed out [5204ms] ago, action [internal:coordination/fault_detection/leader_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [12889] [2021-05-04T00:17:19,435][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], current []}, term: 5, version: 403, reason: becoming candidate: joinLeaderInTerm [2021-05-04T00:17:20,213][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}]}, removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 6, version: 405, reason: ApplyCommitRequest{term=6, version=405, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:17:20,284][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,372][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,393][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [maintenancemode-v5][1] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,491][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [inventoryequipment-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,575][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultlog-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,585][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [eventlog-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,677][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [eventlog-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,690][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] 
[mediator-server-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,773][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [guicutthrough-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,776][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [connectionlog-v5][1] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,892][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [networkelement-connection-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:17:20,902][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations [2021-05-04T00:18:07,929][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 6, version: 449, reason: ApplyCommitRequest{term=6, version=449, sourceNode={dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:18:54,096][WARN ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-2] [gc][4913] overhead, spent [502ms] collecting in the last [1s] [2021-05-04T00:25:27,568][INFO ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] master node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] failed, restarting discovery org.elasticsearch.ElasticsearchException: node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] failed [3] consecutive checks at org.elasticsearch.cluster.coordination.LeaderChecker$CheckScheduler$1.handleException(LeaderChecker.java:293) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1073) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-0][[fd00:100::cb20]:9300][internal:coordination/fault_detection/leader_check] request_id [14429] timed out after [10007ms] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
4 more [2021-05-04T00:25:27,583][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], current []}, term: 6, version: 514, reason: becoming candidate: onLeaderFailure [2021-05-04T00:25:27,986][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} elect leader, {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 515, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T00:25:38,084][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [515] is still waiting for {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:25:58,089][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 7, version: 515, reason: Publication{term=7, version=515} [2021-05-04T00:25:58,290][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [515] is still waiting for {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:26:00,277][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 7, version: 516, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:26:10,380][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [9.8s] publication of cluster state version [516] is still waiting for {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} [WAITING_FOR_QUORUM], {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-pjjv6}{NLC9Jqm4RXaY3L413qjF3w}{lB0MULn5THOnqzGECWXhOA}{fd00:100:0:0:0:0:0:238c}{[fd00:100::238c]:9300}{r} [WAITING_FOR_QUORUM] [2021-05-04T00:26:28,180][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T00:26:30,380][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 7, version: 516, reason: Publication{term=7, version=516} [2021-05-04T00:26:30,476][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance15min-v5][0] primary-replica resync completed with 0 
operations [2021-05-04T00:26:30,485][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance24h-v5][1] primary-replica resync completed with 0 operations [2021-05-04T00:26:30,577][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance24h-v5][0] primary-replica resync completed with 0 operations [2021-05-04T00:26:30,595][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [mediator-server-v5][1] primary-replica resync completed with 0 operations [2021-05-04T00:26:30,686][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [mediator-server-v5][0] primary-replica resync completed with 0 operations [2021-05-04T00:26:30,697][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [guicutthrough-v5][0] primary-replica resync completed with 0 operations [2021-05-04T00:26:30,780][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [eventlog-v5][0] primary-replica resync completed with 0 operations [2021-05-04T00:26:30,890][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [connectionlog-v5][4] primary-replica resync completed with 0 operations [2021-05-04T00:26:31,075][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [29.1s] (37 delayed shards) [2021-05-04T00:26:31,077][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30.7s] publication of cluster state version [516] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T00:26:31,272][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultcurrent-v5][0] primary-replica resync completed with 0 operations [2021-05-04T00:26:31,273][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [inventoryequipment-v5][0] primary-replica resync completed with 0 operations [2021-05-04T00:27:08,645][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][3] marking unavailable shards as stale: [v2G45ZdLR0OLfWGX4tcqUg] [2021-05-04T00:27:10,127][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][0] marking unavailable shards as stale: [h4m53rqORHCP9gsIfjBEtA] [2021-05-04T00:27:10,128][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][0] marking unavailable shards as stale: [jy_cyPZFQVawpFeMRIGwzg] [2021-05-04T00:27:10,128][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][2] marking unavailable shards as stale: [a_tQc4bdS0KT2H6zGOEUMQ] [2021-05-04T00:27:13,210][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][2] marking unavailable shards as stale: [Q_aEzb_PSeGPsx0fTlTWoQ] [2021-05-04T00:27:15,178][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][2] marking unavailable shards as stale: [uCF1iNlPRAiR-PyoIe7eLg] [2021-05-04T00:27:15,179][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][3] marking unavailable shards as stale: [RA5wB02BQEixJVl_c37Guw] [2021-05-04T00:27:15,180][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][0] marking unavailable shards as stale: [okXr-o8FRMKvwMa14T8Yrg] [2021-05-04T00:27:17,463][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} join existing leader], term: 7, version: 527, delta: added 
{{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:27:20,089][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 7, version: 527, reason: Publication{term=7, version=527} [2021-05-04T00:27:39,020][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][2]]]). [2021-05-04T00:35:39,067][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [23425ms] ago, timed out [13416ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [17994] [2021-05-04T00:35:47,777][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 7, version: 588, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:35:49,823][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T00:35:57,785][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [588] is still waiting for {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} [WAITING_FOR_QUORUM], {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-pjjv6}{NLC9Jqm4RXaY3L413qjF3w}{lB0MULn5THOnqzGECWXhOA}{fd00:100:0:0:0:0:0:238c}{[fd00:100::238c]:9300}{r} [WAITING_FOR_QUORUM] [2021-05-04T00:36:05,131][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [38443ms] ago, timed out [28429ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [18047] [2021-05-04T00:36:17,795][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 7, version: 587, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T00:36:17,795][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [588] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) 
~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:89) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$2.run(Coordinator.java:1343) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.ElasticsearchException: publication cancelled before committing: timed out after 30s at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:86) ~[elasticsearch-7.9.3.jar:7.9.3] ... 5 more [2021-05-04T00:36:17,876][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:89) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$2.run(Coordinator.java:1343) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.ElasticsearchException: publication cancelled before committing: timed out after 30s at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:86) ~[elasticsearch-7.9.3.jar:7.9.3] ... 5 more [2021-05-04T00:36:20,191][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} elect leader, {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 8, version: 589, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T00:36:21,172][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 8, version: 589, reason: Publication{term=8, version=589} [2021-05-04T00:36:21,575][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [eventlog-v5][4] primary-replica resync completed with 0 operations [2021-05-04T00:36:21,582][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultlog-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:36:21,677][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultlog-v5][4] primary-replica resync completed with 0 operations [2021-05-04T00:36:21,775][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [25.9s] (37 delayed shards) [2021-05-04T00:36:21,972][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:36:22,072][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [inventoryequipment-v5][4] primary-replica resync completed with 0 operations [2021-05-04T00:36:48,518][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][3] marking unavailable shards as stale: [V8-j3kwzRLWTYSSwybE9Ow] [2021-05-04T00:36:48,824][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][1] marking unavailable shards as stale: [v3JLnlxuQUGVw-pdZQSnRg] [2021-05-04T00:36:48,847][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][4] marking unavailable shards as stale: [mQay-2o2TVSKvA9fHZKtKw] [2021-05-04T00:36:48,847][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][2] marking unavailable shards as stale: [D5d_yxkzQBuCAdZAsax01A] [2021-05-04T00:36:51,489][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][2] marking unavailable shards as stale: [0trS6oFtT2Kvr1ajF-lyIw] [2021-05-04T00:36:52,050][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][3] marking unavailable shards as stale: [7fAdqirsRKKWILW-fV7fqA] [2021-05-04T00:36:53,352][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][4] marking unavailable shards as stale: [3UW7nL8uSI6rmwfIbxKN0g] 
[2021-05-04T00:36:53,353][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][3] marking unavailable shards as stale: [aliKKyp9Sgqdn6YdsLakEA] [2021-05-04T00:36:57,513][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][3] marking unavailable shards as stale: [SjjfsCffSleXOkJ-2yoqiw] [2021-05-04T00:36:58,308][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][2] marking unavailable shards as stale: [tzoNOGuKQ9u9WJFpzd38WQ] [2021-05-04T00:36:58,309][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][4] marking unavailable shards as stale: [QN-2zF3sTdmEGFOFLbI1Ww] [2021-05-04T00:36:58,309][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][1] marking unavailable shards as stale: [Efj24_AyQci56sBrq9k66A] [2021-05-04T00:37:02,272][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][3] marking unavailable shards as stale: [pT1M4eHsQPejTCzKKdG6Ww] [2021-05-04T00:37:02,440][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][2] marking unavailable shards as stale: [2LR7hpnRTFegiqdb1KOWLg] [2021-05-04T00:37:04,425][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][0] marking unavailable shards as stale: [fFTgLZqnS2qEUdfGFaA7UQ] [2021-05-04T00:37:04,425][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][4] marking unavailable shards as stale: [GPYbxiqNSXCJiT0zTzMd7w] [2021-05-04T00:37:05,469][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][2] marking unavailable shards as stale: [7zxsWiC0Q06PbUkhv87dow] [2021-05-04T00:37:07,983][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][3] marking unavailable shards as stale: [bjEIpT4LRMiA3JKvlMHqUw] [2021-05-04T00:37:08,890][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][0] marking unavailable shards as stale: [1Ihu5YPDSDy4ni0Tmk04rw] [2021-05-04T00:37:08,890][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][1] marking unavailable shards as stale: [4uCemLPFTbWSh1sQlbmNpA] [2021-05-04T00:37:10,364][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][3] marking unavailable shards as stale: [SFUID3-YRWKJtVqNVzvWjQ] [2021-05-04T00:37:16,859][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][2] marking unavailable shards as stale: [QlIPhvlaSGqfymMeqoNthg] [2021-05-04T00:37:17,697][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][0] marking unavailable shards as stale: [gpz0ZyYMR0C7LhKMIRAjoQ] [2021-05-04T00:37:18,076][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][3] marking unavailable shards as stale: [z2Z5Hk7XTKWBXcoqqQ3D3g] [2021-05-04T00:37:19,878][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][1] marking unavailable shards as stale: [mF5qpgIqT_CNQ3rNBBceIQ] [2021-05-04T00:37:22,380][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][1] marking unavailable shards as stale: [4rRfH1U_SRWWdETuAysTXA] [2021-05-04T00:37:22,381][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][0] marking unavailable shards as stale: [r4Bo-7RcRjiBfuEjflT2KQ] [2021-05-04T00:37:27,841][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][4] marking unavailable shards as 
stale: [x1qGmAHrTouKAZhMGI9_Fg] [2021-05-04T00:37:30,663][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][0] marking unavailable shards as stale: [vj_GrxPqS42OwJF1ZdHHfw] [2021-05-04T00:37:30,664][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][1] marking unavailable shards as stale: [IrzK6QFgRJSaVqWEg9iCgw] [2021-05-04T00:37:31,582][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][3] marking unavailable shards as stale: [Wq_0KVZeR_a-fIfyJqOdNw] [2021-05-04T00:37:33,434][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][2] marking unavailable shards as stale: [5cXKk-1nRzmrwXQL6R2JDA] [2021-05-04T00:37:34,014][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][0] marking unavailable shards as stale: [aUugJz__Q6GR-8WO_oyXww] [2021-05-04T00:37:34,385][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][4] marking unavailable shards as stale: [2fCYyV5_TbqoRhHusSv7CQ] [2021-05-04T00:37:35,672][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][0] marking unavailable shards as stale: [5YCi0djmS2WonTICipWf7w] [2021-05-04T00:37:37,499][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][1] marking unavailable shards as stale: [IzNc33DsSyq8hVlD_kqUPw] [2021-05-04T00:37:37,775][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][0] marking unavailable shards as stale: [d6sgkWoFRnCAoVYvAzpsnw] [2021-05-04T00:37:38,388][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][0]]]). 
[2021-05-04T00:39:50,874][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} join existing leader], term: 8, version: 651, delta: added {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:39:52,904][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 8, version: 651, reason: Publication{term=8, version=651} [2021-05-04T00:48:15,178][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 8, version: 712, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:48:18,350][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 8, version: 712, reason: Publication{term=8, version=712} [2021-05-04T00:48:18,480][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [connectionlog-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:48:18,493][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:48:18,582][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:48:18,599][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance24h-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:48:18,688][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [eventlog-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:48:18,701][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [mediator-server-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:48:18,775][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [maintenancemode-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:48:18,785][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [56.3s] (36 delayed shards) [2021-05-04T00:48:18,787][INFO ][o.e.c.r.a.DiskThresholdMonitor] [dev-sdnrdb-master-2] skipping monitor as a check is already in progress [2021-05-04T00:48:18,973][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [maintenancemode-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:48:19,073][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [networkelement-connection-v5][3] primary-replica resync completed with 0 operations [2021-05-04T00:49:15,825][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][3] marking unavailable shards as stale: [juqI-ObMTYK5YFDDqDHLrg] [2021-05-04T00:49:16,609][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][3] marking unavailable shards as stale: [zEhxTHrMRXScyPjNky9zYw] [2021-05-04T00:49:16,610][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][4] marking unavailable shards as stale: 
[AOiDb5yBRByB97jI9uVqrQ] [2021-05-04T00:49:16,610][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][2] marking unavailable shards as stale: [8aiQBSz_TaeTj1JIxWyGoA] [2021-05-04T00:49:18,279][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][2] marking unavailable shards as stale: [kTgYcezpRxyat-NhdkCZcg] [2021-05-04T00:49:18,611][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][2] marking unavailable shards as stale: [-kyt2c_fQyS633l4QcHsvA] [2021-05-04T00:49:18,611][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][4] marking unavailable shards as stale: [bNaADJxEQRO7OhTtnmCwPA] [2021-05-04T00:49:20,111][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][4] marking unavailable shards as stale: [e1HVnFnDQYe-xtUGzenT1Q] [2021-05-04T00:49:20,389][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][1] marking unavailable shards as stale: [J9gHSydMTimvX3FsJRjlIg] [2021-05-04T00:49:21,961][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][4] marking unavailable shards as stale: [xMNLD1M1Q6iXSYhOE8rbAg] [2021-05-04T00:49:21,972][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][3] marking unavailable shards as stale: [QvDotjpjRNKV2E9GGmAAvw] [2021-05-04T00:49:22,382][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][2] marking unavailable shards as stale: [MvwQOzuaSJeH2I0N-anTnQ] [2021-05-04T00:49:23,378][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][2] marking unavailable shards as stale: [BDjVLhi5TxyzLdRRGcPTHA] [2021-05-04T00:49:23,898][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][3] marking unavailable shards as stale: [xvqu2UDbSIOgXaIRKrcV8Q] [2021-05-04T00:49:24,926][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][3] marking unavailable shards as stale: [7unT03CpTOaReG-KBJp9sQ] [2021-05-04T00:49:24,927][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][4] marking unavailable shards as stale: [2lBxRH5IRWahb-EjZen29w] [2021-05-04T00:49:25,288][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][2] marking unavailable shards as stale: [0V4_MDT2QHGLdKtuvd5OFQ] [2021-05-04T00:49:26,388][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][1] marking unavailable shards as stale: [u1KpZpMxTL-nOvWnooVwag] [2021-05-04T00:49:26,700][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][3] marking unavailable shards as stale: [qlipXDb_SrewraPpD8HI-Q] [2021-05-04T00:49:27,675][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][4] marking unavailable shards as stale: [Do1rEsYeSgOdEYj9jsiKgg] [2021-05-04T00:49:27,675][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][2] marking unavailable shards as stale: [QkWi-zL4S02MIDKLamEOPQ] [2021-05-04T00:49:28,822][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][4] marking unavailable shards as stale: [HDJzv-RzT1i2x1qKouMKyA] [2021-05-04T00:49:29,189][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][3] marking unavailable shards as stale: [5n_y2vwYRKWQdexATZZ2Bg] [2021-05-04T00:49:30,060][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][2] 
marking unavailable shards as stale: [RmOPdH3TSCCA7KXSjUgnDg] [2021-05-04T00:49:30,060][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][3] marking unavailable shards as stale: [MjdfvB-kQfitkXubKyKDPg] [2021-05-04T00:49:31,475][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][4] marking unavailable shards as stale: [owKiS1TBQfWOuf_z1Kxf3g] [2021-05-04T00:49:31,815][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][2] marking unavailable shards as stale: [DAh7RKbaSNeXERkIf9bHeA] [2021-05-04T00:49:33,673][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][2] marking unavailable shards as stale: [VB0Mw00vQ4mkCu5mvHhEfA] [2021-05-04T00:49:33,674][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][3] marking unavailable shards as stale: [WpDDLPI8QMqyoBPSp7n7lg] [2021-05-04T00:49:34,145][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][4] marking unavailable shards as stale: [OyUUo2aPRDKP0eDRpb_hYA] [2021-05-04T00:49:34,694][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][4] marking unavailable shards as stale: [qbyCIgguTEynZaL6UorGOQ] [2021-05-04T00:49:35,311][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][1] marking unavailable shards as stale: [Ye6HqUIhR2mRsKM7CZdgUg] [2021-05-04T00:49:35,952][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][2] marking unavailable shards as stale: [fZq4XPQ_Q-G8fw0n5NwsVA] [2021-05-04T00:49:35,952][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][3] marking unavailable shards as stale: [AWabyulwRWSpy20kpJV1lw] [2021-05-04T00:49:36,240][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][3] marking unavailable shards as stale: [OO8rkJbnR4eWpV9xriM69w] [2021-05-04T00:49:37,274][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][4] marking unavailable shards as stale: [jw4O3ZXuRgKMDA0p5Hebkw] [2021-05-04T00:49:38,202][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][4]]]). 
[2021-05-04T00:51:18,976][INFO ][o.e.c.r.a.DiskThresholdMonitor] [dev-sdnrdb-master-2] skipping monitor as a check is already in progress [2021-05-04T00:52:44,297][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} join existing leader], term: 8, version: 771, delta: added {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T00:52:48,954][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 8, version: 771, reason: Publication{term=8, version=771} [2021-05-04T00:52:58,964][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [772] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T00:53:11,514][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [773] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T00:53:22,072][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [774] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T00:53:41,836][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [776] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:54:15,473][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [796] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:55:35,773][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [805] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:55:56,377][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [807] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:56:02,462][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [13408ms] ago, timed out [3402ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [26415] [2021-05-04T00:56:16,382][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [807] is still waiting for 
{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:56:26,386][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [808] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:56:43,861][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [24629ms] ago, timed out [14622ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [26572] [2021-05-04T00:56:43,861][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [13621ms] ago, timed out [3806ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [26611] [2021-05-04T00:56:46,395][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [808] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:56:56,399][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [809] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:57:08,618][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [810] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:57:26,064][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [811] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:57:46,068][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [811] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:57:56,072][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [812] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T00:58:02,877][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T00:58:16,095][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [812] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] 
[2021-05-04T00:58:16,575][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: followers check retry count exceeded], term: 8, version: 813, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T00:58:17,879][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T00:58:17,881][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 8, version: 813, reason: Publication{term=8, version=813} [2021-05-04T00:58:17,976][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [connectionlog-v5][1] primary-replica resync completed with 0 operations [2021-05-04T00:58:17,991][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:58:18,076][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [guicutthrough-v5][1] primary-replica resync completed with 0 operations [2021-05-04T00:58:18,088][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [guicutthrough-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:58:18,173][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [inventoryequipment-v5][1] primary-replica resync completed with 0 operations [2021-05-04T00:58:18,190][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [58.3s] (42 delayed shards) [2021-05-04T00:58:18,273][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance24h-v5][1] primary-replica resync completed with 0 operations [2021-05-04T00:58:18,276][WARN ][o.e.i.s.RetentionLeaseSyncAction] [dev-sdnrdb-master-2] [[faultcurrent-v5][1]] failed to perform indices:admin/seq_no/retention_lease_sync on replica [faultcurrent-v5][1], node[ZY_kNaGnTA6BB4xy93CdRw], [R], s[STARTED], a[id=H4G9E-grRnewYEsCIIiK8w] org.elasticsearch.client.transport.NoNodeAvailableException: unknown node [ZY_kNaGnTA6BB4xy93CdRw] at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicasProxy.performOn(TransportReplicationAction.java:1084) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.replication.ReplicationOperation$3.tryAction(ReplicationOperation.java:244) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.RetryableAction$1.doRun(RetryableAction.java:99) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
Suppressed: org.elasticsearch.transport.NodeDisconnectedException: [dev-sdnrdb-master-1][[fd00:100::8ad]:9300][indices:admin/seq_no/retention_lease_sync[r]] disconnected [2021-05-04T00:58:18,377][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][1] marking unavailable shards as stale: [H4G9E-grRnewYEsCIIiK8w] [2021-05-04T00:58:18,377][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][1] marking unavailable shards as stale: [otQ5SVg2ROK5E6IcWntE_g] [2021-05-04T00:58:18,472][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [mediator-server-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:58:18,578][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultcurrent-v5][2] primary-replica resync completed with 0 operations [2021-05-04T00:58:36,162][ERROR][o.e.c.a.s.ShardStateAction] [dev-sdnrdb-master-2] [inventoryequipment-v5][3] unexpected failure while failing shard [shard id [[inventoryequipment-v5][3]], allocation id [5_D2lmSXSfam_xEgXS82Vw], primary term [5], message [failed to perform indices:admin/seq_no/retention_lease_sync on replica [inventoryequipment-v5][3], node[DXyNlIdYTHuhjDcRF-thrg], [R], s[STARTED], a[id=5_D2lmSXSfam_xEgXS82Vw]], failure [RemoteTransportException[[dev-sdnrdb-master-0][[fd00:100::cb20]:9300][indices:admin/seq_no/retention_lease_sync[r]]]; nested: IllegalStateException[[inventoryequipment-v5][3] operation primary term [5] is too old (current [6])]; ], markAsStale [true]] org.elasticsearch.cluster.action.shard.ShardStateAction$NoLongerPrimaryShardException: primary term [5] did not match current primary term [6] at org.elasticsearch.cluster.action.shard.ShardStateAction$ShardFailedClusterStateTaskExecutor.execute(ShardStateAction.java:365) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2021-05-04T00:58:38,173][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 8, version: 818, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T00:58:41,197][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 8, version: 818, reason: Publication{term=8, version=818} [2021-05-04T00:59:18,435][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][4]]]). [2021-05-04T01:01:04,772][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 8, version: 900, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T01:01:09,107][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:01:14,880][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [900] is still waiting for {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} [WAITING_FOR_QUORUM], {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-pjjv6}{NLC9Jqm4RXaY3L413qjF3w}{lB0MULn5THOnqzGECWXhOA}{fd00:100:0:0:0:0:0:238c}{[fd00:100::238c]:9300}{r} [WAITING_FOR_QUORUM] [2021-05-04T01:01:21,995][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:01:24,108][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:01:34,013][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [61318ms] ago, timed out [51311ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [28914] [2021-05-04T01:01:34,014][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [50310ms] ago, timed out [40303ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [28955] [2021-05-04T01:01:34,014][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [39303ms] ago, timed out [29295ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id 
[29012] [2021-05-04T01:01:34,587][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [42504ms] ago, timed out [27623ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [28989] [2021-05-04T01:01:34,588][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [40503ms] ago, timed out [25422ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [29007] [2021-05-04T01:01:34,877][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 8, version: 899, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T01:01:34,929][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [900] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:89) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$2.run(Coordinator.java:1343) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.ElasticsearchException: publication cancelled before committing: timed out after 30s at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:86) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
5 more [2021-05-04T01:01:34,933][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:89) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$2.run(Coordinator.java:1343) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.ElasticsearchException: publication cancelled before committing: timed out after 30s at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:86) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
5 more [2021-05-04T01:01:35,887][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 9, version: 901, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, added {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T01:01:45,952][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [901] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:02:05,953][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 9, version: 901, reason: Publication{term=9, version=901} [2021-05-04T01:02:06,080][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultlog-v5][3] primary-replica resync completed with 0 operations [2021-05-04T01:02:06,173][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [mediator-server-v5][4] primary-replica resync completed with 0 operations [2021-05-04T01:02:06,191][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [eventlog-v5][2] primary-replica resync completed with 0 operations [2021-05-04T01:02:06,274][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [0s] (37 delayed shards) [2021-05-04T01:02:06,276][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30.2s] publication of cluster state version [901] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:02:06,376][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [faultcurrent-v5][1] primary-replica resync completed with 0 operations [2021-05-04T01:02:06,673][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-2] [historicalperformance24h-v5][3] primary-replica resync completed with 0 operations [2021-05-04T01:02:16,373][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [9.8s] publication of cluster state version [902] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:02:22,000][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:02:24,111][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:02:31,751][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [25430ms] ago, timed out [10416ms] ago, action [cluster:monitor/nodes/stats[n]], node 
[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [29420] [2021-05-04T01:02:31,754][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [37686ms] ago, timed out [22627ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [29360] [2021-05-04T01:02:32,439][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [40488ms] ago, timed out [25431ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [29342] [2021-05-04T01:02:34,762][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [36685ms] ago, timed out [26830ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [29378] [2021-05-04T01:02:34,763][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [47692ms] ago, timed out [37686ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [29317] [2021-05-04T01:02:34,763][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [58705ms] ago, timed out [48695ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [29265] [2021-05-04T01:02:36,476][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: followers check retry count exceeded], term: 9, version: 903, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:02:37,735][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 9, version: 903, reason: Publication{term=9, version=903} [2021-05-04T01:02:37,738][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [58.5s] (36 delayed shards) [2021-05-04T01:02:37,739][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 9, version: 904, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:02:37,993][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 9, version: 904, reason: Publication{term=9, version=904} [2021-05-04T01:02:38,020][INFO 
][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: disconnected], term: 9, version: 905, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:02:39,388][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 9, version: 905, reason: Publication{term=9, version=905} [2021-05-04T01:02:41,283][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 9, version: 908, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:02:42,004][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 9, version: 908, reason: Publication{term=9, version=908} [2021-05-04T01:02:51,647][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[faultcurrent-v5][4], [guicutthrough-v5][4], [networkelement-connection-v5][2], [guicutthrough-v5][2]]]). [2021-05-04T01:03:41,757][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultcurrent-v5][2]]]). [2021-05-04T01:09:24,636][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:09:28,439][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [11409ms] ago, timed out [1401ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [32901] [2021-05-04T01:09:28,753][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [19014ms] ago, timed out [4002ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [32875] [2021-05-04T01:10:14,786][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:10:16,160][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 9, version: 979, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T01:10:26,166][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [979] is still waiting for 
{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} [WAITING_FOR_QUORUM], {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-coordinating-only-65cbf8cbbd-pjjv6}{NLC9Jqm4RXaY3L413qjF3w}{lB0MULn5THOnqzGECWXhOA}{fd00:100:0:0:0:0:0:238c}{[fd00:100::238c]:9300}{r} [WAITING_FOR_QUORUM] [2021-05-04T01:10:27,610][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:10:28,605][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:10:29,789][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:10:46,164][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 9, version: 979, reason: Publication{term=9, version=979} [2021-05-04T01:10:46,171][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [29.9s] (37 delayed shards) [2021-05-04T01:10:46,172][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [979] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:10:54,973][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} join existing leader], term: 9, version: 980, delta: added {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T01:11:04,976][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [980] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:11:11,753][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [11617ms] ago, timed out [1605ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [33493] [2021-05-04T01:11:24,977][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}}, term: 9, version: 980, reason: Publication{term=9, version=980} [2021-05-04T01:11:24,982][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [980] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:11:34,988][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [981] is still waiting for 
{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:11:50,474][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [982] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:12:13,478][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [983] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:12:34,063][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [986] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:12:48,504][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [987] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:12:52,833][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [10009ms] ago, timed out [0ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [34092] [2021-05-04T01:13:08,506][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [29.8s] publication of cluster state version [987] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:13:11,915][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [11207ms] ago, timed out [1201ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [34174] [2021-05-04T01:13:18,511][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [988] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:13:38,515][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [988] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:13:49,022][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [989] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:14:09,025][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [989] is still waiting for 
{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:14:11,070][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:14:14,429][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:14:19,032][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [990] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:14:24,819][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [28221ms] ago, timed out [13209ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [34460] [2021-05-04T01:14:24,820][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [28821ms] ago, timed out [13810ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [34447] [2021-05-04T01:14:24,822][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [40453ms] ago, timed out [25419ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [34399] [2021-05-04T01:14:24,981][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [47057ms] ago, timed out [37050ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [34370] [2021-05-04T01:14:24,988][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [36049ms] ago, timed out [26020ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [34426] [2021-05-04T01:14:24,988][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [25019ms] ago, timed out [15011ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [34476] [2021-05-04T01:14:38,507][WARN ][o.e.c.c.LagDetector ] [dev-sdnrdb-master-2] node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}] is lagging at cluster state version [986], although publication of cluster state version [987] completed [1.5m] ago [2021-05-04T01:14:39,033][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [990] is still waiting for 
{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:14:39,072][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: followers check retry count exceeded, {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: lagging], term: 9, version: 991, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:14:40,401][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 9, version: 991, reason: Publication{term=9, version=991} [2021-05-04T01:14:40,404][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [58.6s] (36 delayed shards) [2021-05-04T01:14:46,255][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][4] marking unavailable shards as stale: [Tc1HknsTRwyMx2XASyXGug] [2021-05-04T01:14:46,256][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][4] marking unavailable shards as stale: [hGuCDmvVSdGipaWFV_LP1w] [2021-05-04T01:14:49,483][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][2] marking unavailable shards as stale: [-J0Q-z8dSgaTHPAw11ucyg] [2021-05-04T01:14:49,484][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][3] marking unavailable shards as stale: [XG-ouyRDTLCqDLv2AmMi_w] [2021-05-04T01:14:51,680][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][2] marking unavailable shards as stale: [H7Ec9AMgSvqGaSiqoqHXpA] [2021-05-04T01:14:52,373][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][4] marking unavailable shards as stale: [CSNWolWjSTOMNOmQeRHQJg] [2021-05-04T01:14:53,892][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][2] marking unavailable shards as stale: [Xb9cUt6jR-Gzh3TT1zGGRA] [2021-05-04T01:14:56,798][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][4] marking unavailable shards as stale: [JrOHx36oR0C86Y3v-JpVqA] [2021-05-04T01:14:57,382][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][4] marking unavailable shards as stale: [dWe9KO4lTHuO4XhUWSPQGA] [2021-05-04T01:14:58,929][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][2] marking unavailable shards as stale: [Q_y0KxXVQ6SgZVVC6SFJnQ] [2021-05-04T01:15:02,182][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][4] marking unavailable shards as stale: [f1oRpsE9SkSKPPiKZByR6Q] [2021-05-04T01:15:04,646][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][2] marking unavailable shards as stale: [WCrJiS45Ql-DkR1JEPlPIg] [2021-05-04T01:15:04,893][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][4] marking unavailable shards as stale: [o5OaWxeRQVi4GWSXyPH9Ww] [2021-05-04T01:15:06,015][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][4] marking unavailable shards as stale: 
[f3gQ1MD0T_CDrl50Wh4A9Q] [2021-05-04T01:15:39,607][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][2] marking unavailable shards as stale: [WE-WoTYfQDmLLzOC1HEnGA] [2021-05-04T01:15:40,089][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][0] marking unavailable shards as stale: [yY3RJVGqS3qq1vDDCCYw9A] [2021-05-04T01:15:40,090][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][4] marking unavailable shards as stale: [KeTLym7bSkKF3Gfhg8Pg7w] [2021-05-04T01:15:40,090][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultlog-v5][1] marking unavailable shards as stale: [KRs0tEHPRiCroO-B-FjKpg] [2021-05-04T01:15:43,172][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][0] marking unavailable shards as stale: [tsTkA4dlQJemVCyDIBzr1g] [2021-05-04T01:15:43,622][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [inventoryequipment-v5][1] marking unavailable shards as stale: [lIjMHoWtRGusHrDDbnxt5w] [2021-05-04T01:15:44,583][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [eventlog-v5][0] marking unavailable shards as stale: [4VMKWdtsQceQ8lHFAp84ow] [2021-05-04T01:15:45,372][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][0] marking unavailable shards as stale: [PXSvkWopQfKT9Eytfpq1Ng] [2021-05-04T01:15:48,772][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [connectionlog-v5][1] marking unavailable shards as stale: [23BhYdvyTsKX9YG0e2LpHQ] [2021-05-04T01:15:51,685][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][0] marking unavailable shards as stale: [BEYdfLJWT02XLWoBQbTMog] [2021-05-04T01:15:51,874][WARN ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-2] [gc][8328] overhead, spent [612ms] collecting in the last [1s] [2021-05-04T01:15:53,476][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [mediator-server-v5][1] marking unavailable shards as stale: [vwBdlVGySp-5zzi8X11ctw] [2021-05-04T01:15:54,211][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance24h-v5][0] marking unavailable shards as stale: [kKV0s86EQgOPEBVuc8WTNA] [2021-05-04T01:15:55,207][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [maintenancemode-v5][0] marking unavailable shards as stale: [2rZRTcOJQfuTWGyV6-4BJg] [2021-05-04T01:15:55,544][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [historicalperformance15min-v5][0] marking unavailable shards as stale: [5FzfmeaFSgO8MLH7ksYyIg] [2021-05-04T01:15:57,316][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][0] marking unavailable shards as stale: [_D-L8p34SEi_XWVNis9jzg] [2021-05-04T01:15:57,779][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [guicutthrough-v5][1] marking unavailable shards as stale: [MToxScl6T7K9I1Wn9EahVg] [2021-05-04T01:15:58,017][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][0] marking unavailable shards as stale: [EaiQ9FjzTw2-htMVXyRxFw] [2021-05-04T01:15:58,897][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [networkelement-connection-v5][1] marking unavailable shards as stale: [tWymRqjuT0C8_yNkrlBAsQ] [2021-05-04T01:15:59,338][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][0] marking unavailable shards as stale: [EjgynhjfSHKmcF4ez6LhFg] [2021-05-04T01:16:00,095][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-2] [faultcurrent-v5][2] marking unavailable shards as stale: 
[KRxF3A0CTLivJ8D92jRX1A] [2021-05-04T01:16:32,536][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 9, version: 1086, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:16:42,540][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1086] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:17:02,541][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 9, version: 1086, reason: Publication{term=9, version=1086} [2021-05-04T01:17:02,544][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1086] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:17:12,548][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1087] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:17:32,554][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1087] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:17:32,557][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: followers check retry count exceeded], term: 9, version: 1088, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:17:32,605][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 9, version: 1088, reason: Publication{term=9, version=1088} [2021-05-04T01:17:32,611][INFO ][o.e.c.r.a.DiskThresholdMonitor] [dev-sdnrdb-master-2] skipping monitor as a check is already in progress [2021-05-04T01:19:02,677][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:19:02,680][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:19:04,205][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 9, version: 1089, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} 
[2021-05-04T01:19:04,283][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 9, version: 1088, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T01:19:04,283][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1089] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T01:19:04,478][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T01:19:14,287][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.203.32:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:19:24,289][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.203.32:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:19:34,292][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.203.32:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:19:44,295][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.203.32:9300, 10.242.8.173:9300] from hosts providers and 
[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:19:44,665][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:19:54,297][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.203.32:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:20:02,682][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:20:02,682][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:20:02,683][INFO ][o.e.c.r.a.DiskThresholdMonitor] [dev-sdnrdb-master-2] skipping monitor as a check is already in progress [2021-05-04T01:20:04,298][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.203.32:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:20:14,300][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] 
from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:20:24,302][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:20:34,303][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:20:39,101][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [126915ms] ago, timed out [116908ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [36665] [2021-05-04T01:20:39,101][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [115907ms] ago, timed out [105899ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [36713] [2021-05-04T01:20:39,101][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [104898ms] ago, timed out [94892ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [36751] [2021-05-04T01:20:44,305][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts 
providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:20:54,306][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:21:04,309][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 9, last-accepted version 1088 in term 9 [2021-05-04T01:21:10,078][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} elect leader, {dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 10, version: 1089, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:21:17,211][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [167556ms] ago, timed out [152542ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [36651] [2021-05-04T01:21:17,212][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [164551ms] ago, timed out [149540ms] ago, action [cluster:monitor/nodes/stats[n]], node 
[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [36672] [2021-05-04T01:21:17,285][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [122706ms] ago, timed out [107695ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [36879] [2021-05-04T01:21:17,286][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [164752ms] ago, timed out [149741ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [36676] [2021-05-04T01:21:17,286][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [104693ms] ago, timed out [89677ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [37028] [2021-05-04T01:21:17,290][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [104693ms] ago, timed out [89677ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [37030] [2021-05-04T01:21:20,083][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [9.8s] publication of cluster state version [1089] is still waiting for {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:21:40,085][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 10, version: 1089, reason: Publication{term=10, version=1089} [2021-05-04T01:21:40,091][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [29.8s] publication of cluster state version [1089] is still waiting for {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:21:50,175][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [9.8s] publication of cluster state version [1090] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:22:10,177][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [29.8s] publication of cluster state version [1090] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} 
[SENT_PUBLISH_REQUEST] [2021-05-04T01:22:10,180][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: followers check retry count exceeded], term: 10, version: 1091, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:22:10,303][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 10, version: 1091, reason: Publication{term=10, version=1091} [2021-05-04T01:28:03,666][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 10, version: 1093, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:28:13,671][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1093] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:28:33,671][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 10, version: 1093, reason: Publication{term=10, version=1093} [2021-05-04T01:28:33,675][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1093] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:28:43,679][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1094] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:29:03,683][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1094] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:29:08,918][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: followers check retry count exceeded], term: 10, version: 1095, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:29:15,671][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 10, version: 1095, reason: Publication{term=10, version=1095} [2021-05-04T01:29:41,479][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [10207ms] ago, timed out [200ms] ago, action 
[internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [40155] [2021-05-04T01:30:22,300][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 10, version: 1096, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:30:32,305][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1096] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:30:52,305][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 10, version: 1096, reason: Publication{term=10, version=1096} [2021-05-04T01:30:52,310][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1096] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:31:02,313][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1097] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:31:18,873][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [12019ms] ago, timed out [2001ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [40629] [2021-05-04T01:31:22,316][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [29.8s] publication of cluster state version [1097] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:31:32,322][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1098] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:31:52,326][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1098] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:32:22,310][WARN ][o.e.c.c.LagDetector ] [dev-sdnrdb-master-2] node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}] is lagging at cluster state version [0], although publication of cluster state version [1096] completed [1.5m] ago [2021-05-04T01:32:22,327][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] 
node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: lagging], term: 10, version: 1099, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:32:22,463][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 10, version: 1099, reason: Publication{term=10, version=1099} [2021-05-04T01:32:22,466][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [59.8s] (2 delayed shards) [2021-05-04T01:33:00,149][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 10, version: 1100, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:33:10,153][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1100] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:33:27,367][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [24626ms] ago, timed out [14612ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [41252] [2021-05-04T01:33:30,153][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 10, version: 1100, reason: Publication{term=10, version=1100} [2021-05-04T01:33:30,158][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1100] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:33:31,269][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [17615ms] ago, timed out [7608ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [41297] [2021-05-04T01:33:40,163][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1101] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:34:00,165][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1101] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:34:10,170][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state 
version [1102] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:34:30,176][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1102] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:34:30,183][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: followers check retry count exceeded], term: 10, version: 1103, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:34:30,388][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 10, version: 1103, reason: Publication{term=10, version=1103} [2021-05-04T01:34:30,390][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [59.7s] (2 delayed shards) [2021-05-04T01:37:43,713][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [20415ms] ago, timed out [10406ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [42487] [2021-05-04T01:38:14,864][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [14410ms] ago, timed out [4403ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [42627] [2021-05-04T01:38:54,941][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 10, version: 1105, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T01:38:54,974][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 10, version: 1104, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T01:38:54,972][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1105] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T01:38:55,074][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T01:39:04,948][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 10, last-accepted version 1104 in term 10 [2021-05-04T01:39:14,950][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 10, last-accepted version 1104 in term 10 [2021-05-04T01:39:17,899][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:39:24,953][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 10, last-accepted version 1104 in term 10 [2021-05-04T01:39:34,954][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, 
{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 10, last-accepted version 1104 in term 10 [2021-05-04T01:39:44,956][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 10, last-accepted version 1104 in term 10 [2021-05-04T01:39:54,958][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 10, last-accepted version 1104 in term 10 [2021-05-04T01:39:58,590][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [70677ms] ago, timed out [55662ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [42819] [2021-05-04T01:39:58,601][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [95700ms] ago, timed out [85692ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [42722] [2021-05-04T01:39:58,602][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [84692ms] ago, timed out [74684ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [42760] [2021-05-04T01:39:58,602][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [73679ms] ago, timed out [63671ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [42800] [2021-05-04T01:39:59,629][INFO ][o.e.c.s.MasterService ] 
[dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 11, version: 1105, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T01:40:00,064][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 11, version: 1105, reason: Publication{term=11, version=1105} [2021-05-04T01:43:51,512][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 11, version: 1108, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:44:01,516][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1108] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:44:21,516][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 11, version: 1108, reason: Publication{term=11, version=1108} [2021-05-04T01:44:21,521][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1108] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:44:31,525][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1109] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:44:48,886][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [26432ms] ago, timed out [16425ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [44646] [2021-05-04T01:44:49,284][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [15822ms] ago, timed out [5813ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [44706] [2021-05-04T01:44:51,528][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1109] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} 
[SENT_PUBLISH_REQUEST] [2021-05-04T01:45:05,232][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [43644ms] ago, timed out [28629ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [44631] [2021-05-04T01:45:10,248][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [19813ms] ago, timed out [9806ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [44783] [2021-05-04T01:45:18,013][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [47446ms] ago, timed out [32429ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [44684] [2021-05-04T01:45:28,021][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1110] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:45:48,025][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30.1s] publication of cluster state version [1110] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:45:48,155][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [28027ms] ago, timed out [18014ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [44934] [2021-05-04T01:45:48,156][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [17013ms] ago, timed out [7006ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [44977] [2021-05-04T01:45:51,521][WARN ][o.e.c.c.LagDetector ] [dev-sdnrdb-master-2] node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}] is lagging at cluster state version [0], although publication of cluster state version [1108] completed [1.5m] ago [2021-05-04T01:45:51,525][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: lagging], term: 11, version: 1111, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:45:51,566][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 11, version: 1111, reason: Publication{term=11, version=1111} [2021-05-04T01:45:51,568][INFO 
][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-2] scheduling reroute for delayed shards in [59.9s] (2 delayed shards) [2021-05-04T01:46:06,958][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-join[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} join existing leader], term: 11, version: 1112, delta: added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:46:16,961][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1112] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:46:36,961][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] added {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 11, version: 1112, reason: Publication{term=11, version=1112} [2021-05-04T01:46:36,964][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1112] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-05-04T01:46:46,967][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1113] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:46:48,145][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [10807ms] ago, timed out [800ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [45344] [2021-05-04T01:47:06,969][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1113] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:47:16,984][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [10s] publication of cluster state version [1114] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:47:34,646][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [43054ms] ago, timed out [33048ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [45423] [2021-05-04T01:47:34,647][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [32047ms] ago, timed out [22026ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [45478] [2021-05-04T01:47:34,647][WARN ][o.e.t.TransportService ] 
[dev-sdnrdb-master-2] Received response for a request that has timed out, sent [21025ms] ago, timed out [11018ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [45530] [2021-05-04T01:47:34,648][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [43255ms] ago, timed out [28234ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [45418] [2021-05-04T01:47:35,972][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [59065ms] ago, timed out [44055ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}], id [45331] [2021-05-04T01:47:36,987][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-2] after [30s] publication of cluster state version [1114] is still waiting for {dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-05-04T01:47:36,990][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr} reason: followers check retry count exceeded], term: 11, version: 1115, delta: removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}} [2021-05-04T01:47:38,206][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] removed {{dev-sdnrdb-master-1}{ZY_kNaGnTA6BB4xy93CdRw}{ak6z3cGkQ_ClYwA5IL5HpA}{fd00:100:0:0:0:0:0:8ad}{[fd00:100::8ad]:9300}{dmr}}, term: 11, version: 1115, reason: Publication{term=11, version=1115} [2021-05-04T01:48:32,701][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 11, version: 1116, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T01:48:32,773][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 11, version: 1115, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T01:48:32,773][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1116] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T01:48:32,778][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T01:48:38,213][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:48:42,773][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 11, last-accepted version 1115 in term 11 [2021-05-04T01:48:52,775][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 11, last-accepted version 1115 in term 11 [2021-05-04T01:49:02,777][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 11, last-accepted version 1115 in term 11 [2021-05-04T01:49:07,712][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [59457ms] ago, timed out [44445ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [45788] [2021-05-04T01:49:07,973][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [67269ms] ago, timed out [57256ms] ago, action [internal:coordination/fault_detection/follower_check], node 
[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [45745] [2021-05-04T01:49:08,034][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [56455ms] ago, timed out [46447ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [45795] [2021-05-04T01:49:08,035][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [45446ms] ago, timed out [35440ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [45833] [2021-05-04T01:49:08,875][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 12, version: 1116, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T01:49:09,407][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 12, version: 1116, reason: Publication{term=12, version=1116} [2021-05-04T01:57:04,982][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [13817ms] ago, timed out [3804ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [48306] [2021-05-04T01:57:06,297][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [26425ms] ago, timed out [11415ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [48271] [2021-05-04T01:57:43,703][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 12, version: 1119, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T01:57:43,772][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 12, version: 1118, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T01:57:43,773][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing 
[node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1119] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T01:57:43,873][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T01:57:53,271][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [41640ms] ago, timed out [31626ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [48388] [2021-05-04T01:57:53,668][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [31025ms] ago, timed out [21016ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [48425] [2021-05-04T01:57:53,685][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [20015ms] ago, timed out [10008ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [48463] [2021-05-04T01:57:53,709][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 12, last-accepted version 1118 in term 12 [2021-05-04T01:58:03,711][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 12, last-accepted version 1118 in term 12 [2021-05-04T01:58:04,860][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T01:58:13,713][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using 
[10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 12, last-accepted version 1118 in term 12 [2021-05-04T01:58:23,715][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 12, last-accepted version 1118 in term 12 [2021-05-04T01:58:33,716][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 12, last-accepted version 1118 in term 12 [2021-05-04T01:58:43,718][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 12, last-accepted version 1118 in term 12 [2021-05-04T01:58:46,689][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [71869ms] ago, timed out [56858ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [48466] [2021-05-04T01:58:47,484][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] 
elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 13, version: 1119, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T01:58:48,055][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 13, version: 1119, reason: Publication{term=13, version=1119} [2021-05-04T02:05:10,260][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 13, version: 1122, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T02:05:10,274][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 13, version: 1121, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T02:05:10,275][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1122] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T02:05:10,283][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T02:05:18,170][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:05:20,275][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 13, last-accepted version 1121 in term 13 [2021-05-04T02:05:30,277][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 13, last-accepted version 1121 in term 13 
[2021-05-04T02:05:40,279][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 13, last-accepted version 1121 in term 13 [2021-05-04T02:05:50,281][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 13, last-accepted version 1121 in term 13 [2021-05-04T02:06:00,284][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 13, last-accepted version 1121 in term 13 [2021-05-04T02:06:05,667][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [87482ms] ago, timed out [77475ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [50668] [2021-05-04T02:06:05,668][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [76474ms] ago, timed out [66468ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [50704] [2021-05-04T02:06:05,668][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has 
timed out, sent [65459ms] ago, timed out [55452ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [50736] [2021-05-04T02:06:05,950][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [77875ms] ago, timed out [62857ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [50699] [2021-05-04T02:06:06,596][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 14, version: 1122, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T02:06:07,164][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 14, version: 1122, reason: Publication{term=14, version=1122} [2021-05-04T02:08:07,190][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:08:09,381][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 14, version: 1126, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T02:08:09,387][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 14, version: 1125, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T02:08:09,387][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1126] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) 
~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T02:08:09,476][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T02:08:19,390][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 14, last-accepted version 1125 in term 14 [2021-05-04T02:08:29,391][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 14, last-accepted version 1125 in term 14 [2021-05-04T02:08:31,111][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [53644ms] ago, timed out [43637ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [51676] [2021-05-04T02:08:31,112][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [42636ms] ago, timed out [32624ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [51730] [2021-05-04T02:08:31,186][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [31623ms] ago, timed out [21815ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [51762] [2021-05-04T02:08:31,291][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [54044ms] ago, timed out [39034ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [51673] [2021-05-04T02:08:31,613][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, 
{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 15, version: 1126, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T02:08:32,201][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 15, version: 1126, reason: Publication{term=15, version=1126} [2021-05-04T02:15:02,300][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:15:05,319][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 15, version: 1129, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T02:15:05,373][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 15, version: 1128, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T02:15:05,373][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1129] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T02:15:05,474][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T02:15:07,472][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:15:15,373][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 15, last-accepted version 1128 in term 15 [2021-05-04T02:15:25,376][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 15, last-accepted version 1128 in term 15 
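
The pattern in the preceding entries repeats throughout this log: dev-sdnrdb-master-2 wins an election, dev-sdnrdb-master-0 is dropped with "followers check retry count exceeded", publishing the resulting node-left cluster state fails because "non-failed nodes do not form a quorum", and the node falls back to candidate until the next election. The third master-eligible node id (ZY_kNaGnTA6BB4xy93CdRw, presumably dev-sdnrdb-master-1) never appears in the discovered list, so every quorum depends on the flaky link to master-0. A minimal diagnostic sketch follows; it is not part of the captured log, and the endpoint URL is an assumption, since the log only shows the transport address [fd00:100::c1e3]:9300 while the REST API conventionally listens on port 9200.

    # Illustrative diagnostic sketch only: not part of the captured log.
    # Polls one node's view of cluster health and of the elected master.
    # NODE is an assumption (transport port 9300 appears in the log; the
    # HTTP/REST port is normally 9200).
    import json
    import urllib.error
    import urllib.request

    NODE = "http://[fd00:100::c1e3]:9200"  # assumed REST endpoint of dev-sdnrdb-master-2

    def get_json(path: str) -> dict:
        with urllib.request.urlopen(f"{NODE}{path}", timeout=10) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        try:
            # While the node has no master this call typically fails with 503
            # (master_not_discovered_exception), i.e. exactly the window the
            # ClusterFormationFailureHelper warnings above cover.
            health = get_json("/_cluster/health")
            print("status:", health["status"], "nodes seen:", health["number_of_nodes"])
        except urllib.error.HTTPError as err:
            print("cluster health unavailable, HTTP", err.code)

        # local=true reads the cluster state this node has applied, so it answers
        # even during an election; master_node is null in that window.
        state = get_json("/_cluster/state/master_node?local=true")
        print("elected master node id:", state.get("master_node"))

Running the same sketch against each of the three master pods would show whether dev-sdnrdb-master-1 is reachable at all, which the discovery lists above (only master-2 and master-0 ever appear) suggest it is not.
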
[2021-05-04T02:15:35,378][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 15, last-accepted version 1128 in term 15 [2021-05-04T02:15:45,380][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 15, last-accepted version 1128 in term 15 [2021-05-04T02:15:55,382][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 15, last-accepted version 1128 in term 15 [2021-05-04T02:16:02,306][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:16:05,383][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, 
{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 15, last-accepted version 1128 in term 15 [2021-05-04T02:16:15,385][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 15, last-accepted version 1128 in term 15 [2021-05-04T02:16:25,387][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 15, last-accepted version 1128 in term 15 [2021-05-04T02:16:30,894][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [117703ms] ago, timed out [107696ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [53715] [2021-05-04T02:16:30,895][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [106695ms] ago, timed out [96688ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [53759] [2021-05-04T02:16:30,896][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [95687ms] ago, timed out [85675ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [53791] [2021-05-04T02:16:31,306][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [119104ms] ago, timed out [104093ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [53704] [2021-05-04T02:16:31,307][WARN ][o.e.t.TransportService 
] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [113900ms] ago, timed out [98890ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [53721] [2021-05-04T02:16:31,308][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [59054ms] ago, timed out [44041ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [54032] [2021-05-04T02:16:31,834][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 16, version: 1129, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T02:16:33,099][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 16, version: 1129, reason: Publication{term=16, version=1129} [2021-05-04T02:19:04,355][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:19:07,679][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 16, version: 1133, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T02:19:07,683][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 16, version: 1132, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T02:19:07,684][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1133] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T02:19:07,773][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T02:19:17,683][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.203.32:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 16, last-accepted version 1132 in term 16 [2021-05-04T02:19:27,593][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [51853ms] ago, timed out [41847ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [55058] [2021-05-04T02:19:27,685][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.203.32:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 16, last-accepted version 1132 in term 16 [2021-05-04T02:19:27,760][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [41046ms] ago, timed out [31027ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [55098] [2021-05-04T02:19:27,760][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [30026ms] ago, timed out [20014ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [55138] [2021-05-04T02:19:27,988][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [53454ms] ago, timed out [38644ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [55053] [2021-05-04T02:19:28,837][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, 
{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 17, version: 1133, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T02:19:29,430][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 17, version: 1133, reason: Publication{term=17, version=1133} [2021-05-04T02:27:18,037][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [30027ms] ago, timed out [20016ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [57459] [2021-05-04T02:27:18,039][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [19015ms] ago, timed out [9008ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [57491] [2021-05-04T02:27:18,086][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [18215ms] ago, timed out [3203ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [57494] [2021-05-04T02:30:48,180][INFO ][o.e.c.r.a.DiskThresholdMonitor] [dev-sdnrdb-master-2] skipping monitor as a check is already in progress [2021-05-04T02:31:18,187][INFO ][o.e.c.r.a.DiskThresholdMonitor] [dev-sdnrdb-master-2] skipping monitor as a check is already in progress [2021-05-04T02:34:18,366][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:34:18,386][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:34:22,003][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 17, version: 1136, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T02:34:22,009][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 17, version: 1135, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T02:34:22,009][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1136] 
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T02:34:22,074][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
14 more [2021-05-04T02:34:24,557][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [36236ms] ago, timed out [21217ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [59409] [2021-05-04T02:34:24,564][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [36236ms] ago, timed out [21217ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [59405] [2021-05-04T02:34:24,566][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [34635ms] ago, timed out [24624ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [59414] [2021-05-04T02:34:24,567][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [23618ms] ago, timed out [13608ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [59446] [2021-05-04T02:34:24,567][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [12608ms] ago, timed out [2601ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [59492] [2021-05-04T02:34:25,051][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 18, version: 1136, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T02:34:25,583][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 18, version: 1136, reason: Publication{term=18, version=1136} [2021-05-04T02:36:18,414][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:36:18,482][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:36:19,673][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 18, version: 1140, delta: removed 
{{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T02:36:19,678][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 18, version: 1139, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T02:36:19,678][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1140] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T02:36:19,682][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] 
at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T02:36:25,653][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:36:29,679][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:36:39,681][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:36:49,683][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:36:59,685][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] 
which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:37:09,686][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:37:18,419][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:37:18,484][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:37:19,688][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:37:29,689][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:37:39,690][WARN 
][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:37:49,692][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:37:59,694][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:38:09,696][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 
[2021-05-04T02:38:19,698][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:38:29,699][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:38:39,701][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:38:49,703][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in 
term 18 [2021-05-04T02:38:59,705][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:39:09,706][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:39:16,150][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [208396ms] ago, timed out [198390ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [59987] [2021-05-04T02:39:16,151][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [197388ms] ago, timed out [187379ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [60039] [2021-05-04T02:39:16,151][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [186378ms] ago, timed out [176371ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [60085] [2021-05-04T02:39:19,505][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [211000ms] ago, timed out [195990ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [59998] [2021-05-04T02:39:19,520][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [150946ms] ago, timed out [135926ms] ago, action [cluster:monitor/nodes/stats[n]], node 
[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [60341] [2021-05-04T02:39:19,521][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [211000ms] ago, timed out [195990ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [60001] [2021-05-04T02:39:19,522][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [151146ms] ago, timed out [136126ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [60336] [2021-05-04T02:39:19,524][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-2] Received response for a request that has timed out, sent [203796ms] ago, timed out [188783ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}], id [60018] [2021-05-04T02:39:19,708][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.8.173:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 18, last-accepted version 1139 in term 18 [2021-05-04T02:39:20,170][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr} elect leader, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 19, version: 1140, delta: master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]} [2021-05-04T02:39:20,622][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [], current [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}]}, term: 19, version: 1140, reason: Publication{term=19, version=1140} [2021-05-04T02:47:47,762][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded], term: 19, version: 1143, delta: removed {{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}} [2021-05-04T02:47:47,781][INFO 
][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-2] master node changed {previous [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}], current []}, term: 19, version: 1142, reason: becoming candidate: Publication.onCompletion(false) [2021-05-04T02:47:47,873][WARN ][o.e.c.s.MasterService ] [dev-sdnrdb-master-2] failing [node-left[{dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [1143] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] 
at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T02:47:47,877][ERROR][o.e.c.c.Coordinator ] [dev-sdnrdb-master-2] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 14 more [2021-05-04T02:47:54,810][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-05-04T02:47:57,781][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:48:07,783][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:48:17,785][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:48:27,787][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue 
using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:48:37,788][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:48:47,790][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.203.32:9300, 10.242.35.140:9300, 10.242.8.173:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:48:57,791][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.8.173:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:49:07,793][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will 
continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.8.173:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:49:17,795][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.8.173:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:49:27,797][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.8.173:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:49:37,799][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.8.173:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:49:47,801][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery 
will continue using [10.242.193.227:9300, 10.242.35.140:9300, 10.242.8.173:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:49:57,803][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:50:07,804][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:50:17,806][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:50:27,807][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; 
discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:50:37,809][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:50:47,810][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.203.32:9300, 10.242.35.140:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:50:57,813][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:51:07,814][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a 
quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:51:17,816][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19 [2021-05-04T02:51:27,818][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [ZY_kNaGnTA6BB4xy93CdRw, DXyNlIdYTHuhjDcRF-thrg, GK-vkZRORy2PWdyDEikGTg], have discovered [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}] which is not a quorum; discovery will continue using [10.242.193.227:9300, 10.242.8.173:9300, 10.242.35.140:9300, 10.242.203.32:9300] from hosts providers and [{dev-sdnrdb-master-2}{GK-vkZRORy2PWdyDEikGTg}{PvVsULpEQOKB3HxuZqPhSQ}{fd00:100:0:0:0:0:0:c1e3}{[fd00:100::c1e3]:9300}{dmr}, {dev-sdnrdb-master-0}{DXyNlIdYTHuhjDcRF-thrg}{7YUkOSxdRSS1le8C_zctwQ}{fd00:100:0:0:0:0:0:cb20}{[fd00:100::cb20]:9300}{dmr}] from last-known cluster state; node term 19, last-accepted version 1142 in term 19