 22:51:41.19
 22:51:41.20 Welcome to the Bitnami elasticsearch container
 22:51:41.28 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
 22:51:41.29 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
 22:51:41.30
 22:51:41.31 INFO  ==> ** Starting Elasticsearch setup **
 22:51:41.69 INFO  ==> Configuring/Initializing Elasticsearch...
 22:51:42.08 INFO  ==> Setting default configuration
 22:51:42.20 INFO  ==> Configuring Elasticsearch cluster settings...
 22:51:42.50 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-0: fd00:100::2e9a 10.242.46.154, will use fd00:100::2e9a
 22:51:42.70 WARN  ==> Found more than one IP address associated to hostname dev-sdnrdb-master-0: fd00:100::2e9a 10.242.46.154, will use fd00:100::2e9a
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
 22:52:01.59 INFO  ==> ** Elasticsearch setup finished! **
 22:52:01.79 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-07-07T22:52:39,685][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/4.15.0-117-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2021-07-07T22:52:39,688][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] JVM home [/opt/bitnami/java]
[2021-07-07T22:52:39,688][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-17491359726249479522, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-07-07T22:52:58,190][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [aggs-matrix-stats]
[2021-07-07T22:52:58,191][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [analysis-common]
[2021-07-07T22:52:58,191][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [geo]
[2021-07-07T22:52:58,192][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-common]
[2021-07-07T22:52:58,193][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-geoip]
[2021-07-07T22:52:58,194][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [ingest-user-agent]
[2021-07-07T22:52:58,194][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [kibana]
[2021-07-07T22:52:58,195][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-expression]
[2021-07-07T22:52:58,196][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-mustache]
[2021-07-07T22:52:58,287][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [lang-painless]
[2021-07-07T22:52:58,288][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [mapper-extras]
[2021-07-07T22:52:58,288][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [parent-join]
[2021-07-07T22:52:58,290][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [percolator]
[2021-07-07T22:52:58,291][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [rank-eval]
[2021-07-07T22:52:58,291][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [reindex]
[2021-07-07T22:52:58,292][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [repository-url]
[2021-07-07T22:52:58,292][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [tasks]
[2021-07-07T22:52:58,293][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded module [transport-netty4]
[2021-07-07T22:52:58,294][INFO ][o.e.p.PluginsService ] [dev-sdnrdb-master-0] loaded plugin [repository-s3]
[2021-07-07T22:52:59,088][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-0] using [1] data paths, mounts [[/bitnami/elasticsearch/data (172.16.10.75:/dockerdata-nfs/dev/elastic-master-0)]], net usable_space [180.6gb], net total_space [195.8gb], types [nfs4]
[2021-07-07T22:52:59,090][INFO ][o.e.e.NodeEnvironment ] [dev-sdnrdb-master-0] heap size [123.7mb], compressed ordinary object pointers [true]
[2021-07-07T22:52:59,489][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] node name [dev-sdnrdb-master-0], node ID [X-jQFeQAQKS1fKuMvjK-zg], cluster name [sdnrdb-cluster]
[2021-07-07T22:53:43,990][INFO ][o.e.t.NettyAllocator ] [dev-sdnrdb-master-0] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2021-07-07T22:53:44,707][INFO ][o.e.d.DiscoveryModule ] [dev-sdnrdb-master-0] using discovery type [zen] and seed hosts providers [settings]
[2021-07-07T22:53:48,602][WARN ][o.e.g.DanglingIndicesState] [dev-sdnrdb-master-0] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-07-07T22:53:50,791][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] initialized
[2021-07-07T22:53:50,792][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] starting ...
[2021-07-07T22:53:52,697][INFO ][o.e.t.TransportService ] [dev-sdnrdb-master-0] publish_address {[fd00:100::2e9a]:9300}, bound_addresses {[::]:9300} [2021-07-07T22:53:54,602][INFO ][o.e.b.BootstrapChecks ] [dev-sdnrdb-master-0] bound or publishing to a non-loopback address, enforcing bootstrap checks [2021-07-07T22:54:03,350][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][12] overhead, spent [344ms] collecting in the last [1s] [2021-07-07T22:54:04,818][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}]; discovery will continue using [10.242.46.154:9300, 10.242.208.203:9300, 10.242.157.81:9300, 10.242.214.97:9300] from hosts providers and [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-07-07T22:54:14,821][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}]; discovery will continue using [10.242.46.154:9300, 10.242.208.203:9300, 10.242.157.81:9300, 10.242.214.97:9300] from hosts providers and [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-07-07T22:54:24,828][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}]; discovery will continue using [10.242.46.154:9300, 10.242.208.203:9300, 10.242.157.81:9300, 10.242.214.97:9300] from hosts providers and [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-07-07T22:54:34,832][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}]; discovery will continue using [10.242.46.154:9300, 10.242.208.203:9300, 10.242.157.81:9300, 10.242.214.97:9300] from hosts providers and 
[{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-07-07T22:54:44,835][WARN ][o.e.c.c.ClusterFormationFailureHelper] [dev-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [dev-sdnrdb-master-0, dev-sdnrdb-master-1, dev-sdnrdb-master-2] to bootstrap a cluster: have discovered [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}]; discovery will continue using [10.242.46.154:9300, 10.242.208.203:9300, 10.242.157.81:9300, 10.242.214.97:9300] from hosts providers and [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 [2021-07-07T22:54:52,976][INFO ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] setting initial configuration to VotingConfiguration{6jo_xEULR8iaZ46qgUoULg,X-jQFeQAQKS1fKuMvjK-zg,{bootstrap-placeholder}-dev-sdnrdb-master-2} [2021-07-07T22:54:54,886][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr} elect leader, {dev-sdnrdb-master-1}{6jo_xEULR8iaZ46qgUoULg}{L15YYS1rRHyUpvUuDD3tWw}{fd00:100:0:0:0:0:0:d0cb}{[fd00:100::d0cb]:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}]}, added {{dev-sdnrdb-master-1}{6jo_xEULR8iaZ46qgUoULg}{L15YYS1rRHyUpvUuDD3tWw}{fd00:100:0:0:0:0:0:d0cb}{[fd00:100::d0cb]:9300}{dmr}} [2021-07-07T22:54:55,890][INFO ][o.e.c.c.CoordinationState] [dev-sdnrdb-master-0] cluster UUID set to [RAZ9SwJ5QwKCs68aJaZ-vg] [2021-07-07T22:54:56,804][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] master node changed {previous [], current [{dev-sdnrdb-master-0}{X-jQFeQAQKS1fKuMvjK-zg}{nVRbvxUJSESFQJmBGP434Q}{fd00:100:0:0:0:0:0:2e9a}{[fd00:100::2e9a]:9300}{dmr}]}, added {{dev-sdnrdb-master-1}{6jo_xEULR8iaZ46qgUoULg}{L15YYS1rRHyUpvUuDD3tWw}{fd00:100:0:0:0:0:0:d0cb}{[fd00:100::d0cb]:9300}{dmr}}, term: 1, version: 1, reason: Publication{term=1, version=1} [2021-07-07T22:54:57,195][INFO ][o.e.h.AbstractHttpServerTransport] [dev-sdnrdb-master-0] publish_address {[fd00:100::2e9a]:9200}, bound_addresses {[::]:9200} [2021-07-07T22:54:57,195][INFO ][o.e.n.Node ] [dev-sdnrdb-master-0] started [2021-07-07T22:54:57,994][INFO ][o.e.g.GatewayService ] [dev-sdnrdb-master-0] recovered [0] indices into cluster_state [2021-07-07T22:55:27,538][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][96] overhead, spent [301ms] collecting in the last [1s] [2021-07-07T22:56:15,906][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 3, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-07T22:56:17,003][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added 
{{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 3, reason: Publication{term=1, version=3} [2021-07-07T22:56:29,011][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-coordinating-only-65cbf8cbbd-zpf5h}{_QU-u0zfQce3VtO7Oucdug}{HF1ELcSdQuS5GO-hEvzu8A}{fd00:100:0:0:0:0:0:9d51}{[fd00:100::9d51]:9300}{r} join existing leader], term: 1, version: 5, delta: added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-zpf5h}{_QU-u0zfQce3VtO7Oucdug}{HF1ELcSdQuS5GO-hEvzu8A}{fd00:100:0:0:0:0:0:9d51}{[fd00:100::9d51]:9300}{r}} [2021-07-07T22:56:29,401][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-coordinating-only-65cbf8cbbd-zpf5h}{_QU-u0zfQce3VtO7Oucdug}{HF1ELcSdQuS5GO-hEvzu8A}{fd00:100:0:0:0:0:0:9d51}{[fd00:100::9d51]:9300}{r}}, term: 1, version: 5, reason: Publication{term=1, version=5} [2021-07-07T22:57:02,228][INFO ][o.e.c.s.ClusterSettings ] [dev-sdnrdb-master-0] updating [action.auto_create_index] from [true] to [false] [2021-07-07T22:57:04,195][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [faultlog-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:57:16,992][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][205] overhead, spent [302ms] collecting in the last [1s] [2021-07-07T22:57:20,099][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [historicalperformance15min-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:57:24,903][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [mediator-server-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:57:31,688][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [historicalperformance24h-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:57:35,691][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [guicutthrough-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:57:40,499][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [eventlog-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:57:46,911][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [faultcurrent-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:57:51,598][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [maintenancemode-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:57:56,669][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[maintenancemode-v5][0], [maintenancemode-v5][2], [maintenancemode-v5][1], [maintenancemode-v5][4]]]). [2021-07-07T22:57:57,090][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [inventoryequipment-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:58:01,743][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [networkelement-connection-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:58:08,779][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[networkelement-connection-v5][1], [networkelement-connection-v5][2], [networkelement-connection-v5][0]]]). 
[2021-07-07T22:58:09,097][INFO ][o.e.c.m.MetadataCreateIndexService] [dev-sdnrdb-master-0] [connectionlog-v5] creating index, cause [api], templates [], shards [5]/[1] [2021-07-07T22:58:13,316][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[connectionlog-v5][0], [connectionlog-v5][3]]]). [2021-07-07T23:10:24,337][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [16031ms] ago, timed out [6013ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [4539] [2021-07-07T23:10:24,341][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [17031ms] ago, timed out [2009ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [4526] [2021-07-07T23:11:07,326][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-07-07T23:11:12,510][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15070ms] ago, timed out [5013ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [4717] [2021-07-07T23:11:14,129][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [21876ms] ago, timed out [6816ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [4693] [2021-07-07T23:12:08,646][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15617ms] ago, timed out [5604ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [4925] [2021-07-07T23:12:12,525][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-07-07T23:12:15,511][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [33042ms] ago, timed out [18024ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [4889] [2021-07-07T23:12:50,291][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 77, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-07T23:12:51,701][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [33699ms] ago, timed out [23679ms] ago, action 
[internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [5005] [2021-07-07T23:12:53,269][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 77, reason: Publication{term=1, version=77} [2021-07-07T23:12:53,608][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][0] primary-replica resync completed with 0 operations [2021-07-07T23:12:53,688][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][2] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,005][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][1] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,185][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [eventlog-v5][4] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,196][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,289][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][2] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,311][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][0] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,393][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][0] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,499][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,514][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,592][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][0] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,597][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [55.3s] (37 delayed shards) [2021-07-07T23:12:54,890][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,902][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][1] primary-replica resync completed with 0 operations [2021-07-07T23:12:54,903][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][4] primary-replica resync completed with 0 operations [2021-07-07T23:13:50,892][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [uMCmVXIlQvyDislWfLorOw] [2021-07-07T23:13:50,895][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [Ibg20XzER2eG2hcLTlvzVw] [2021-07-07T23:13:51,894][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][0] marking unavailable shards as stale: [8GJs-fhRTlCazNYyrU91gQ] [2021-07-07T23:13:51,895][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][0] marking unavailable shards as stale: [lae_zFQaQwK8mHgEo6w3_w] [2021-07-07T23:13:54,014][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] 
[networkelement-connection-v5][2] marking unavailable shards as stale: [QDkK8SZMQ3WDIA0yjvzrsA] [2021-07-07T23:13:54,014][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [kfuOEljwR1atSv_XVzjL8A] [2021-07-07T23:13:54,710][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [jansw0DiRMqPOKO50lsSlQ] [2021-07-07T23:13:56,308][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [WQ3AY4GETk2BqNx-pQxEEg] [2021-07-07T23:13:57,770][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [B8naNIDkToWcvQiz_L4JYA] [2021-07-07T23:13:57,771][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [XViU7F1YTACvsbp_4-Oqbw] [2021-07-07T23:13:59,408][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][0] marking unavailable shards as stale: [0fJSs783R4ygZbVKjJ9PnQ] [2021-07-07T23:13:59,889][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [XLCfGB6_Sj6-Wq22Ijv-Lg] [2021-07-07T23:14:00,787][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [aqq3jseTTSOOVZZKc99erg] [2021-07-07T23:14:00,788][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][0] marking unavailable shards as stale: [6kKWwriDRU6DHfeCJO85vA] [2021-07-07T23:14:03,907][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [J0C28PYqTAKKA9f5abv3Pw] [2021-07-07T23:14:04,210][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [c9Qs0x_TQhinJ7W-43841w] [2021-07-07T23:14:04,211][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [fLbJKSqdQ6u033xgJE2wdQ] [2021-07-07T23:14:04,708][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [9UDm3BaHRgevzU0dOrcIBw] [2021-07-07T23:14:07,470][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][0] marking unavailable shards as stale: [Mjla0Vp-QwWQ6bUYJGi5Qw] [2021-07-07T23:14:08,104][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [qQ9xDLCDQay8xKlUQQDK_w] [2021-07-07T23:14:08,105][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [jmqc9Z3_SPWX64Q3YQpliQ] [2021-07-07T23:14:08,106][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [mNHBvko1Tb2qfBjAbcB7_g] [2021-07-07T23:14:10,486][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [TiKL9ZxZTX6fx_4Rmxkswg] [2021-07-07T23:14:10,911][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [_WauInPaQliEDqQm3OC69g] [2021-07-07T23:14:10,912][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [yWWhwMeCQ0SX0JVgk3us4g] [2021-07-07T23:14:10,913][WARN 
][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][0] marking unavailable shards as stale: [knZKzWiOQwWcY0U5AkHhcA] [2021-07-07T23:14:13,905][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [pmw5ISb5SsCKe2Y09vAjLA] [2021-07-07T23:14:14,287][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [7yHO52WNTTuC3E3e1rTlsg] [2021-07-07T23:14:14,288][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [unZfc0CAR2ejsRML0XgGPA] [2021-07-07T23:14:14,288][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][1] marking unavailable shards as stale: [ZYnSnTRJRb2_wF3bNKhZgA] [2021-07-07T23:14:16,286][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][0] marking unavailable shards as stale: [pNK8alkiSuOx5qr4VjALSA] [2021-07-07T23:14:16,589][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][0] marking unavailable shards as stale: [JEYPT3oyS-S4qBQY44G75A] [2021-07-07T23:14:16,589][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [oqhc3CDCStGPb-4ipoeVuA] [2021-07-07T23:14:19,199][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [ZKE0XREnRy2wD27aVqidkg] [2021-07-07T23:14:19,499][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [k1bFxBn_SC2Iw4XtW2znIw] [2021-07-07T23:14:20,088][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [Ac0YrYIJSdO69RnQqDZ8Wg] [2021-07-07T23:14:20,089][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [-eEA0SkYTyKupUTqM0eJuQ] [2021-07-07T23:14:21,788][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultlog-v5][3], [faultlog-v5][4]]]). 
[2021-07-07T23:28:31,549][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 134, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-07T23:28:41,557][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [134] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-master-1}{6jo_xEULR8iaZ46qgUoULg}{L15YYS1rRHyUpvUuDD3tWw}{fd00:100:0:0:0:0:0:d0cb}{[fd00:100::d0cb]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-zpf5h}{_QU-u0zfQce3VtO7Oucdug}{HF1ELcSdQuS5GO-hEvzu8A}{fd00:100:0:0:0:0:0:9d51}{[fd00:100::9d51]:9300}{r} [SENT_APPLY_COMMIT] [2021-07-07T23:29:01,555][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 134, reason: Publication{term=1, version=134} [2021-07-07T23:29:01,564][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30.1s] publication of cluster state version [134] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-master-1}{6jo_xEULR8iaZ46qgUoULg}{L15YYS1rRHyUpvUuDD3tWw}{fd00:100:0:0:0:0:0:d0cb}{[fd00:100::d0cb]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-zpf5h}{_QU-u0zfQce3VtO7Oucdug}{HF1ELcSdQuS5GO-hEvzu8A}{fd00:100:0:0:0:0:0:9d51}{[fd00:100::9d51]:9300}{r} [SENT_APPLY_COMMIT] [2021-07-07T23:29:09,759][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [38327ms] ago, timed out [28295ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [9782] [2021-07-07T23:29:11,597][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [135] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-07-07T23:29:31,625][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [135] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-07-07T23:29:31,687][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 136, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-07T23:29:32,066][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed 
{{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 136, reason: Publication{term=1, version=136} [2021-07-07T23:31:57,447][WARN ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] failed to validate incoming join request from node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}] org.elasticsearch.transport.ReceiveTimeoutTransportException: [dev-sdnrdb-master-2][[fd00:100::d661]:9300][internal:cluster/coordination/join/validate] request_id [10433] timed out after [60096ms] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] [2021-07-07T23:33:23,715][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [146301ms] ago, timed out [86205ms] ago, action [internal:cluster/coordination/join/validate], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [10433] [2021-07-07T23:40:25,450][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 137, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-07T23:40:35,456][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [137] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-master-1}{6jo_xEULR8iaZ46qgUoULg}{L15YYS1rRHyUpvUuDD3tWw}{fd00:100:0:0:0:0:0:d0cb}{[fd00:100::d0cb]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-zpf5h}{_QU-u0zfQce3VtO7Oucdug}{HF1ELcSdQuS5GO-hEvzu8A}{fd00:100:0:0:0:0:0:9d51}{[fd00:100::9d51]:9300}{r} [SENT_APPLY_COMMIT] [2021-07-07T23:40:55,458][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 137, reason: Publication{term=1, version=137} [2021-07-07T23:40:55,468][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [137] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST], {dev-sdnrdb-master-1}{6jo_xEULR8iaZ46qgUoULg}{L15YYS1rRHyUpvUuDD3tWw}{fd00:100:0:0:0:0:0:d0cb}{[fd00:100::d0cb]:9300}{dmr} [SENT_APPLY_COMMIT], {dev-sdnrdb-coordinating-only-65cbf8cbbd-zpf5h}{_QU-u0zfQce3VtO7Oucdug}{HF1ELcSdQuS5GO-hEvzu8A}{fd00:100:0:0:0:0:0:9d51}{[fd00:100::9d51]:9300}{r} [SENT_APPLY_COMMIT] [2021-07-07T23:41:05,505][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of 
cluster state version [138] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-07-07T23:41:10,468][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-07-07T23:41:20,210][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [43717ms] ago, timed out [33705ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [12909] [2021-07-07T23:41:20,212][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [32704ms] ago, timed out [22634ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [12956] [2021-07-07T23:41:20,213][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [54769ms] ago, timed out [44718ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [12853] [2021-07-07T23:41:22,009][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [19226ms] ago, timed out [4403ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [13033] [2021-07-07T23:41:22,010][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [26497ms] ago, timed out [11412ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [12990] [2021-07-07T23:41:25,533][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30.1s] publication of cluster state version [138] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-07-07T23:41:25,538][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 139, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-07T23:41:25,632][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 139, reason: Publication{term=1, version=139} [2021-07-07T23:41:25,651][WARN ][o.e.c.c.Coordinator ] [dev-sdnrdb-master-0] failed to validate incoming join request from node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}] org.elasticsearch.transport.NodeDisconnectedException: 
[dev-sdnrdb-master-2][[fd00:100::d661]:9300][internal:cluster/coordination/join/validate] disconnected [2021-07-08T00:30:31,689][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 140, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-08T00:30:34,105][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 140, reason: Publication{term=1, version=140} [2021-07-08T00:38:04,065][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18432ms] ago, timed out [8425ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [28431] [2021-07-08T00:41:14,849][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [26843ms] ago, timed out [16835ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [29205] [2021-07-08T00:41:14,853][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15834ms] ago, timed out [5825ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [29252] [2021-07-08T00:42:50,306][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [10410ms] ago, timed out [401ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [29682] [2021-07-08T00:44:39,599][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 202, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-08T00:44:41,572][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 202, reason: Publication{term=1, version=202} [2021-07-08T00:44:41,892][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][2] primary-replica resync completed with 0 operations [2021-07-08T00:44:41,894][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][1] primary-replica resync completed with 0 operations [2021-07-08T00:44:42,006][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] primary-replica resync completed with 0 operations [2021-07-08T00:44:42,103][INFO ][o.e.i.s.IndexShard ] 
[dev-sdnrdb-master-0] [historicalperformance15min-v5][3] primary-replica resync completed with 0 operations [2021-07-08T00:44:42,194][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][3] primary-replica resync completed with 0 operations [2021-07-08T00:44:42,204][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][4] primary-replica resync completed with 0 operations [2021-07-08T00:44:42,213][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][3] primary-replica resync completed with 0 operations [2021-07-08T00:44:42,307][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][3] primary-replica resync completed with 0 operations [2021-07-08T00:44:42,310][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57.2s] (37 delayed shards) [2021-07-08T00:44:42,491][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] primary-replica resync completed with 0 operations [2021-07-08T00:44:42,506][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] primary-replica resync completed with 0 operations [2021-07-08T00:45:17,619][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 203, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-08T00:45:27,624][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [203] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-07-08T00:45:47,624][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 203, reason: Publication{term=1, version=203} [2021-07-08T00:45:47,693][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [203] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-07-08T00:45:57,805][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [204] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-07-08T00:46:17,811][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [204] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST] [2021-07-08T00:46:17,822][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 205, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-08T00:46:18,167][INFO 
][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 205, reason: Publication{term=1, version=205} [2021-07-08T00:46:18,409][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [tbQAwCcMQI2Vfqia83RniQ] [2021-07-08T00:46:18,410][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [MVIO4MP6RUuj2qkxHo4A3w] [2021-07-08T00:46:19,490][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [DV6uENlqS5anQ99GVDwR5w] [2021-07-08T00:46:19,491][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [ujhgx6XGTLWmGCS6aNc33Q] [2021-07-08T00:46:21,518][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][1] marking unavailable shards as stale: [1Ew9fnB7Q5uOCVmetenXWQ] [2021-07-08T00:46:21,905][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [NpaEeDumTaCZ_yca6YR97Q] [2021-07-08T00:46:21,906][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [FQ15PETaRtO_9RZqEhUmPQ] [2021-07-08T00:46:22,721][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [4FQkdTtEReOWmg0SLAEoNg] [2021-07-08T00:46:24,508][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [CpTnKeHqTCuBt-iG3vByoQ] [2021-07-08T00:46:26,286][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [5UBhtsbfRVeYuZf2FeIHpg] [2021-07-08T00:46:27,345][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [Fd8KzIulR7u_wGt_tue_1A] [2021-07-08T00:46:27,346][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [fjGFkw4dT42kpo-8DVzxUA] [2021-07-08T00:46:27,934][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [tVCWb38-StWC33pn06OB9g] [2021-07-08T00:46:29,341][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [86ToXeSUSdKMPLKam6w7nw] [2021-07-08T00:46:29,773][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [UZDcIFKXSHet63n9fJ_BtA] [2021-07-08T00:46:30,789][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [B6dQ3UpSTY6FjpKr2eDdnQ] [2021-07-08T00:46:30,790][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [GzINuwu4R2mRx5qKb8f4Lg] [2021-07-08T00:46:32,686][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [tQ8CCQA1R466QHJDeErKGg] [2021-07-08T00:46:33,367][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [S4n57VQ1SsqFUU1APLPgKA] [2021-07-08T00:46:34,308][WARN 
][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][4] marking unavailable shards as stale: [Yf8t1N_6QMiwpAJPDCYWeQ] [2021-07-08T00:46:34,309][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [2fi2xxP2RteULSu-jTvDFw] [2021-07-08T00:46:34,624][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [zomObj-1Ty-4vQXSUiN1lw] [2021-07-08T00:46:35,742][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [NCO8hXCrSXa95lGcVW57MQ] [2021-07-08T00:46:36,067][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [jsaF--4YRI24UPdbn-VeMQ] [2021-07-08T00:46:37,352][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [29F2XZk3Rs2VD_CT78cxWQ] [2021-07-08T00:46:37,353][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [ohGqcnhtTjGyd4BgBC2uzw] [2021-07-08T00:46:37,713][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [Cq94XvdNSCWfTsMLi-gd2A] [2021-07-08T00:46:39,558][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [AGAoyScISTaDVoZ6RjwrLQ] [2021-07-08T00:46:39,916][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [4o8WCzk8QWGPctDXnlIWeA] [2021-07-08T00:46:40,314][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [s8BjTtsMTU205TftTJZP2g] [2021-07-08T00:46:42,098][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [4aQhEBWrQuCzSFm26QNM8A] [2021-07-08T00:46:42,791][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [QBX5_J5HSruE-ISgbhAfJw] [2021-07-08T00:46:43,921][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [chALYWloRJuzXrT2HWITHg] [2021-07-08T00:46:43,922][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [uIBkWEKpTcK7OrPdze3S8g] [2021-07-08T00:46:46,611][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [mR6QFG3oQx6lZL8X9h6QLg] [2021-07-08T00:46:48,195][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [IEP_c99mQm6vfT0NCQyO0w] [2021-07-08T00:46:48,889][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [TJnRE3BaTUykJ7ejI12Fsw] [2021-07-08T00:46:49,693][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultlog-v5][4]]]). 
[2021-07-08T00:46:50,319][INFO ][o.e.m.j.JvmGcMonitorService] [dev-sdnrdb-master-0] [gc][6772] overhead, spent [593ms] collecting in the last [1.2s] [2021-07-08T00:47:47,823][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 269, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-08T00:47:50,503][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 269, reason: Publication{term=1, version=269} [2021-07-08T00:48:00,517][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [270] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-07-08T00:48:20,541][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [270] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT] [2021-07-08T00:53:02,931][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-07-08T00:53:04,616][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 333, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}} [2021-07-08T00:53:06,372][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 333, reason: Publication{term=1, version=333} [2021-07-08T00:53:06,488][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][1] primary-replica resync completed with 0 operations [2021-07-08T00:53:06,498][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultcurrent-v5][4] primary-replica resync completed with 0 operations [2021-07-08T00:53:06,598][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [mediator-server-v5][3] primary-replica resync completed with 0 operations [2021-07-08T00:53:06,606][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] primary-replica resync completed with 0 operations [2021-07-08T00:53:06,709][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [faultlog-v5][1] primary-replica resync completed with 0 operations [2021-07-08T00:53:06,787][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [connectionlog-v5][4] primary-replica resync completed with 0 operations [2021-07-08T00:53:06,805][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] primary-replica resync completed with 0 operations [2021-07-08T00:53:06,911][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [guicutthrough-v5][2] primary-replica resync completed with 0 operations 
[2021-07-08T00:53:06,919][INFO ][o.e.c.r.DelayedAllocationService] [dev-sdnrdb-master-0] scheduling reroute for delayed shards in [57.6s] (37 delayed shards)
[2021-07-08T00:53:07,094][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [maintenancemode-v5][3] primary-replica resync completed with 0 operations
[2021-07-08T00:53:07,111][INFO ][o.e.i.s.IndexShard ] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] primary-replica resync completed with 0 operations
[2021-07-08T00:54:04,820][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][3] marking unavailable shards as stale: [RsntoDc2QvWzEnvz-iNd_g]
[2021-07-08T00:54:05,308][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][2] marking unavailable shards as stale: [nZ1lESKmTP2Wjcbjets3iA]
[2021-07-08T00:54:05,308][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][4] marking unavailable shards as stale: [8FiRE-L0QjqrDS0x7KXbxw]
[2021-07-08T00:54:05,309][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [connectionlog-v5][3] marking unavailable shards as stale: [NvKXOBnIRHOwbMKkeQeqxQ]
[2021-07-08T00:54:06,914][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][2] marking unavailable shards as stale: [kM3DE5bTRDeh9rNLD0dt5g]
[2021-07-08T00:54:07,135][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [networkelement-connection-v5][4] marking unavailable shards as stale: [YfYbvl4kTeunSW7F8saJGw]
[2021-07-08T00:54:08,631][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][3] marking unavailable shards as stale: [YrpAX2jxSm2r4IVqcWZbew]
[2021-07-08T00:54:08,631][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][1] marking unavailable shards as stale: [bZrLr1WXTZyGJi3PyX1Msw]
[2021-07-08T00:54:09,066][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][2] marking unavailable shards as stale: [5bbs13kRQEyWvlqaWVPoyw]
[2021-07-08T00:54:10,338][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [inventoryequipment-v5][4] marking unavailable shards as stale: [jE5PiSb1SJWBw2IAsoKRhQ]
[2021-07-08T00:54:10,837][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][2] marking unavailable shards as stale: [gceUk4HhRJO9Ho0gnGMj_g]
[2021-07-08T00:54:11,832][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][3] marking unavailable shards as stale: [CMO2X-a1Sc61TNqcWWk-JA]
[2021-07-08T00:54:11,833][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][1] marking unavailable shards as stale: [5phzig_cQsK_q271xisUCQ]
[2021-07-08T00:54:12,434][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][2] marking unavailable shards as stale: [a3RkxLSMTUO_gQuVAHPXug]
[2021-07-08T00:54:13,716][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [maintenancemode-v5][4] marking unavailable shards as stale: [yd5_WRG3QJiwseKYYVMjuw]
[2021-07-08T00:54:14,332][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][3] marking unavailable shards as stale: [HeBMlEo1Qn-5LWIwsT1xkQ]
[2021-07-08T00:54:14,920][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][1] marking unavailable shards as stale: [QTYYaei6QEelVUEWaLbHtw]
[2021-07-08T00:54:14,921][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultcurrent-v5][4] marking unavailable shards as stale: [Lo0_o9JvQqOIs9a0hurz-g]
[2021-07-08T00:54:15,777][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][2] marking unavailable shards as stale: [0ScmXCujS9qrlgd2NSym-Q]
[2021-07-08T00:54:16,499][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][4] marking unavailable shards as stale: [a3BOLFEmRJSmqsDHeR5v0Q]
[2021-07-08T00:54:18,668][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [eventlog-v5][3] marking unavailable shards as stale: [LgAz-9QKTMWKu7twf6B10g]
[2021-07-08T00:54:18,668][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][2] marking unavailable shards as stale: [z3L0UvkUTlStmHV53XolEg]
[2021-07-08T00:54:19,765][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][2] marking unavailable shards as stale: [rx2i27zZQVi-euoMA65-0w]
[2021-07-08T00:54:21,428][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][3] marking unavailable shards as stale: [OO86oM1kRgqBDf14unr0HA]
[2021-07-08T00:54:22,288][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [guicutthrough-v5][3] marking unavailable shards as stale: [SE5tjUKiQ4KBNUEQRcoeSQ]
[2021-07-08T00:54:22,904][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][1] marking unavailable shards as stale: [I_dl84bCRSeFtWIzmNNQww]
[2021-07-08T00:54:22,905][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][4] marking unavailable shards as stale: [3xzQUPLxRO6JXnjIIMKSgw]
[2021-07-08T00:54:24,017][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][3] marking unavailable shards as stale: [ZvB-Ism7RP67qTLHU5JU9A]
[2021-07-08T00:54:24,907][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance24h-v5][4] marking unavailable shards as stale: [GvZpnBwCTK2llEPqnKYufA]
[2021-07-08T00:54:25,290][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][2] marking unavailable shards as stale: [KYmYE53RR3mPx2a_fdD_dw]
[2021-07-08T00:54:28,071][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][2] marking unavailable shards as stale: [n9cHByF_Sp69He00i3NMBg]
[2021-07-08T00:54:28,398][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [mediator-server-v5][3] marking unavailable shards as stale: [ZkY9wKGHT6mY-wkonSg0fg]
[2021-07-08T00:54:29,526][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][3] marking unavailable shards as stale: [Gg3gn7zGRKCW_7fgAiYB0A]
[2021-07-08T00:54:29,891][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][4] marking unavailable shards as stale: [36QAVFGXQkKD3Wyg8nkFTw]
[2021-07-08T00:54:30,403][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [historicalperformance15min-v5][2] marking unavailable shards as stale: [1KZs6vaJQvicCbTUaryuqw]
[2021-07-08T00:54:31,329][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][4] marking unavailable shards as stale: [_IMPFNNQTr2EIs06qmxLiQ]
[2021-07-08T00:54:31,876][WARN ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] [faultlog-v5][1] marking unavailable shards as stale: [IWsU5u0yQe2ZdzLX78PbCg]
[2021-07-08T00:54:32,201][INFO ][o.e.c.r.a.AllocationService] [dev-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultlog-v5][1]]]).
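This is the second full stale-marking cycle within minutes, triggered by the same peer leaving and rejoining. When a replica stays unassigned instead of recovering like this, the allocation explain API reports why; a minimal sketch, again with a placeholder endpoint address (with no request body the API describes an arbitrary unassigned shard and returns HTTP 400 if everything is assigned):

# Sketch: ask Elasticsearch why a shard is currently unassigned.
import json
from urllib.error import HTTPError
from urllib.request import urlopen

ES = "http://localhost:9200"  # placeholder

try:
    with urlopen(f"{ES}/_cluster/allocation/explain") as resp:
        explain = json.load(resp)
    print(explain["index"], explain["shard"],
          explain.get("unassigned_info", {}).get("reason"))
except HTTPError as err:
    # A 400 here simply means there is nothing unassigned to explain.
    print("no unassigned shards:", err.code)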
[2021-07-08T00:56:22,185][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 398, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T00:56:32,194][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [398] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-07-08T00:56:52,196][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 398, reason: Publication{term=1, version=398}
[2021-07-08T00:56:52,209][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [398] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-07-08T00:57:02,297][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [9.8s] publication of cluster state version [399] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-07-08T00:57:07,209][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-07-08T00:57:08,397][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [44177ms] ago, timed out [34314ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [35407]
[2021-07-08T00:57:08,412][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [33307ms] ago, timed out [23297ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [35460]
[2021-07-08T00:57:08,413][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [22296ms] ago, timed out [12221ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [35505]
[2021-07-08T00:57:18,109][WARN ][o.e.c.InternalClusterInfoService] [dev-sdnrdb-master-0] Failed to update node information for ClusterInfoUpdateJob within 15s timeout
[2021-07-08T00:57:18,337][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [26097ms] ago, timed out [11012ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [35531]
[2021-07-08T00:57:21,427][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [18423ms] ago, timed out [3403ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [35589]
[2021-07-08T00:57:22,320][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [29.8s] publication of cluster state version [399] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-07-08T00:57:22,325][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 400, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T00:57:22,466][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 400, reason: Publication{term=1, version=400}
[2021-07-08T01:01:17,914][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 401, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T01:01:27,922][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [401] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:01:45,415][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [12015ms] ago, timed out [2001ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [36768]
[2021-07-08T01:01:47,923][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 401, reason: Publication{term=1, version=401}
[2021-07-08T01:01:47,931][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [401] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:01:57,940][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [402] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:02:09,549][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15834ms] ago, timed out [5819ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [36880]
[2021-07-08T01:02:17,963][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [402] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:02:25,613][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 403, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T01:02:25,670][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 403, reason: Publication{term=1, version=403}
[2021-07-08T01:07:02,709][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 404, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T01:07:12,713][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [404] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:07:32,713][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 404, reason: Publication{term=1, version=404}
[2021-07-08T01:07:32,719][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [404] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:07:42,796][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [405] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:08:02,822][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [405] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:08:33,313][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [11618ms] ago, timed out [1602ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [38608]
[2021-07-08T01:09:02,720][WARN ][o.e.c.c.LagDetector ] [dev-sdnrdb-master-0] node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}] is lagging at cluster state version [0], although publication of cluster state version [404] completed [1.5m] ago
[2021-07-08T01:09:02,728][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: lagging], term: 1, version: 406, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T01:09:02,865][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 406, reason: Publication{term=1, version=406}
[2021-07-08T01:10:34,018][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 407, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T01:10:44,025][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [407] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:10:58,022][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [13223ms] ago, timed out [3208ms] ago, action [internal:coordination/fault_detection/follower_check], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [39263]
[2021-07-08T01:11:01,633][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 407, reason: Publication{term=1, version=407}
[2021-07-08T01:11:11,641][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [408] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:11:31,669][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [408] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_APPLY_COMMIT]
[2021-07-08T01:12:04,202][WARN ][o.e.t.TransportService ] [dev-sdnrdb-master-0] Received response for a request that has timed out, sent [15216ms] ago, timed out [200ms] ago, action [cluster:monitor/nodes/stats[n]], node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}], id [39585]
[2021-07-08T01:13:01,671][WARN ][o.e.c.c.LagDetector ] [dev-sdnrdb-master-0] node [{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}] is lagging at cluster state version [407], although publication of cluster state version [408] completed [1.5m] ago
[2021-07-08T01:13:01,793][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: lagging], term: 1, version: 409, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T01:13:02,371][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 409, reason: Publication{term=1, version=409}
[2021-07-08T01:20:47,404][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-join[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} join existing leader], term: 1, version: 410, delta: added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T01:20:57,417][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [410] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-07-08T01:21:17,416][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] added {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 410, reason: Publication{term=1, version=410}
[2021-07-08T01:21:17,425][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [410] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-07-08T01:21:27,494][INFO ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [10s] publication of cluster state version [411] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-07-08T01:21:47,520][WARN ][o.e.c.c.C.CoordinatorPublication] [dev-sdnrdb-master-0] after [30s] publication of cluster state version [411] is still waiting for {dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} [SENT_PUBLISH_REQUEST]
[2021-07-08T01:21:47,525][INFO ][o.e.c.s.MasterService ] [dev-sdnrdb-master-0] node-left[{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr} reason: followers check retry count exceeded], term: 1, version: 412, delta: removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}
[2021-07-08T01:21:47,673][INFO ][o.e.c.s.ClusterApplierService] [dev-sdnrdb-master-0] removed {{dev-sdnrdb-master-2}{0g5RvV_5RUSBSSIAV_l3Eg}{YsIpEMhtRJiGrtHDLdg4cg}{fd00:100:0:0:0:0:0:d661}{[fd00:100::d661]:9300}{dmr}}, term: 1, version: 412, reason: Publication{term=1, version=412}
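The pattern above is dev-sdnrdb-master-2 flapping: it joins, publications to it stall at the 10s/30s warning thresholds, and the master removes it again for "followers check retry count exceeded" or "lagging" roughly 90 seconds ("1.5m") after the publication completed. Those thresholds line up with the stock Elasticsearch 7.x discovery settings (cluster.publish.timeout, cluster.fault_detection.follower_check.*, cluster.follower_lag.timeout). A small sketch, with a placeholder endpoint address, to read the values actually in effect and to see which nodes the master currently holds:

# Sketch: dump the discovery/publication timeouts in effect and the current node list.
import json
from urllib.request import urlopen

ES = "http://localhost:9200"  # placeholder; substitute the sdnrdb service address

with urlopen(f"{ES}/_cluster/settings?include_defaults=true&flat_settings=true") as resp:
    settings = json.load(resp)

# Effective value = defaults overridden by persistent, then transient settings.
merged = {**settings.get("defaults", {}),
          **settings.get("persistent", {}),
          **settings.get("transient", {})}
for key in sorted(merged):
    if "fault_detection" in key or key.startswith("cluster.publish") or "follower_lag" in key:
        print(key, "=", merged[key])

# Which nodes are currently part of the cluster, and which one is master?
with urlopen(f"{ES}/_cat/nodes?h=name,ip,master&format=json") as resp:
    for node in json.load(resp):
        print(node["name"], node["ip"], node["master"])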