By type
12:35:24.71
12:35:24.72 Welcome to the Bitnami elasticsearch container
12:35:24.81 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
12:35:24.82 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
12:35:24.91
12:35:24.92 INFO ==> ** Starting Elasticsearch setup **
12:35:25.41 INFO ==> Configuring/Initializing Elasticsearch...
12:35:25.82 INFO ==> Setting default configuration
12:35:26.02 INFO ==> Configuring Elasticsearch cluster settings...
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
12:35:45.81 INFO ==> ** Elasticsearch setup finished! **
12:35:46.03 INFO ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-04-21T12:36:34,210][INFO ][o.e.n.Node ] [onap-sdnrdb-master-2] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/4.19.0-13-cloud-amd64/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2021-04-21T12:36:34,214][INFO ][o.e.n.Node ] [onap-sdnrdb-master-2] JVM home [/opt/bitnami/java]
[2021-04-21T12:36:34,214][INFO ][o.e.n.Node ] [onap-sdnrdb-master-2] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-4518186589263912378, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-04-21T12:36:54,809][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [aggs-matrix-stats]
[2021-04-21T12:36:54,811][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [analysis-common]
[2021-04-21T12:36:54,812][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [geo]
[2021-04-21T12:36:54,812][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [ingest-common]
[2021-04-21T12:36:54,813][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [ingest-geoip]
[2021-04-21T12:36:54,814][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [ingest-user-agent]
[2021-04-21T12:36:54,814][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [kibana]
[2021-04-21T12:36:54,815][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [lang-expression]
[2021-04-21T12:36:54,816][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [lang-mustache]
[2021-04-21T12:36:54,816][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [lang-painless]
[2021-04-21T12:36:54,817][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [mapper-extras]
[2021-04-21T12:36:54,818][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [parent-join]
[2021-04-21T12:36:54,818][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [percolator]
[2021-04-21T12:36:54,818][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [rank-eval]
[2021-04-21T12:36:54,819][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [reindex]
[2021-04-21T12:36:54,820][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [repository-url]
[2021-04-21T12:36:54,909][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [tasks]
[2021-04-21T12:36:54,910][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded module [transport-netty4]
[2021-04-21T12:36:54,912][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-2] loaded plugin [repository-s3]
[2021-04-21T12:36:55,613][INFO ][o.e.e.NodeEnvironment ] [onap-sdnrdb-master-2] using [1] data paths, mounts [[/bitnami/elasticsearch/data (/dev/longhorn/pvc-7ca7ac26-b3e2-4b51-a690-7af275aeccd8)]], net usable_space [7.7gb], net total_space [7.8gb], types [ext4]
[2021-04-21T12:36:55,614][INFO ][o.e.e.NodeEnvironment ] [onap-sdnrdb-master-2] heap size [123.7mb], compressed ordinary object pointers [true]
[2021-04-21T12:36:56,219][INFO ][o.e.n.Node ] [onap-sdnrdb-master-2] node name [onap-sdnrdb-master-2], node ID [v78U9gF1SGuwMyQkm3VBSg], cluster name [sdnrdb-cluster]
[2021-04-21T12:37:48,713][INFO ][o.e.t.NettyAllocator ] [onap-sdnrdb-master-2] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2021-04-21T12:37:49,918][INFO ][o.e.d.DiscoveryModule ] [onap-sdnrdb-master-2] using discovery type [zen] and seed hosts providers [settings]
[2021-04-21T12:37:55,012][WARN ][o.e.g.DanglingIndicesState] [onap-sdnrdb-master-2] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-04-21T12:37:57,416][INFO ][o.e.n.Node ] [onap-sdnrdb-master-2] initialized
[2021-04-21T12:37:57,417][INFO ][o.e.n.Node ] [onap-sdnrdb-master-2] starting ...
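The JVM arguments above show this node running with a 128 MB heap (-Xms128m/-Xmx128m) and the CMS collector that the JDK already reports as deprecated; the NodeEnvironment line likewise shows only ~7.8 GB of Longhorn-backed disk. A quick way to verify the node actually joined sdnrdb-cluster and to read the effective JVM settings back from the running process is the standard Elasticsearch REST API. A minimal sketch, assuming kubectl access to the pod named in this log; the onap namespace is an assumption, and the heap would normally be raised through the deployment values rather than inside the pod:

  # Forward the HTTP port of the pod from this log (namespace is an assumption).
  kubectl -n onap port-forward pod/onap-sdnrdb-master-2 9200:9200 &

  # Overall cluster state: status, number of nodes, unassigned shards.
  curl -s 'localhost:9200/_cluster/health?pretty'

  # Effective heap and JVM arguments as the node itself reports them.
  curl -s 'localhost:9200/_nodes/_local/jvm?pretty'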
[2021-04-21T12:37:58,914][INFO ][o.e.m.j.JvmGcMonitorService] [onap-sdnrdb-master-2] [gc][1] overhead, spent [316ms] collecting in the last [1.2s]
[2021-04-21T12:38:00,413][INFO ][o.e.t.TransportService ] [onap-sdnrdb-master-2] publish_address {10.233.72.129:9300}, bound_addresses {0.0.0.0:9300}
[2021-04-21T12:38:04,316][INFO ][o.e.b.BootstrapChecks ] [onap-sdnrdb-master-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-04-21T12:38:05,521][INFO ][o.e.m.j.JvmGcMonitorService] [onap-sdnrdb-master-2] [gc][7] overhead, spent [300ms] collecting in the last [1s]
[2021-04-21T12:38:08,018][INFO ][o.e.c.c.Coordinator ] [onap-sdnrdb-master-2] setting initial configuration to VotingConfiguration{SuJMxetFTPWgE4TiCGoLHQ,v78U9gF1SGuwMyQkm3VBSg,B1knyd-zRNyxN0xYqo4wSw}
[2021-04-21T12:38:10,509][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [], current [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr},{onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr}}, term: 2, version: 4, reason: ApplyCommitRequest{term=2, version=4, sourceNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}}
[2021-04-21T12:38:11,214][INFO ][o.e.h.AbstractHttpServerTransport] [onap-sdnrdb-master-2] publish_address {10.233.72.129:9200}, bound_addresses {0.0.0.0:9200}
[2021-04-21T12:38:11,215][INFO ][o.e.n.Node ] [onap-sdnrdb-master-2] started
[2021-04-21T12:38:35,784][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-coordinating-only-56dfdc4d57-c7d46}{62gydvkgT2yLG6wcHf2lQw}{2QwAz_8KRt2l4Q833BZX0A}{10.233.76.35}{10.233.76.35:9300}{r}}, term: 2, version: 6, reason: ApplyCommitRequest{term=2, version=6, sourceNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}}
[2021-04-21T12:43:28,519][INFO ][o.e.c.s.ClusterSettings ] [onap-sdnrdb-master-2] updating [action.auto_create_index] from [true] to [false]
[2021-04-21T13:46:49,907][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [14211ms] ago, timed out [4203ms] ago, action [internal:coordination/fault_detection/leader_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}], id [8824]
[2021-04-21T14:00:11,596][INFO ][o.e.c.c.Coordinator ] [onap-sdnrdb-master-2] master node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}] failed, restarting discovery
org.elasticsearch.ElasticsearchException: node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}] failed [3] consecutive checks
    at org.elasticsearch.cluster.coordination.LeaderChecker$CheckScheduler$1.handleException(LeaderChecker.java:293) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1073) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [onap-sdnrdb-master-1][10.233.76.132:9300][internal:coordination/fault_detection/leader_check] request_id [10558] timed out after [10008ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1074) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 4 more
[2021-04-21T14:00:11,619][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}], current []}, term: 2, version: 81, reason: becoming candidate: onLeaderFailure
[2021-04-21T14:00:12,419][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} elect leader, {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 82, delta: master node changed {previous [], current [{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}]}
[2021-04-21T14:00:22,616][INFO ][o.e.c.c.C.CoordinatorPublication] [onap-sdnrdb-master-2] after [9.8s] publication of cluster state version [82] is still waiting for {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr} [SENT_PUBLISH_REQUEST], {onap-sdnrdb-coordinating-only-56dfdc4d57-c7d46}{62gydvkgT2yLG6wcHf2lQw}{2QwAz_8KRt2l4Q833BZX0A}{10.233.76.35}{10.233.76.35:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-04-21T14:00:42,619][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [], current [{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}]}, term: 3, version: 82, reason: Publication{term=3, version=82}
[2021-04-21T14:00:42,711][WARN ][o.e.c.c.C.CoordinatorPublication] [onap-sdnrdb-master-2] after [30s] publication of cluster state version [82] is still waiting for {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr} [SENT_PUBLISH_REQUEST], {onap-sdnrdb-coordinating-only-56dfdc4d57-c7d46}{62gydvkgT2yLG6wcHf2lQw}{2QwAz_8KRt2l4Q833BZX0A}{10.233.76.35}{10.233.76.35:9300}{r} [SENT_PUBLISH_REQUEST]
[2021-04-21T14:00:43,854][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [31101ms] ago, timed out [21289ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-coordinating-only-56dfdc4d57-c7d46}{62gydvkgT2yLG6wcHf2lQw}{2QwAz_8KRt2l4Q833BZX0A}{10.233.76.35}{10.233.76.35:9300}{r}], id [10585]
[2021-04-21T14:00:43,855][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [20289ms] ago, timed out [10282ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-coordinating-only-56dfdc4d57-c7d46}{62gydvkgT2yLG6wcHf2lQw}{2QwAz_8KRt2l4Q833BZX0A}{10.233.76.35}{10.233.76.35:9300}{r}], id [10620]
[2021-04-21T14:00:43,981][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [64420ms] ago, timed out [54405ms] ago, action [internal:coordination/fault_detection/leader_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}], id [10532]
[2021-04-21T14:00:43,981][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [53404ms] ago, timed out [43396ms] ago, action [internal:coordination/fault_detection/leader_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}], id [10549]
[2021-04-21T14:00:43,982][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [42395ms] ago, timed out [32387ms] ago, action [internal:coordination/fault_detection/leader_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}], id [10558]
[2021-04-21T14:00:47,289][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [34512ms] ago, timed out [24700ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}], id [10583]
[2021-04-21T14:00:47,796][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [13292ms] ago, timed out [3210ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}], id [10645]
[2021-04-21T14:00:47,797][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [24300ms] ago, timed out [14293ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}], id [10621]
[2021-04-21T14:00:48,813][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr} reason: followers check retry count exceeded], term: 3, version: 84, delta: removed {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}}
[2021-04-21T14:00:50,988][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] removed {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}}, term: 3, version: 84, reason: Publication{term=3, version=84}
[2021-04-21T14:00:51,315][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [historicalperformance15min-v5][2] primary-replica resync completed with 0 operations
[2021-04-21T14:00:51,413][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-2] scheduling reroute for delayed shards in [56.4s] (36 delayed shards)
[2021-04-21T14:00:51,516][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [faultlog-v5][2] primary-replica resync completed with 0 operations
[2021-04-21T14:00:51,715][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [guicutthrough-v5][2] primary-replica resync completed with 0 operations
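From 13:46 onwards the leader checks against onap-sdnrdb-master-1 time out, master-2 declares the leader failed, wins the re-election for term 3, and then struggles to publish cluster state 82 to the remaining nodes before dropping master-1 entirely. While fault detection is flapping like this, it helps to ask the cluster directly who it currently follows; a minimal sketch using the standard _cat APIs over the same port-forward as above:

  # Which node does the cluster answer as elected master right now?
  curl -s 'localhost:9200/_cat/master?v'

  # All nodes with roles and heap pressure; the master column marks the leader.
  curl -s 'localhost:9200/_cat/nodes?v&h=name,ip,node.role,master,heap.percent'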
[2021-04-21T14:00:51,809][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [eventlog-v5][2] primary-replica resync completed with 0 operations
[2021-04-21T14:00:52,211][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr} join existing leader], term: 3, version: 85, delta: added {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}}
[2021-04-21T14:00:53,894][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}}, term: 3, version: 85, reason: Publication{term=3, version=85}
[2021-04-21T14:01:27,294][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] failing shard [failed shard, shard [eventlog-v5][1], node[B1knyd-zRNyxN0xYqo4wSw], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=cnvpPjz1SWmOdwRX0a-fvA], unassigned_info[[reason=NODE_LEFT], at[2021-04-21T14:00:47.911Z], delayed=true, details[node_left [B1knyd-zRNyxN0xYqo4wSw]], allocation_status[no_attempt]], message [failed recovery], failure [RecoveryFailedException[[eventlog-v5][1]: Recovery failed from {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr} into {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}]; nested: RemoteTransportException[[onap-sdnrdb-master-0][10.233.70.176:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[2] failed to send/replay operations]; nested: RemoteTransportException[[onap-sdnrdb-master-1][10.233.76.132:9300][internal:index/shard/recovery/translog_ops]]; nested: UncategorizedExecutionException[Failed execution]; nested: NotSerializableExceptionWrapper[execution_exception: java.io.IOException: Input/output error]; nested: IOException[Input/output error]; ], markAsStale [true]]
org.elasticsearch.indices.recovery.RecoveryFailedException: [eventlog-v5][1]: Recovery failed from {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr} into {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.onException(PeerRecoveryTargetService.java:653) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.handleException(PeerRecoveryTargetService.java:587) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:235) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.70.176:9300][internal:index/shard/recovery/start_recovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: Phase[2] failed to send/replay operations
    at org.elasticsearch.indices.recovery.RecoverySourceHandler$OperationBatchSender.handleError(RecoverySourceHandler.java:796) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler$OperationBatchSender.handleError(RecoverySourceHandler.java:721) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.handleItems(MultiChunkTransfer.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.access$000(MultiChunkTransfer.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer$1.write(MultiChunkTransfer.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.processList(AsyncIOProcessor.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.drainAndProcessAndRelease(AsyncIOProcessor.java:96) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.put(AsyncIOProcessor.java:84) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.addItem(MultiChunkTransfer.java:89) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.lambda$handleItems$4(MultiChunkTransfer.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$3.onFailure(ActionListener.java:118) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$4.onFailure(ActionListener.java:173) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$6.onFailure(ActionListener.java:292) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.RetryableAction$RetryingListener.onFinalFailure(RetryableAction.java:174) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.RetryableAction$RetryingListener.onFailure(RetryableAction.java:166) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 6 more
Caused by: org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.132:9300][internal:index/shard/recovery/translog_ops]
Caused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution
    at org.elasticsearch.common.util.concurrent.FutureUtils.rethrowExecutionException(FutureUtils.java:91) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:83) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:111) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.setException(BaseFuture.java:162) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.onFailure(ListenableFuture.java:135) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.lambda$performTranslogOps$3(PeerRecoveryTargetService.java:408) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:328) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoveryTarget.indexTranslogOperations(RecoveryTarget.java:345) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.performTranslogOps(PeerRecoveryTargetService.java:393) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:352) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:339) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 3 more
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: execution_exception: java.io.IOException: Input/output error
    at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.getValue(BaseFuture.java:273) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:246) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:65) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:76) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:111) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.setException(BaseFuture.java:162) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.onFailure(ListenableFuture.java:135) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.lambda$performTranslogOps$3(PeerRecoveryTargetService.java:408) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:328) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoveryTarget.indexTranslogOperations(RecoveryTarget.java:345) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.performTranslogOps(PeerRecoveryTargetService.java:393) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:352) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:339) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 3 more
Caused by: java.io.IOException: Input/output error
    at sun.nio.ch.FileDispatcherImpl.force0(Native Method) ~[?:?]
    at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:82) ~[?:?]
    at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:461) ~[?:?]
    at org.elasticsearch.index.translog.TranslogWriter.syncUpTo(TranslogWriter.java:376) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.translog.TranslogWriter.sync(TranslogWriter.java:267) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.translog.Translog.trimUnreferencedReaders(Translog.java:1689) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.engine.InternalEngine.revisitIndexDeletionPolicyOnTranslogSynced(InternalEngine.java:594) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.engine.InternalEngine.syncTranslog(InternalEngine.java:545) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.sync(IndexShard.java:3118) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$indexTranslogOperations$2(RecoveryTarget.java:383) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:325) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoveryTarget.indexTranslogOperations(RecoveryTarget.java:345) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.performTranslogOps(PeerRecoveryTargetService.java:393) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:352) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:339) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 3 more
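This recovery failure, and the identical one that follows, bottom out in the same root cause: java.io.IOException: Input/output error thrown from FileDispatcherImpl.force0, i.e. an fsync on the translog failing against the /dev/longhorn/... device mounted at /bitnami/elasticsearch/data. That points at the storage layer underneath Elasticsearch rather than Elasticsearch itself. A minimal sketch for a first look at the volume, reusing the PV name from the NodeEnvironment line above (the onap namespace is again an assumption; deeper digging would continue in Longhorn's own status and the worker node's kernel log):

  # Is the data path still mounted, and does it have space?
  kubectl -n onap exec onap-sdnrdb-master-2 -- df -h /bitnami/elasticsearch/data

  # The /dev/longhorn/pvc-... device is named after its PersistentVolume.
  kubectl get pv pvc-7ca7ac26-b3e2-4b51-a690-7af275aeccd8 -o wide
  kubectl describe pv pvc-7ca7ac26-b3e2-4b51-a690-7af275aeccd8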
[2021-04-21T14:01:28,676][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] failing shard [failed shard, shard [faultcurrent-v5][4], node[B1knyd-zRNyxN0xYqo4wSw], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=NJW_Yd_UTEG3-HD9NBxxYg], unassigned_info[[reason=NODE_LEFT], at[2021-04-21T14:00:47.903Z], delayed=true, details[node_left [B1knyd-zRNyxN0xYqo4wSw]], allocation_status[no_attempt]], message [failed recovery], failure [RecoveryFailedException[[faultcurrent-v5][4]: Recovery failed from {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr} into {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}]; nested: RemoteTransportException[[onap-sdnrdb-master-0][10.233.70.176:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[2] failed to send/replay operations]; nested: RemoteTransportException[[onap-sdnrdb-master-1][10.233.76.132:9300][internal:index/shard/recovery/translog_ops]]; nested: UncategorizedExecutionException[Failed execution]; nested: NotSerializableExceptionWrapper[execution_exception: java.io.IOException: Input/output error]; nested: IOException[Input/output error]; ], markAsStale [true]]
org.elasticsearch.indices.recovery.RecoveryFailedException: [faultcurrent-v5][4]: Recovery failed from {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr} into {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.onException(PeerRecoveryTargetService.java:653) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryResponseHandler.handleException(PeerRecoveryTargetService.java:587) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:235) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.70.176:9300][internal:index/shard/recovery/start_recovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: Phase[2] failed to send/replay operations
    at org.elasticsearch.indices.recovery.RecoverySourceHandler$OperationBatchSender.handleError(RecoverySourceHandler.java:796) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler$OperationBatchSender.handleError(RecoverySourceHandler.java:721) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.handleItems(MultiChunkTransfer.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.access$000(MultiChunkTransfer.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer$1.write(MultiChunkTransfer.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.processList(AsyncIOProcessor.java:108) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.drainAndProcessAndRelease(AsyncIOProcessor.java:96) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.put(AsyncIOProcessor.java:84) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.addItem(MultiChunkTransfer.java:89) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.lambda$handleItems$4(MultiChunkTransfer.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$3.onFailure(ActionListener.java:118) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$4.onFailure(ActionListener.java:173) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$6.onFailure(ActionListener.java:292) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.RetryableAction$RetryingListener.onFinalFailure(RetryableAction.java:174) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.RetryableAction$RetryingListener.onFailure(RetryableAction.java:166) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 6 more
Caused by: org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.132:9300][internal:index/shard/recovery/translog_ops]
Caused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution
    at org.elasticsearch.common.util.concurrent.FutureUtils.rethrowExecutionException(FutureUtils.java:91) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:83) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:111) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.setException(BaseFuture.java:162) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.onFailure(ListenableFuture.java:135) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.lambda$performTranslogOps$3(PeerRecoveryTargetService.java:408) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:328) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoveryTarget.indexTranslogOperations(RecoveryTarget.java:345) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.performTranslogOps(PeerRecoveryTargetService.java:393) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:352) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:339) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 3 more
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: execution_exception: java.io.IOException: Input/output error
    at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.getValue(BaseFuture.java:273) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:246) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:65) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:76) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:111) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.setException(BaseFuture.java:162) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.onFailure(ListenableFuture.java:135) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.lambda$performTranslogOps$3(PeerRecoveryTargetService.java:408) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:328) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoveryTarget.indexTranslogOperations(RecoveryTarget.java:345) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.performTranslogOps(PeerRecoveryTargetService.java:393) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:352) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:339) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 3 more
Caused by: java.io.IOException: Input/output error
    at sun.nio.ch.FileDispatcherImpl.force0(Native Method) ~[?:?]
    at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:82) ~[?:?]
    at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:461) ~[?:?]
    at org.elasticsearch.index.translog.Checkpoint.write(Checkpoint.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.translog.TranslogWriter.writeCheckpoint(TranslogWriter.java:420) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.translog.TranslogWriter.syncUpTo(TranslogWriter.java:377) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.translog.TranslogWriter.sync(TranslogWriter.java:267) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.translog.Translog.trimUnreferencedReaders(Translog.java:1689) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.engine.InternalEngine.revisitIndexDeletionPolicyOnTranslogSynced(InternalEngine.java:594) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.engine.InternalEngine.syncTranslog(InternalEngine.java:545) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.sync(IndexShard.java:3118) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoveryTarget.lambda$indexTranslogOperations$2(RecoveryTarget.java:383) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:325) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoveryTarget.indexTranslogOperations(RecoveryTarget.java:345) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.performTranslogOps(PeerRecoveryTargetService.java:393) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:352) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$TranslogOperationsRequestHandler.messageReceived(PeerRecoveryTargetService.java:339) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 3 more
[2021-04-21T14:01:29,912][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [eventlog-v5][1] marking unavailable shards as stale: [9f9CdlrUSw6lawa8H15WCQ]
[2021-04-21T14:01:30,350][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [faultcurrent-v5][4] marking unavailable shards as stale: [0X8dq0llSTGFwcH5ANuMfQ]
[2021-04-21T14:01:45,798][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr} reason: disconnected], term: 3, version: 130, delta: removed {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}}
[2021-04-21T14:01:46,021][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] removed {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{mei-ZlPiRU2BlpUpQA6Wqw}{10.233.76.132}{10.233.76.132:9300}{dmr}}, term: 3, version: 130, reason: Publication{term=3, version=130}
[2021-04-21T14:01:48,129][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-2] scheduling reroute for delayed shards in [57.6s] (24 delayed shards)
[2021-04-21T14:01:48,210][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [historicalperformance24h-v5][3] marking unavailable shards as stale: [b66gh9eZTGSKvEL_oxDcnw]
[2021-04-21T14:01:48,490][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [historicalperformance24h-v5][0] marking unavailable shards as stale: [w8ccVdnEToa2VhZtuBRgQQ]
[2021-04-21T14:01:49,013][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [historicalperformance24h-v5][1] marking unavailable shards as stale: [D2zJPNg8QKG4bCKacbbYkg]
[2021-04-21T14:01:49,014][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [historicalperformance24h-v5][4] marking unavailable shards as stale: [fnwXJ6_IQ8aCNPm5lZB7Rw]
[2021-04-21T14:01:51,937][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [inventoryequipment-v5][1] marking unavailable shards as stale: [wn1pLAdWTCSUDezVXeLbYw]
[2021-04-21T14:01:52,327][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [guicutthrough-v5][0] marking unavailable shards as stale: [byEZTJ-gRcOkCfh8Z_vksA]
[2021-04-21T14:01:52,327][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [inventoryequipment-v5][4] marking unavailable shards as stale: [m03vju6CSXG0eNM_QOflYw]
[2021-04-21T14:01:52,328][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [inventoryequipment-v5][2] marking unavailable shards as stale: [x-DJ_Ws7SK6brfS6hMCqFw]
[2021-04-21T14:01:54,945][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [guicutthrough-v5][1] marking unavailable shards as stale: [3sJv7hY4R3eZNq6RPRNcvA]
[2021-04-21T14:01:55,523][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [guicutthrough-v5][2] marking unavailable shards as stale: [v8Mys0fLQsy4vYbOI8Q55A]
[2021-04-21T14:02:46,226][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [networkelement-connection-v5][1] marking unavailable shards as stale: [6rUZt9E-Q-6fSX-FpQQkYg]
[2021-04-21T14:02:46,667][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [historicalperformance15min-v5][0] marking unavailable shards as stale: [8IxAYR3kRcS9sDyRzAbodg]
[2021-04-21T14:02:46,669][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [networkelement-connection-v5][2] marking unavailable shards as stale: [3VGoq2IWTcaSTFXyLEDPkA]
[2021-04-21T14:02:46,709][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [networkelement-connection-v5][4] marking unavailable shards as stale: [-8tRQwrjTC66l4GgKcFCuA]
[2021-04-21T14:02:50,310][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [maintenancemode-v5][0] marking unavailable shards as stale: [wAGZssdBT5uT6rmrwCXM1Q]
[2021-04-21T14:02:50,521][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [maintenancemode-v5][1] marking unavailable shards as stale: [FuUa9Q30TzSGGTMVo1ugDw]
[2021-04-21T14:02:50,522][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [historicalperformance15min-v5][1] marking unavailable shards as stale: [zNiCtxIxRfyWseTww7Ij6A]
[2021-04-21T14:02:50,918][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [historicalperformance15min-v5][2] marking unavailable shards as stale: [vkDwG9_pQtq8dBSS2MDoNg]
[2021-04-21T14:02:53,229][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [mediator-server-v5][2] marking unavailable shards as stale: [hy7cRZrkRXqt3hcGbgkC5A]
[2021-04-21T14:02:53,721][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [mediator-server-v5][1] marking unavailable shards as stale: [lEyWkh5rR2GQ-DcEx9c8QA]
[2021-04-21T14:02:53,721][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [maintenancemode-v5][3] marking unavailable shards as stale: [glnvjb0eRV6rWJLxPnTfYg]
[2021-04-21T14:02:53,722][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [maintenancemode-v5][4] marking unavailable shards as stale: [ULKHjETNShymelGvlRMTLA]
[2021-04-21T14:02:56,352][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [faultlog-v5][1] marking unavailable shards as stale: [4pcT1N75T3OZrCRB0OMGnw]
[2021-04-21T14:02:56,502][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [mediator-server-v5][4] marking unavailable shards as stale: [RMW1jm79Q26qs9lH52zc1g]
[2021-04-21T14:02:57,415][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [faultlog-v5][0] marking unavailable shards as stale: [T5ZWfQCjTG2Gsa3S9Kqr1g]
[2021-04-21T14:02:57,416][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [faultlog-v5][2] marking unavailable shards as stale: [zQWFKPOUQw29zmHQS-O1cQ]
[2021-04-21T14:02:59,509][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [connectionlog-v5][3] marking unavailable shards as stale: [Ms3ubSvbQBeA9eqF_iEabg]
[2021-04-21T14:02:59,709][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [connectionlog-v5][0] marking unavailable shards as stale: [i_orSniHQFajaXk70aMSHQ]
[2021-04-21T14:03:00,509][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [connectionlog-v5][1] marking unavailable shards as stale: [HhcBY6DeTdG-_MaiGC3xnA]
[2021-04-21T14:03:00,510][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [connectionlog-v5][4] marking unavailable shards as stale: [HawlZYXQSBG6vU_CMJnpgg]
[2021-04-21T14:03:02,709][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [faultcurrent-v5][1] marking unavailable shards as stale: [6iz2XN8TSXGFceHDmwg31g]
[2021-04-21T14:03:03,411][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [faultcurrent-v5][2] marking unavailable shards as stale: [n-o7iyw5SYij3QfhSHyOlw]
[2021-04-21T14:03:04,112][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [eventlog-v5][0] marking unavailable shards as stale: [hPMokVsdSbebM2aP0b3svw]
[2021-04-21T14:03:06,326][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [eventlog-v5][2] marking unavailable shards as stale: [1ksmPMJpST2TPBZhRB381g]
[2021-04-21T14:03:07,086][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[eventlog-v5][2]]]).
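After the failed recoveries the master marks the unreachable replica copies as stale, reroutes the delayed shards, and the cluster works its way back to GREEN. If a cluster stays YELLOW or RED after such an episode instead, the allocation-explain API names the affected shard and the reason it cannot be assigned; a minimal sketch using standard 7.x endpoints over the same port-forward:

  # Explains an arbitrary unassigned shard (returns an error if none exist).
  curl -s 'localhost:9200/_cluster/allocation/explain?pretty'

  # Per-shard state, including why a shard became unassigned.
  curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason,node'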
[onap-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[eventlog-v5][2]]]). [2021-04-21T14:07:19,286][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} join existing leader], term: 3, version: 188, delta: added {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}} [2021-04-21T14:07:24,583][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}, term: 3, version: 188, reason: Publication{term=3, version=188} [2021-04-21T14:18:59,222][WARN ][o.e.c.InternalClusterInfoService] [onap-sdnrdb-master-2] Failed to update node information for ClusterInfoUpdateJob within 15s timeout [2021-04-21T14:19:14,224][WARN ][o.e.c.InternalClusterInfoService] [onap-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout [2021-04-21T14:19:16,811][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} reason: followers check retry count exceeded], term: 3, version: 251, delta: removed {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}} [2021-04-21T14:19:26,924][INFO ][o.e.c.c.C.CoordinatorPublication] [onap-sdnrdb-master-2] after [10s] publication of cluster state version [251] is still waiting for {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr} [SENT_PUBLISH_REQUEST], {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} [WAITING_FOR_QUORUM], {onap-sdnrdb-coordinating-only-56dfdc4d57-c7d46}{62gydvkgT2yLG6wcHf2lQw}{2QwAz_8KRt2l4Q833BZX0A}{10.233.76.35}{10.233.76.35:9300}{r} [WAITING_FOR_QUORUM] [2021-04-21T14:19:29,409][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] removed {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}, term: 3, version: 251, reason: Publication{term=3, version=251} [2021-04-21T14:19:29,811][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [mediator-server-v5][4] primary-replica resync completed with 0 operations [2021-04-21T14:19:29,815][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [faultcurrent-v5][4] primary-replica resync completed with 0 operations [2021-04-21T14:19:29,835][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [maintenancemode-v5][3] primary-replica resync completed with 0 operations [2021-04-21T14:19:29,924][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [guicutthrough-v5][4] primary-replica resync completed with 0 operations [2021-04-21T14:19:30,026][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [inventoryequipment-v5][3] primary-replica resync completed with 0 operations [2021-04-21T14:19:30,137][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [connectionlog-v5][2] primary-replica resync completed with 0 operations [2021-04-21T14:19:30,156][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-2] scheduling reroute for delayed shards in [46.4s] (36 delayed shards) [2021-04-21T14:19:30,411][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [historicalperformance24h-v5][3] 
primary-replica resync completed with 0 operations [2021-04-21T14:19:30,509][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [networkelement-connection-v5][4] primary-replica resync completed with 0 operations [2021-04-21T14:19:30,620][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [eventlog-v5][1] primary-replica resync completed with 0 operations [2021-04-21T14:19:45,579][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} join existing leader], term: 3, version: 252, delta: added {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}} [2021-04-21T14:19:47,187][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}, term: 3, version: 252, reason: Publication{term=3, version=252} [2021-04-21T14:20:13,986][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[guicutthrough-v5][4]]]). [2021-04-21T14:22:06,015][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr} reason: health check failed], term: 3, version: 310, delta: removed {{onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr}} [2021-04-21T14:22:16,219][INFO ][o.e.c.c.C.CoordinatorPublication] [onap-sdnrdb-master-2] after [9.8s] publication of cluster state version [310] is still waiting for {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} [SENT_APPLY_COMMIT] [2021-04-21T14:22:17,182][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] removed {{onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr}}, term: 3, version: 310, reason: Publication{term=3, version=310} [2021-04-21T14:22:17,276][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [faultcurrent-v5][1] primary-replica resync completed with 0 operations [2021-04-21T14:22:17,335][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [faultcurrent-v5][0] primary-replica resync completed with 0 operations [2021-04-21T14:22:17,410][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [historicalperformance24h-v5][0] primary-replica resync completed with 0 operations [2021-04-21T14:22:17,447][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [faultlog-v5][1] primary-replica resync completed with 0 operations [2021-04-21T14:22:17,523][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [mediator-server-v5][1] primary-replica resync completed with 0 operations [2021-04-21T14:22:17,613][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [mediator-server-v5][0] primary-replica resync completed with 0 operations [2021-04-21T14:22:17,623][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [networkelement-connection-v5][1] primary-replica resync completed with 0 operations [2021-04-21T14:22:17,722][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [networkelement-connection-v5][0] primary-replica resync completed with 0 operations [2021-04-21T14:22:17,737][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [connectionlog-v5][0] primary-replica resync completed with 0 operations 
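[Editor's note] The entries above show onap-sdnrdb-master-1 leaving ("followers check retry count exceeded"), rejoining, and onap-sdnrdb-master-0 then failing its health check: the cluster is flapping rather than stably GREEN. A minimal watcher sketch for catching such flapping from outside the pods; the endpoint http://localhost:9200 (e.g. reached via kubectl port-forward) and the unsecured OSS HTTP API are assumptions, not something the log confirms:

    # watch_cluster.py - poll cluster health and node membership to spot master flapping.
    # Assumes the ES 7.x REST API is reachable at ES below (hypothetical endpoint).
    import time
    import requests

    ES = "http://localhost:9200"

    prev = None
    while True:
        health = requests.get(f"{ES}/_cluster/health", timeout=10).json()
        # _cat/nodes marks the elected master with '*' in the 'master' column
        cat = requests.get(f"{ES}/_cat/nodes?h=name,master,ip", timeout=10).text
        nodes = {line.split()[0] for line in cat.splitlines() if line.strip()}
        if prev is not None and nodes != prev:
            print("membership changed:", ", ".join(sorted(prev ^ nodes)))
        prev = nodes
        print(f"status={health['status']} nodes={health['number_of_nodes']} "
              f"unassigned={health['unassigned_shards']}")
        time.sleep(5)

Repeated membership changes within minutes, as seen here, usually point at the leaving node (network partition, GC pauses, or, as it turns out below, a broken data volume) rather than at the master.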
[2021-04-21T14:22:17,810][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [maintenancemode-v5][0] primary-replica resync completed with 0 operations
[2021-04-21T14:22:17,813][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-2] scheduling reroute for delayed shards in [48.1s] (37 delayed shards)
[2021-04-21T14:22:17,910][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [guicutthrough-v5][1] primary-replica resync completed with 0 operations
[2021-04-21T14:22:18,009][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-2] [inventoryequipment-v5][0] primary-replica resync completed with 0 operations
[2021-04-21T14:23:06,783][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [historicalperformance15min-v5][1] marking unavailable shards as stale: [_-NcOWxxSDC42L21pnu-Kw]
[2021-04-21T14:23:07,235][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [networkelement-connection-v5][0] marking unavailable shards as stale: [Bf1jcTPaR4effDuSXuG_Sg]
[2021-04-21T14:23:07,235][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [networkelement-connection-v5][1] marking unavailable shards as stale: [IenshI5uQKGBn0oyWbwaFg]
[2021-04-21T14:23:07,236][WARN ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] [networkelement-connection-v5][3] marking unavailable shards as stale: [T7yQ9aHzTCqDYmM_3jSJfg]
[2021-04-21T14:23:21,205][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [10815ms] ago, timed out [800ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}], id [18841]
[2021-04-21T14:23:42,414][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} reason: followers check retry count exceeded], term: 3, version: 317, delta: removed {{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}
[2021-04-21T14:23:42,519][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}], current []}, term: 3, version: 316, reason: becoming candidate: Publication.onCompletion(false)
[2021-04-21T14:23:42,522][WARN ][o.e.c.s.MasterService ] [onap-sdnrdb-master-2] failing [node-left[{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} reason: followers check retry count exceeded]]: failed to commit cluster state version [317]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
    at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
    at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 14 more
[2021-04-21T14:23:42,610][ERROR][o.e.c.c.Coordinator ] [onap-sdnrdb-master-2] unexpected failure during [node-left]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
    at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
    at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3]
    ... 14 more
[2021-04-21T14:23:44,910][WARN ][o.e.c.InternalClusterInfoService] [onap-sdnrdb-master-2] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-04-21T14:23:52,516][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [SuJMxetFTPWgE4TiCGoLHQ, v78U9gF1SGuwMyQkm3VBSg, B1knyd-zRNyxN0xYqo4wSw], have discovered [{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr}] which is a quorum; discovery will continue using [10.233.76.35:9300, 10.233.76.163:9300, 10.233.70.176:9300] from hosts providers and [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}, {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}] from last-known cluster state; node term 3, last-accepted version 316 in term 3
[2021-04-21T14:24:02,518][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [SuJMxetFTPWgE4TiCGoLHQ, v78U9gF1SGuwMyQkm3VBSg, B1knyd-zRNyxN0xYqo4wSw], have discovered [{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr}] which is a quorum; discovery will continue using [10.233.76.35:9300, 10.233.76.163:9300, 10.233.70.176:9300] from hosts providers and [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}, {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}] from last-known cluster state; node term 3, last-accepted version 316 in term 3
[2021-04-21T14:24:07,710][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [45502ms] ago, timed out [35494ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-coordinating-only-56dfdc4d57-c7d46}{62gydvkgT2yLG6wcHf2lQw}{2QwAz_8KRt2l4Q833BZX0A}{10.233.76.35}{10.233.76.35:9300}{r}], id [18883]
[2021-04-21T14:24:08,146][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [34894ms] ago, timed out [24820ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-coordinating-only-56dfdc4d57-c7d46}{62gydvkgT2yLG6wcHf2lQw}{2QwAz_8KRt2l4Q833BZX0A}{10.233.76.35}{10.233.76.35:9300}{r}], id [18913]
[2021-04-21T14:24:11,467][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [56716ms] ago, timed out [41698ms] ago, action [cluster:monitor/nodes/stats[n]], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}], id [18858]
[2021-04-21T14:24:12,093][ERROR][o.e.c.a.s.ShardStateAction] [onap-sdnrdb-master-2] [maintenancemode-v5][2] no longer master while failing shard [shard id [[maintenancemode-v5][2]], allocation id [neQ9b5WOR52g1Qs3lJGB4A], primary term [2], message [mark copy as stale], markAsStale [true]]
[2021-04-21T14:24:12,094][ERROR][o.e.c.a.s.ShardStateAction] [onap-sdnrdb-master-2] [historicalperformance15min-v5][4] no longer master while failing shard [shard id [[historicalperformance15min-v5][4]], allocation id [Lfr_w3BPSt2igCz9nmnScA], primary term [2], message [mark copy as stale], markAsStale [true]]
[2021-04-21T14:24:12,519][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [SuJMxetFTPWgE4TiCGoLHQ, v78U9gF1SGuwMyQkm3VBSg, B1knyd-zRNyxN0xYqo4wSw], have discovered [{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr}] which is a quorum; discovery will continue using [10.233.76.35:9300, 10.233.76.163:9300, 10.233.70.176:9300] from hosts providers and [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}, {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}] from last-known cluster state; node term 3, last-accepted version 316 in term 3
[2021-04-21T14:24:12,589][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [51107ms] ago, timed out [41098ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}], id [18882]
[2021-04-21T14:24:12,590][WARN ][o.e.t.TransportService ] [onap-sdnrdb-master-2] Received response for a request that has timed out, sent [40097ms] ago, timed out [30089ms] ago, action [internal:coordination/fault_detection/follower_check], node [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}], id [18912]
[2021-04-21T14:24:13,580][WARN ][o.e.i.s.RetentionLeaseSyncAction] [onap-sdnrdb-master-2] [historicalperformance15min-v5][0] retention lease sync failed
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][indices:admin/seq_no/retention_lease_sync[p]]
Caused by: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/2/no master];
    at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:189) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction.blockExceptions(TransportReplicationAction.java:257) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction.access$100(TransportReplicationAction.java:95) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.runWithPrimaryShardReference(TransportReplicationAction.java:366) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$doRun$0(TransportReplicationAction.java:351) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.lambda$wrapPrimaryOperationPermitListener$24(IndexShard.java:2823) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$3.onResponse(ActionListener.java:113) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:285) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:237) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:2797) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryOperationPermit(TransportReplicationAction.java:909) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:347) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction.handlePrimaryRequest(TransportReplicationAction.java:303) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:794) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.access$100(TransportService.java:76) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:130) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:738) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:652) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendChildRequest(TransportService.java:703) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendChildRequest(TransportService.java:689) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.seqno.RetentionLeaseSyncAction.sync(RetentionLeaseSyncAction.java:111) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.seqno.RetentionLeaseSyncer.sync(RetentionLeaseSyncer.java:49) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.lambda$new$0(IndexShard.java:352) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.seqno.ReplicationTracker.cloneRetentionLease(ReplicationTracker.java:353) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.seqno.ReplicationTracker.cloneLocalPeerRecoveryRetentionLease(ReplicationTracker.java:518) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.cloneLocalPeerRecoveryRetentionLease(IndexShard.java:2686) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$createRetentionLease$29(RecoverySourceHandler.java:598) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$runUnderPrimaryPermit$19(RecoverySourceHandler.java:385) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:108) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:89) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.runUnderPrimaryPermit(RecoverySourceHandler.java:363) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.createRetentionLease(RecoverySourceHandler.java:586) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$phase1$23(RecoverySourceHandler.java:543) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:112) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.ArrayList.forEach(ArrayList.java:1541) [?:?]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:144) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:127) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.StepListener.innerOnResponse(StepListener.java:62) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:40) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.onCompleted(MultiChunkTransfer.java:148) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.handleItems(MultiChunkTransfer.java:118) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.access$000(MultiChunkTransfer.java:59) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer$1.write(MultiChunkTransfer.java:78) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.processList(AsyncIOProcessor.java:108) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.drainAndProcessAndRelease(AsyncIOProcessor.java:96) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.put(AsyncIOProcessor.java:84) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.addItem(MultiChunkTransfer.java:89) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.lambda$handleItems$3(MultiChunkTransfer.java:124) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.RetryableAction$RetryingListener.onResponse(RetryableAction.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:54) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1162) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:213) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-21T14:24:13,580][WARN ][o.e.i.s.RetentionLeaseSyncAction] [onap-sdnrdb-master-2] [maintenancemode-v5][0] retention lease sync failed
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][indices:admin/seq_no/retention_lease_sync[p]]
Caused by: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/2/no master];
    at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:189) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction.blockExceptions(TransportReplicationAction.java:257) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction.access$100(TransportReplicationAction.java:95) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.runWithPrimaryShardReference(TransportReplicationAction.java:366) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$doRun$0(TransportReplicationAction.java:351) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.lambda$wrapPrimaryOperationPermitListener$24(IndexShard.java:2823) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$3.onResponse(ActionListener.java:113) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:285) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:237) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:2797) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryOperationPermit(TransportReplicationAction.java:909) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:347) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.replication.TransportReplicationAction.handlePrimaryRequest(TransportReplicationAction.java:303) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:794) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.access$100(TransportService.java:76) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:130) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:738) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:652) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendChildRequest(TransportService.java:703) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.sendChildRequest(TransportService.java:689) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.seqno.RetentionLeaseSyncAction.sync(RetentionLeaseSyncAction.java:111) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.seqno.RetentionLeaseSyncer.sync(RetentionLeaseSyncer.java:49) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.lambda$new$0(IndexShard.java:352) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.seqno.ReplicationTracker.cloneRetentionLease(ReplicationTracker.java:353) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.seqno.ReplicationTracker.cloneLocalPeerRecoveryRetentionLease(ReplicationTracker.java:518) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.index.shard.IndexShard.cloneLocalPeerRecoveryRetentionLease(IndexShard.java:2686) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$createRetentionLease$29(RecoverySourceHandler.java:598) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$runUnderPrimaryPermit$19(RecoverySourceHandler.java:385) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:108) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:89) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.runUnderPrimaryPermit(RecoverySourceHandler.java:363) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.createRetentionLease(RecoverySourceHandler.java:586) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$phase1$23(RecoverySourceHandler.java:543) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:112) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.ArrayList.forEach(ArrayList.java:1541) [?:?]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:144) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:127) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.StepListener.innerOnResponse(StepListener.java:62) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:40) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.onCompleted(MultiChunkTransfer.java:148) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.handleItems(MultiChunkTransfer.java:118) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.access$000(MultiChunkTransfer.java:59) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer$1.write(MultiChunkTransfer.java:78) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.processList(AsyncIOProcessor.java:108) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.drainAndProcessAndRelease(AsyncIOProcessor.java:96) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.put(AsyncIOProcessor.java:84) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.addItem(MultiChunkTransfer.java:89) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.indices.recovery.MultiChunkTransfer.lambda$handleItems$3(MultiChunkTransfer.java:124) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.support.RetryableAction$RetryingListener.onResponse(RetryableAction.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:54) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1162) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$1.doRun(InboundHandler.java:213) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
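[Editor's note] The two retention-lease failures above are secondary damage: once this node loses the elected master, every replicated write is rejected with the global cluster block [SERVICE_UNAVAILABLE/2/no master]. A sketch to confirm that diagnosis from the REST API; the endpoint is the same hypothetical one as above, and the 'blocks' metric of the cluster-state API with local=true returns the contacted node's own view even while no master is elected:

    # check_blocks.py - show whether this node is operating under a no-master cluster block.
    import requests

    ES = "http://localhost:9200"

    state = requests.get(f"{ES}/_cluster/state/blocks?local=true", timeout=10).json()
    global_blocks = state.get("blocks", {}).get("global", {})
    for block_id, block in global_blocks.items():
        # block id 2 is the discovery "no master" block seen in the log above
        print(f"global block {block_id}: {block.get('description')} "
              f"(levels: {', '.join(block.get('levels', []))})")
    if not global_blocks:
        print("no global cluster blocks - this node sees an elected master")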
[2021-04-21T14:24:13,984][ERROR][o.e.c.a.s.ShardStateAction] [onap-sdnrdb-master-2] [maintenancemode-v5][0] no longer master while failing shard [shard id [[maintenancemode-v5][0]], allocation id [OQ0mI70NSaWSXAPW1E81gg], primary term [0], message [failed recovery], failure [RecoveryFailedException[[maintenancemode-v5][0]: Recovery failed from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} into {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}]; nested: RemoteTransportException[[onap-sdnrdb-master-2][10.233.72.129:9300][internal:index/shard/recovery/start_recovery]]; nested: RemoteTransportException[[onap-sdnrdb-master-2][10.233.72.129:9300][indices:admin/seq_no/retention_lease_sync[p]]]; nested: ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/2/no master];]; ], markAsStale [true]]
[2021-04-21T14:24:14,189][ERROR][o.e.c.a.s.ShardStateAction] [onap-sdnrdb-master-2] [historicalperformance15min-v5][0] no longer master while failing shard [shard id [[historicalperformance15min-v5][0]], allocation id [-26HJJl6QeynEXX6PnyWGg], primary term [0], message [failed recovery], failure [RecoveryFailedException[[historicalperformance15min-v5][0]: Recovery failed from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} into {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}]; nested: RemoteTransportException[[onap-sdnrdb-master-2][10.233.72.129:9300][internal:index/shard/recovery/start_recovery]]; nested: RemoteTransportException[[onap-sdnrdb-master-2][10.233.72.129:9300][indices:admin/seq_no/retention_lease_sync[p]]]; nested: ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/2/no master];]; ], markAsStale [true]]
[2021-04-21T14:24:16,084][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=4, optionalJoin=Optional[Join{term=5, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
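[Editor's note] This trace exposes the root cause of the flapping: onap-sdnrdb-master-1 cannot persist the new election term because its data path (the Longhorn-backed volume mounted at /bitnami/elasticsearch/data) has gone read-only, so every join attempt dies in GatewayMetaState.setCurrentTerm while Lucene tries to obtain write.lock. A trivial write probe, run inside the affected pod, separates a read-only mount from other locking failures; the path is taken from the exception above, everything else is an assumption:

    # write_probe.py - run inside the affected pod to test whether the data path is writable.
    # EROFS here confirms a read-only mount (remount/reattach the volume, then restart the pod);
    # any other errno points at a different problem than the one in this log.
    import errno
    import os
    import tempfile

    DATA_PATH = "/bitnami/elasticsearch/data"

    try:
        fd, probe = tempfile.mkstemp(prefix=".rw-probe-", dir=DATA_PATH)
        os.close(fd)
        os.unlink(probe)
        print(f"{DATA_PATH} is writable - the write.lock failure is not a mount issue")
    except OSError as e:
        if e.errno == errno.EROFS:
            print(f"{DATA_PATH} is mounted read-only (EROFS)")
        else:
            print(f"write failed with {errno.errorcode.get(e.errno, e.errno)}: {e}")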
[2021-04-21T14:24:16,094][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=4, optionalJoin=Optional[Join{term=5, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2021-04-21T14:24:16,218][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=5, optionalJoin=Optional[Join{term=6, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
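Every failure in this series bottoms out in the same root cause: the join target, onap-sdnrdb-master-1, cannot persist the bumped cluster-state term because Lucene cannot create write.lock under /bitnami/elasticsearch/data/nodes/0/_state on a data volume that has gone read-only (note the exception arrives wrapped in a RemoteTransportException from master-1). The probe below is an illustrative sketch, not part of Elasticsearch; the class name is ours and the default path is simply the one from the log. It opens the lock file the same way Lucene's NativeFSLockFactory does (FileChannel.open with CREATE and WRITE), so on a read-only mount it fails with exactly the FileSystemException seen in the trace.

import java.nio.channels.FileChannel;
import java.nio.file.FileSystemException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Illustrative probe (not Elasticsearch code): attempts to open the
// cluster-state write.lock the same way Lucene's NativeFSLockFactory does.
// On a volume remounted read-only this throws
// java.nio.file.FileSystemException: <path>: Read-only file system,
// matching the "Caused by" in the trace above.
public class WriteLockProbe {
    public static void main(String[] args) {
        Path lock = Paths.get(args.length > 0 ? args[0]
                : "/bitnami/elasticsearch/data/nodes/0/_state/write.lock");
        try (FileChannel channel = FileChannel.open(lock,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            System.out.println("lock file is writable: " + lock);
        } catch (FileSystemException e) {
            // e.getReason() is "Read-only file system" when the mount is ro
            System.err.println("cannot open " + e.getFile() + ": " + e.getReason());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Until that mount is writable again, every join attempt fails the same way, which is why the identical trace repeats below with minimumTerm climbing from 5 to 9 while lastAcceptedTerm stays pinned at 3.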
[2021-04-21T14:24:16,223][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=5, optionalJoin=Optional[Join{term=6, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the 14:24:16,218 entry above)
[2021-04-21T14:24:16,673][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=6, optionalJoin=Optional[Join{term=7, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the 14:24:16,218 entry above)
[2021-04-21T14:24:16,678][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=6, optionalJoin=Optional[Join{term=7, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the 14:24:16,218 entry above)
[2021-04-21T14:24:16,764][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=3, optionalJoin=Optional[Join{term=4, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
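The 14:24:16,764 entry is the odd one out: this join is rejected not by the read-only disk but by the candidate-side join bookkeeping on onap-sdnrdb-master-2 itself, which appears to keep at most one pending join per source node and to fail the older one once a newer join from the same node arrives. Below is a minimal sketch of that behaviour, assuming those semantics purely from the message and the JoinHelper$CandidateJoinAccumulator frame; it is our simplification, not Elasticsearch's actual code.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Illustrative simplification (not Elasticsearch's JoinHelper): while a node
// is a candidate it parks each incoming join; a newer join from the same
// source node supersedes the parked one, failing it with the same
// "received a newer join" message seen in the log above.
public class CandidateJoinAccumulatorSketch {
    private final Map<String, CompletableFuture<Void>> pendingJoins = new HashMap<>();

    public synchronized CompletableFuture<Void> handleJoinRequest(String sourceNode) {
        CompletableFuture<Void> joinFuture = new CompletableFuture<>();
        CompletableFuture<Void> superseded = pendingJoins.put(sourceNode, joinFuture);
        if (superseded != null) {
            superseded.completeExceptionally(new IllegalStateException(
                    "received a newer join from {" + sourceNode + "}"));
        }
        return joinFuture;
    }

    public static void main(String[] args) {
        CandidateJoinAccumulatorSketch acc = new CandidateJoinAccumulatorSketch();
        CompletableFuture<Void> first = acc.handleJoinRequest("onap-sdnrdb-master-2"); // e.g. the term-4 self-join
        acc.handleJoinRequest("onap-sdnrdb-master-2"); // a later, higher-term join supersedes it
        first.whenComplete((ok, err) -> System.err.println("first join failed: " + err));
    }
}

Failing the superseded attempt immediately, rather than leaving it parked, is presumably what keeps the rapid term bumps in this log from accumulating stale pending joins.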
[2021-04-21T14:24:17,476][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=8, optionalJoin=Optional[Join{term=9, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the 14:24:16,218 entry above)
[2021-04-21T14:24:17,481][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=8, optionalJoin=Optional[Join{term=9, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the 14:24:16,218 entry above)
[2021-04-21T14:24:18,142][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=9, optionalJoin=Optional[Join{term=10, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the 14:24:16,218 entry above)
[2021-04-21T14:24:18,144][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=9, optionalJoin=Optional[Join{term=10, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
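Every one of these join failures bottoms out in the same frame: NativeFSLockFactory calls FileChannel.open to create nodes/0/_state/write.lock, and the kernel refuses because the volume behind /bitnami/elasticsearch/data is mounted read-only (on ext4 this is commonly the aftermath of an I/O error combined with the errors=remount-ro mount option). A standalone write probe can confirm that the mount, not Elasticsearch, is rejecting writes. The sketch below is a minimal illustration, not an official tool: the class name, the probe file name, and the idea of running it inside the affected container are assumptions; only the data path is taken from the trace above.

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.FileSystemException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Minimal write probe (hypothetical helper): issues the same kind of
// FileChannel.open(CREATE_NEW, WRITE) call that NativeFSLockFactory makes
// in the trace above, but against a throwaway file name.
public class WriteProbe {
    public static void main(String[] args) {
        // Default path is taken from the log; override via args[0] if the layout differs.
        Path probe = Paths.get(args.length > 0 ? args[0]
                : "/bitnami/elasticsearch/data/nodes/0/_state/probe.tmp");
        try (FileChannel ch = FileChannel.open(probe,
                StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(new byte[]{0}));
            Files.delete(probe); // clean up after a successful write
            System.out.println("write OK: volume accepts writes");
        } catch (FileSystemException e) {
            // A "Read-only file system" reason here reproduces the failure in the
            // log with no Elasticsearch involvement; other reasons (e.g. a missing
            // parent directory) point elsewhere.
            System.out.println("write failed: " + e.getFile() + ": " + e.getReason());
        } catch (Exception e) {
            System.out.println("write failed: " + e);
        }
    }
}

If the probe reports the same Read-only file system reason, restarting the Elasticsearch process alone cannot help: the volume has to be remounted read-write (or the pod moved to a healthy volume) before the node can persist cluster state again. Until then, every join attempt keeps failing exactly as in the entries that follow.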
[2021-04-21T14:24:18,835][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} with JoinRequest{minimumTerm=10, optionalJoin=Optional[Join{term=11, lastAcceptedTerm=3, lastAcceptedVersion=316}]}: RemoteTransportException caused by java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system [stack trace identical to the 14:24:18,144 entry above]
[2021-04-21T14:24:18,912][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} with JoinRequest{minimumTerm=10, optionalJoin=Optional[Join{term=11, lastAcceptedTerm=3, lastAcceptedVersion=316}]}: same Read-only file system failure [identical stack trace]
[2021-04-21T14:24:19,298][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} with JoinRequest{minimumTerm=11, optionalJoin=Optional[Join{term=12, lastAcceptedTerm=3, lastAcceptedVersion=316}]}: same Read-only file system failure [identical stack trace]
[2021-04-21T14:24:19,301][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} with JoinRequest{minimumTerm=11, optionalJoin=Optional[Join{term=12, lastAcceptedTerm=3, lastAcceptedVersion=316}]}: same Read-only file system failure [identical stack trace]
[2021-04-21T14:24:19,324][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=7, optionalJoin=Optional[Join{term=8, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-04-21T14:24:19,929][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=13, optionalJoin=Optional[Join{term=14, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] 
at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?] at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.index.IndexWriter. 
(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] 
[2021-04-21T14:24:19,932][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=13, optionalJoin=Optional[Join{term=14, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] 
at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?] at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.index.IndexWriter. 
(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] 
[2021-04-21T14:24:21,190][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=14, optionalJoin=Optional[Join{term=15, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] 
at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?] at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.index.IndexWriter. 
<init>(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2021-04-21T14:24:21,193][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=14, optionalJoin=Optional[Join{term=15, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system (stack trace identical to the first entry above; omitted)
[2021-04-21T14:24:22,000][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=15, optionalJoin=Optional[Join{term=16, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system (stack trace identical to the first entry above; omitted)
[2021-04-21T14:24:22,003][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=15, optionalJoin=Optional[Join{term=16, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system (stack trace identical to the first entry above; omitted)
[2021-04-21T14:24:22,231][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=12, optionalJoin=Optional[Join{term=13, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2021-04-21T14:24:22,522][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] last failed join attempt was 211ms ago, failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=12, optionalJoin=Optional[Join{term=13, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} (same CoordinationStateRejectedException stack trace as the previous entry; omitted)
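[Editor's note] The two rejections above appear to be follow-on churn rather than a new failure mode: each aborted election pushes the node into a higher term (the term= values climb from 13 to 19 across these entries), so a self-join still pending for an older term is answered with "received a newer join". A rough way to quantify the churn from a saved copy of this log (the file name is illustrative):

# Count join attempts per election term
grep -o 'optionalJoin=Optional\[Join{term=[0-9]*' sdnrdb-master-2.log | sort | uniq -c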
[2021-04-21T14:24:22,525][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [SuJMxetFTPWgE4TiCGoLHQ, v78U9gF1SGuwMyQkm3VBSg, B1knyd-zRNyxN0xYqo4wSw], have discovered [{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr}, {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}] which is a quorum; discovery will continue using [10.233.76.35:9300, 10.233.76.163:9300, 10.233.70.176:9300] from hosts providers and [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}, {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}] from last-known cluster state; node term 17, last-accepted version 316 in term 3
[2021-04-21T14:24:23,277][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=17, optionalJoin=Optional[Join{term=18, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system (stack trace identical to the first entry above; omitted)
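[Editor's note] The warning above shows that discovery itself is working: all three master-eligible nodes (onap-sdnrdb-master-0/1/2) have been found, which satisfies the required 2-of-3 quorum, yet no master can be elected because the candidates cannot persist a new term to disk. While the cluster is in this state its HTTP API reflects the missing master; a quick check from inside any of the pods (a sketch, assuming port 9200 is reachable on localhost):

# Shows the elected master, if any; with none this typically times out with a 503
curl -s 'http://localhost:9200/_cat/master?v'

# Cluster health; this typically also answers 503 master_not_discovered_exception here
curl -s 'http://localhost:9200/_cluster/health?pretty'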
[2021-04-21T14:24:23,280][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=17, optionalJoin=Optional[Join{term=18, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system (stack trace identical to the first entry above; omitted)
[2021-04-21T14:24:24,179][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=18, optionalJoin=Optional[Join{term=19, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
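The decisive line in this first trace is the innermost cause: Lucene cannot take nodes/0/_state/write.lock because /bitnami/elasticsearch/data, the node's persistent volume, is mounted read-only. Linux typically remounts ext4 read-only after an I/O error on the underlying device, and since every join attempt must persist the new term via setCurrentTerm, every attempt dies on the same write. A quick check from inside the pod can confirm this; the pod and namespace names below are assumptions inferred from the node name in the log, so adjust them to your deployment:

  # Inspect the mount flags of the data path ("ro" confirms the read-only remount).
  kubectl -n onap exec onap-sdnrdb-master-2 -- grep /bitnami/elasticsearch/data /proc/mounts

  # A write probe fails with "Read-only file system" while the remount is in effect.
  kubectl -n onap exec onap-sdnrdb-master-2 -- touch /bitnami/elasticsearch/data/.rw-probe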
[2021-04-21T14:24:24,184][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} (minimumTerm=18, term=19) [identical "Read-only file system" stack trace omitted]
[2021-04-21T14:24:25,209][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=16, optionalJoin=Optional[Join{term=17, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
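This second failure mode is a knock-on effect rather than an independent problem: because the node can never persist a term, it keeps starting fresh elections with ever-higher terms (17, 19, 21, ... across the entries that follow), and each older in-flight join request is rejected as stale with "received a newer join". The fix therefore belongs at the storage layer, not in cluster coordination. A plausible first sweep over the volume, sketched with placeholder resource names since the actual PV name depends on the deployment:

  # Find the PersistentVolume bound to the node's data claim and inspect its state.
  kubectl get pv | grep sdnrdb
  kubectl describe pv <pv-name>

  # Recent volume/mount events often show the original I/O error behind the remount.
  kubectl -n onap get events --sort-by=.lastTimestamp | grep -i -e volume -e mount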
[2021-04-21T14:24:25,512][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} (minimumTerm=20, term=21) [identical "Read-only file system" stack trace omitted]
[2021-04-21T14:24:25,517][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} (minimumTerm=20, term=21) [identical "Read-only file system" stack trace omitted]
[2021-04-21T14:24:26,943][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2} (minimumTerm=19, term=20) [identical "received a newer join" stack trace omitted]
[2021-04-21T14:24:26,976][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} (minimumTerm=22, term=23) [identical "Read-only file system" stack trace omitted]
[2021-04-21T14:24:26,979][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} (minimumTerm=22, term=23) [identical "Read-only file system" stack trace omitted]
[2021-04-21T14:24:28,090][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2} (minimumTerm=21, term=22) [identical "received a newer join" stack trace omitted]
[2021-04-21T14:24:28,473][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=24, optionalJoin=Optional[Join{term=25, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] 
at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?] at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.index.IndexWriter. 
(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] 
[2021-04-21T14:24:28,478][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=24, optionalJoin=Optional[Join{term=25, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the one above omitted)
[2021-04-21T14:24:29,680][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=23, optionalJoin=Optional[Join{term=24, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
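The self-join rejections in this storm (the 14:24:29,680 entry above and the 14:24:32,167 entry below) are a different, more benign symptom: because the node keeps starting new elections, a join solicited for an older term can still be in flight when a join for a higher term has already been seen, and the stale request is rejected. The sketch below is purely illustrative, a toy model of that bookkeeping and not the actual JoinHelper$CandidateJoinAccumulator source, but it reproduces the shape of the message:

// Toy model only: why a join for an older term is answered with
// "received a newer join from ..." once a higher-term join has been seen.
final class JoinAccumulator {
    private long highestTermSeen = -1;

    synchronized void handleJoin(long term, String sourceNode) {
        if (term < highestTermSeen) {
            // Analogous to the CoordinationStateRejectedException in the log.
            throw new IllegalStateException("received a newer join from " + sourceNode);
        }
        highestTermSeen = term;
        System.out.println("accumulated join from " + sourceNode + " for term " + term);
    }
}

class JoinAccumulatorDemo {
    public static void main(String[] args) {
        JoinAccumulator acc = new JoinAccumulator();
        acc.handleJoin(26, "onap-sdnrdb-master-2"); // newer election already started
        acc.handleJoin(24, "onap-sdnrdb-master-2"); // stale join -> rejected
    }
}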
[2021-04-21T14:24:30,440][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=26, optionalJoin=Optional[Join{term=27, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the one above omitted)
[2021-04-21T14:24:30,443][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=26, optionalJoin=Optional[Join{term=27, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the one above omitted)
[2021-04-21T14:24:32,167][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=25, optionalJoin=Optional[Join{term=26, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}
    ... (stack trace identical to the 14:24:29,680 entry above omitted)
[2021-04-21T14:24:32,408][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=28, optionalJoin=Optional[Join{term=29, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the one above omitted)
[2021-04-21T14:24:32,411][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=28, optionalJoin=Optional[Join{term=29, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the one above omitted)
[2021-04-21T14:24:32,526][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] last failed join attempt was 114ms ago, failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=28, optionalJoin=Optional[Join{term=29, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    ... (stack trace identical to the one above omitted)
[2021-04-21T14:24:32,530][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-2] master not discovered or elected yet, an election requires at least 2 nodes with ids from [SuJMxetFTPWgE4TiCGoLHQ, v78U9gF1SGuwMyQkm3VBSg, B1knyd-zRNyxN0xYqo4wSw], have discovered [{onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, {onap-sdnrdb-master-0}{SuJMxetFTPWgE4TiCGoLHQ}{2VLHryX5TxeHVE8PLf15WA}{10.233.70.176}{10.233.70.176:9300}{dmr}, {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}] which is a quorum; discovery will continue using [10.233.76.35:9300, 10.233.76.163:9300, 10.233.70.176:9300] from hosts providers and [{onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}, {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}] from last-known cluster state; node term 29, last-accepted version 316 in term 3
[2021-04-21T14:24:34,579][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=29, optionalJoin=Optional[Join{term=30, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
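The WARN above is misleading at first glance: all three master-eligible node IDs have been discovered and do form a quorum. The election fails anyway because of the nested cause in the trace that follows it: onap-sdnrdb-master-1 (10.233.76.163) cannot persist the incremented election term. Its data path /bitnami/elasticsearch/data sits on a volume that has been remounted read-only, so Lucene cannot take nodes/0/_state/write.lock and master-1 rejects every incoming join. The whole stack trace is remote (master-1's side of internal:cluster/coordination/join); onap-sdnrdb-master-2's own data path mounted fine earlier in the log. A quick check from the Kubernetes side is sketched below; the namespace and pod names are assumptions based on this being an ONAP deployment with Longhorn-backed volumes, not something the log itself states.

    # Assumed names: namespace "onap", pod "onap-sdnrdb-master-1". Adjust to the actual deployment.
    # 1) Inspect the mount flags of the data volume inside the affected pod; "ro" confirms read-only:
    kubectl -n onap exec onap-sdnrdb-master-1 -- grep bitnami /proc/mounts
    # 2) Cross-check with a write probe; it fails with "Read-only file system" if the kernel remounted the volume:
    kubectl -n onap exec onap-sdnrdb-master-1 -- touch /bitnami/elasticsearch/data/.rw-probe

ext4 (the filesystem type reported earlier in this log) is typically flipped to read-only by the kernel after an I/O error on the underlying block device, so the node's dmesg output is the next place to look.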
[2021-04-21T14:24:34,581][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} (minimumTerm=29, term=30) - duplicate of the entry above; identical read-only-file-system stack trace omitted
[2021-04-21T14:24:35,001][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=27, optionalJoin=Optional[Join{term=28, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
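This entry is a different and mostly harmless failure mode: the join targets the node itself (sourceNode and targetNode are both onap-sdnrdb-master-2) and is rejected with "received a newer join" because a later election term had already superseded it (minimumTerm=27 here versus 29 in the entries above; stale joins sit queued while elections restart every few seconds). The steadily climbing terms, 27 through 35 across this excerpt, are the coordinator retrying; these self-join rejections are noise, and the read-only-file-system errors from master-1 remain the actual blocker. While no master is elected the REST layer reports this directly; assuming the standard HTTP port 9200 is reachable from inside the cluster network:

    # Returns HTTP 503 with "master_not_discovered_exception" until a master is elected:
    curl -s http://10.233.72.129:9200/_cluster/health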
[2021-04-21T14:24:36,995][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} (minimumTerm=31, term=32) - same read-only-file-system failure; identical stack trace omitted
[2021-04-21T14:24:36,997][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1} (minimumTerm=31, term=32) - same read-only-file-system failure; identical stack trace omitted
[2021-04-21T14:24:37,506][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2} (self, minimumTerm=30, term=31) - CoordinationStateRejectedException: received a newer join; identical to the self-join rejection above, trace omitted
[2021-04-21T14:24:37,768][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=33, optionalJoin=Optional[Join{term=34, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] 
at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?] at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.index.IndexWriter. 
(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] 
[2021-04-21T14:24:37,771][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=33, optionalJoin=Optional[Join{term=34, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] 
at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?] at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.index.IndexWriter. 
(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] 
[2021-04-21T14:24:39,038][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=34, optionalJoin=Optional[Join{term=35, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] 
at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?] at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?] at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.apache.lucene.index.IndexWriter. 
(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36] at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] 
[2021-04-21T14:24:39,041][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=34, optionalJoin=Optional[Join{term=35, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:59) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:571) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
    at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera - 2020-08-26 10:53:36]
    at org.elasticsearch.gateway.PersistedClusterStateService.createIndexWriter(PersistedClusterStateService.java:204) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.PersistedClusterStateService.createWriter(PersistedClusterStateService.java:180) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.getWriterSafe(GatewayMetaState.java:562) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setCurrentTerm(GatewayMetaState.java:518) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:198) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:458) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.ensureTermAtLeast(Coordinator.java:450) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:999) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) ~[?:?]
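Editor's note: every one of these join failures bottoms out in the same root cause. Persisting the new coordination term (GatewayMetaState.setCurrentTerm) opens a Lucene IndexWriter over the on-disk cluster state, which has to take nodes/0/_state/write.lock, and per the trace that lock acquisition goes through FileChannel.open; with /bitnami/elasticsearch/data remounted read-only, the open fails with EROFS. The following is a minimal Java sketch of that failure mode, not code from Elasticsearch itself; the path is taken from the log and any file on a read-only mount would behave the same.

import java.nio.channels.FileChannel;
import java.nio.file.FileSystemException;
import java.nio.file.Path;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.WRITE;

// Opening a file for write on a read-only mount throws the same
// java.nio.file.FileSystemException ("... Read-only file system")
// seen at the bottom of the trace above.
public class WriteLockProbe {
    public static void main(String[] args) {
        Path lock = Path.of("/bitnami/elasticsearch/data/nodes/0/_state/write.lock");
        try (FileChannel ch = FileChannel.open(lock, CREATE, WRITE)) {
            System.out.println("mount is writable, opened " + lock);
        } catch (FileSystemException e) {
            // Matches the log: ".../write.lock: Read-only file system"
            System.err.println("EROFS detected: " + e.getMessage());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}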
[2021-04-21T14:24:39,552][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=35, optionalJoin=Optional[Join{term=36, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    [stack trace identical to the 14:24:39,041 entry above]
[2021-04-21T14:24:39,555][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=35, optionalJoin=Optional[Join{term=36, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    [stack trace identical to the 14:24:39,041 entry above]
[2021-04-21T14:24:39,904][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=32, optionalJoin=Optional[Join{term=33, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.72.129:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
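Editor's note: the 14:24:39,904 entry is different from its neighbors. Here source and target are both onap-sdnrdb-master-2, i.e. the node tried to vote for itself, and the accumulated join was discarded because a join with a newer term had already arrived; that is ordinary election churn while no master can be established. While the cluster sits in this state, the REST layer is a quick way to confirm there is no elected master. The following is a hedged diagnostic sketch using JDK 11's java.net.http client, not part of the log; host and port 9200 (the default HTTP port inside the pod) are assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// With no elected master, requests such as _cluster/health typically come
// back 503 with a master_not_discovered_exception body on ES 7.x.
public class MasterProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (String path : new String[] {"/_cluster/health", "/_cat/master?v"}) {
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9200" + path))
                    .GET()
                    .build();
            HttpResponse<String> resp =
                    client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(path + " -> HTTP " + resp.statusCode());
            System.out.println(resp.body());
        }
    }
}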
[2021-04-21T14:24:40,241][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=37, optionalJoin=Optional[Join{term=38, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    [stack trace identical to the 14:24:39,041 entry above]
[2021-04-21T14:24:40,243][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=37, optionalJoin=Optional[Join{term=38, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    [stack trace identical to the 14:24:39,041 entry above]
[2021-04-21T14:24:42,170][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, minimumTerm=38, optionalJoin=Optional[Join{term=39, lastAcceptedTerm=3, lastAcceptedVersion=316, sourceNode={onap-sdnrdb-master-2}{v78U9gF1SGuwMyQkm3VBSg}{6oqc66MjRxm_NnT16Vu_Cw}{10.233.72.129}{10.233.72.129:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{B1knyd-zRNyxN0xYqo4wSw}{Fm0Nz1IkQGWhuKo2-KIOog}{10.233.76.163}{10.233.76.163:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.76.163:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.ElasticsearchException: java.nio.file.FileSystemException: /bitnami/elasticsearch/data/nodes/0/_state/write.lock: Read-only file system
    [remaining frames identical to the entries above]
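Editor's note: every trace in this section bottoms out in the same EROFS on the node's data path, so nothing at the Elasticsearch layer can recover until the volume is writable again; the join terms simply keep climbing (34 → 39). A read-only remount can be detected without attempting a write via java.nio's FileStore. This is a hedged sketch, not part of the log; the data path is taken from the log and should be adjusted if path.data is overridden.

import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;

// FileStore.isReadOnly() reports the mount's read-only flag directly,
// so it flags the condition flooding this log without triggering EROFS.
public class DataPathCheck {
    public static void main(String[] args) throws Exception {
        Path data = Path.of("/bitnami/elasticsearch/data");
        FileStore store = Files.getFileStore(data);
        System.out.printf("%s on %s read-only=%b usable=%d MiB%n",
                data, store.name(), store.isReadOnly(),
                store.getUsableSpace() / (1024 * 1024));
    }
}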