22:54:20.38
22:54:20.46 Welcome to the Bitnami elasticsearch container
22:54:20.47 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
22:54:20.47 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
22:54:20.48
22:54:20.56 INFO  ==> ** Starting Elasticsearch setup **
22:54:20.87 INFO  ==> Configuring/Initializing Elasticsearch...
22:54:21.26 INFO  ==> Setting default configuration
22:54:21.37 INFO  ==> Configuring Elasticsearch cluster settings...
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
22:54:41.46 INFO  ==> ** Elasticsearch setup finished! **
22:54:41.66 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2022-11-06T22:55:21,062][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/5.4.0-96-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2022-11-06T22:55:21,066][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] JVM home [/opt/bitnami/java]
[2022-11-06T22:55:21,066][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-2521184415102390258, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2022-11-06T22:55:38,866][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [aggs-matrix-stats]
[2022-11-06T22:55:38,866][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [analysis-common]
[2022-11-06T22:55:38,867][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [geo]
[2022-11-06T22:55:38,867][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [ingest-common]
[2022-11-06T22:55:38,868][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [ingest-geoip]
[2022-11-06T22:55:38,868][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [ingest-user-agent]
[2022-11-06T22:55:38,869][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [kibana]
[2022-11-06T22:55:38,869][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [lang-expression]
[2022-11-06T22:55:38,869][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [lang-mustache]
[2022-11-06T22:55:38,870][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [lang-painless]
[2022-11-06T22:55:38,870][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [mapper-extras]
[2022-11-06T22:55:38,871][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [parent-join]
[2022-11-06T22:55:38,871][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [percolator]
[2022-11-06T22:55:38,872][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [rank-eval]
[2022-11-06T22:55:38,872][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [reindex]
[2022-11-06T22:55:38,961][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [repository-url]
[2022-11-06T22:55:38,961][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [tasks]
[2022-11-06T22:55:38,962][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [transport-netty4]
[2022-11-06T22:55:38,964][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded plugin [repository-s3]
[2022-11-06T22:55:39,663][INFO ][o.e.e.NodeEnvironment ] [onap-sdnrdb-master-0] using [1] data paths, mounts [[/bitnami/elasticsearch/data (192.168.13.252:/dockerdata-nfs/onap/elastic-master-2)]], net usable_space [95.3gb], net total_space [99.9gb], types [nfs4]
[2022-11-06T22:55:39,664][INFO ][o.e.e.NodeEnvironment ] [onap-sdnrdb-master-0] heap size [123.7mb], compressed ordinary object pointers [true]
[2022-11-06T22:55:40,366][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] node name [onap-sdnrdb-master-0], node ID [puZNMpXyRLegCRgVCl88YQ], cluster name [sdnrdb-cluster]
[2022-11-06T22:56:30,770][INFO ][o.e.t.NettyAllocator ] [onap-sdnrdb-master-0] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2022-11-06T22:56:31,675][INFO ][o.e.d.DiscoveryModule ] [onap-sdnrdb-master-0] using discovery type [zen] and seed hosts providers [settings]
[2022-11-06T22:56:35,479][WARN ][o.e.g.DanglingIndicesState] [onap-sdnrdb-master-0] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2022-11-06T22:56:38,072][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] initialized
[2022-11-06T22:56:38,073][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] starting ...
[2022-11-06T22:56:39,764][INFO ][o.e.t.TransportService ] [onap-sdnrdb-master-0] publish_address {10.233.69.148:9300}, bound_addresses {0.0.0.0:9300}
[2022-11-06T22:56:41,167][WARN ][o.e.t.TcpTransport ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.69.148:9300, remoteAddress=/127.0.0.6:46973}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
[2022-11-06T22:56:42,361][WARN ][o.e.t.TcpTransport ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.69.148:9300, remoteAddress=/127.0.0.6:38069}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
[2022-11-06T22:56:43,069][INFO ][o.e.b.BootstrapChecks ] [onap-sdnrdb-master-0] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-11-06T22:56:53,268][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [onap-sdnrdb-master-0, onap-sdnrdb-master-1, onap-sdnrdb-master-2] to bootstrap a cluster: have discovered [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]; discovery will continue using [10.233.66.236:9300, 10.233.66.15:9300, 10.233.71.126:9300] from hosts providers and [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2022-11-06T22:57:03,271][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [onap-sdnrdb-master-0, onap-sdnrdb-master-1, onap-sdnrdb-master-2] to bootstrap a cluster: have discovered [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]; discovery will continue using [10.233.66.236:9300, 10.233.66.15:9300, 10.233.71.126:9300] from hosts providers and [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2022-11-06T22:57:13,274][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [onap-sdnrdb-master-0, onap-sdnrdb-master-1, onap-sdnrdb-master-2] to bootstrap a cluster: have discovered [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]; discovery will continue using [10.233.66.236:9300, 10.233.66.15:9300, 10.233.71.126:9300] from hosts providers and [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2022-11-06T22:57:23,277][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [onap-sdnrdb-master-0, onap-sdnrdb-master-1, onap-sdnrdb-master-2] to bootstrap a cluster: have discovered [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]; discovery will continue using [10.233.66.236:9300, 10.233.66.15:9300, 10.233.71.126:9300] from hosts providers and [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2022-11-06T22:57:33,281][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [onap-sdnrdb-master-0, onap-sdnrdb-master-1, onap-sdnrdb-master-2] to bootstrap a cluster: have discovered [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]; discovery will continue using [10.233.66.236:9300, 10.233.66.15:9300, 10.233.71.126:9300] from hosts providers and [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2022-11-06T22:57:40,452][INFO ][o.e.c.c.Coordinator ] [onap-sdnrdb-master-0] setting initial configuration to VotingConfiguration{{bootstrap-placeholder}-onap-sdnrdb-master-2,puZNMpXyRLegCRgVCl88YQ,5IjPjXA-SU2jihsiJcxfHg}
[2022-11-06T22:57:41,575][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr} elect leader, {onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr}}
[2022-11-06T22:57:42,969][INFO ][o.e.c.c.CoordinationState] [onap-sdnrdb-master-0] cluster UUID set to [npzf0GFXQUq1miVs_oridA]
[2022-11-06T22:57:43,568][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] master node changed {previous [], current [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr}}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2022-11-06T22:57:43,962][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r} join existing leader, {onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} join existing leader, {onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr} join existing leader], term: 1, version: 2, delta: added {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r},{onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr}}
[2022-11-06T22:57:43,964][INFO ][o.e.h.AbstractHttpServerTransport] [onap-sdnrdb-master-0] publish_address {10.233.69.148:9200}, bound_addresses {0.0.0.0:9200}
[2022-11-06T22:57:43,965][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] started
[2022-11-06T22:57:44,894][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r},{onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr}}, term: 1, version: 2, reason: Publication{term=1, version=2}
[2022-11-06T22:57:46,268][INFO ][o.e.g.GatewayService ] [onap-sdnrdb-master-0] recovered [0] indices into cluster_state
[2022-11-06T22:57:58,701][INFO ][o.e.c.s.ClusterSettings ] [onap-sdnrdb-master-0] updating [action.auto_create_index] from [true] to [false]
[2022-11-06T22:58:00,965][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [mediator-server-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:11,363][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [connectionlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:17,061][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [faultcurrent-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:22,566][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [inventoryequipment-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:28,465][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [maintenancemode-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:33,566][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [userdata-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:40,566][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [historicalperformance24h-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:44,464][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [historicalperformance15min-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:49,573][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [eventlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:54,563][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [cmlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:58:58,671][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [guicutthrough-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:59:02,964][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [networkelement-connection-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:59:06,562][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [faultlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2022-11-06T22:59:12,223][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultlog-v7][2], [faultlog-v7][4]]]).
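At this point the log shows the node serving HTTP on 10.233.69.148:9200, the SDNR indices created with shards [5]/[1], and cluster health back at GREEN. The snippet below is not part of the log; it is a minimal Python sketch of how that state could be re-checked from inside the cluster network, assuming the publish_address above is reachable and that this OSS build has no authentication in front of port 9200.

# Minimal sketch (assumption: http://10.233.69.148:9200 is reachable and unsecured).
# Polls the standard Elasticsearch /_cluster/health endpoint until the cluster reports "green".
import json
import time
import urllib.request

ES_URL = "http://10.233.69.148:9200"  # publish_address taken from the log above

def cluster_health() -> dict:
    with urllib.request.urlopen(f"{ES_URL}/_cluster/health") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    for _ in range(30):
        health = cluster_health()
        print(health["status"], "nodes:", health["number_of_nodes"])
        if health["status"] == "green":
            break
        time.sleep(10)

Run right after the index-creation entries above, this would be expected to report "green" with four nodes (the three masters plus the coordinating-only pod), matching the cluster state the log describes.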
[2022-11-06T23:57:39,864][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} reason: disconnected], term: 1, version: 88, delta: removed {{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr}}
[2022-11-06T23:57:40,166][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultlog-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,261][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultlog-v7][0]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,264][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [networkelement-connection-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,265][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [guicutthrough-v7][2]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,267][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [cmlog-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,361][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [eventlog-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,364][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [eventlog-v7][4]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) [elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,369][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [historicalperformance15min-v7][4]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,367][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [historicalperformance15min-v7][2]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,464][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [historicalperformance24h-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,466][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [userdata-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,561][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [userdata-v7][4]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,565][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [userdata-v7][0]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,761][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [inventoryequipment-v7][2]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,764][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [inventoryequipment-v7][0]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,861][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [inventoryequipment-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,865][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultcurrent-v7][3]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,865][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultcurrent-v7][4]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,868][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultcurrent-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,965][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultcurrent-v7][0]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,968][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [connectionlog-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:40,967][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [connectionlog-v7][2]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:41,061][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [connectionlog-v7][4]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected
[2022-11-06T23:57:41,061][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [mediator-server-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA]
org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected [2022-11-06T23:57:41,063][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [mediator-server-v7][2]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA] org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected [2022-11-06T23:57:41,065][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [mediator-server-v7][0]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA] org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$6.handleException(TransportService.java:638) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/nodes/indices/shard/store[n]] disconnected [2022-11-06T23:57:41,361][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] master node changed {previous [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}], current []}, term: 1, version: 87, reason: becoming candidate: Publication.onCompletion(false) [2022-11-06T23:57:41,363][WARN ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] failing [node-left[{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} reason: disconnected]]: failed to commit cluster state version [88] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.access$500(Publication.java:42) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication$PublicationTarget$PublishResponseHandler.onFailure(Publication.java:368) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$5.onFailure(Coordinator.java:1151) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.PublicationTransportHandler$PublicationContext.lambda$sendClusterState$2(PublicationTransportHandler.java:412) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.PublicationTransportHandler$PublicationContext$3.handleException(PublicationTransportHandler.java:430) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 11 more [2022-11-06T23:57:41,366][ERROR][o.e.c.c.Coordinator ] [onap-sdnrdb-master-0] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.access$500(Publication.java:42) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication$PublicationTarget$PublishResponseHandler.onFailure(Publication.java:368) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$5.onFailure(Coordinator.java:1151) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.PublicationTransportHandler$PublicationContext.lambda$sendClusterState$2(PublicationTransportHandler.java:412) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.PublicationTransportHandler$PublicationContext$3.handleException(PublicationTransportHandler.java:430) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1172) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$9.run(TransportService.java:1034) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
11 more [2022-11-06T23:57:42,086][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr} elect leader, {onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 89, delta: master node changed {previous [], current [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr}} [2022-11-06T23:57:43,499][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] master node changed {previous [], current [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]}, term: 2, version: 89, reason: Publication{term=2, version=89} [2022-11-06T23:57:44,363][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [eventlog-v7][3] primary-replica resync completed with 0 operations [2022-11-06T23:57:44,366][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [eventlog-v7][0] primary-replica resync completed with 0 operations [2022-11-06T23:57:44,462][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [connectionlog-v7][1] primary-replica resync completed with 0 operations [2022-11-06T23:57:44,463][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [connectionlog-v7][4] primary-replica resync completed with 0 operations [2022-11-06T23:57:44,561][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [networkelement-connection-v7][3] primary-replica resync completed with 0 operations [2022-11-06T23:57:44,666][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [networkelement-connection-v7][0] primary-replica resync completed with 0 operations [2022-11-06T23:57:44,762][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [faultcurrent-v7][3] primary-replica resync completed with 0 operations [2022-11-06T23:57:44,861][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [faultcurrent-v7][0] primary-replica resync completed with 0 operations [2022-11-06T23:57:44,965][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [historicalperformance15min-v7][1] primary-replica resync completed with 0 operations [2022-11-06T23:57:45,166][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [historicalperformance15min-v7][4] primary-replica resync completed with 0 operations [2022-11-06T23:57:45,365][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [guicutthrough-v7][1] primary-replica resync completed with 0 operations [2022-11-06T23:57:45,463][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [guicutthrough-v7][4] primary-replica resync completed with 0 operations [2022-11-06T23:57:45,561][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [userdata-v7][3] primary-replica resync completed with 0 operations [2022-11-06T23:57:45,667][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [userdata-v7][0] primary-replica resync completed with 0 operations [2022-11-06T23:57:45,762][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-0] scheduling reroute for delayed shards in [53.8s] (43 delayed shards) [2022-11-06T23:57:45,862][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] [maintenancemode-v7][1] primary-replica resync completed with 0 operations [2022-11-06T23:57:45,961][INFO ][o.e.i.s.IndexShard ] [onap-sdnrdb-master-0] 
[maintenancemode-v7][4] primary-replica resync completed with 0 operations [2022-11-06T23:57:46,064][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r} reason: disconnected], term: 2, version: 90, delta: removed {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r}} [2022-11-06T23:57:46,462][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r}}, term: 2, version: 90, reason: Publication{term=2, version=90} [2022-11-06T23:57:49,963][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r} join existing leader], term: 2, version: 95, delta: added {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r}} [2022-11-06T23:57:50,404][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r}}, term: 2, version: 95, reason: Publication{term=2, version=95} [2022-11-06T23:58:20,462][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[mediator-server-v7][0]]]). [2022-11-07T00:57:41,772][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultlog-v7][2]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA] org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:603) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:187) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:81) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:51) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:155) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:83) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.indices.store.TransportNodesListShardStoreMetadata.list(TransportNodesListShardStoreMetadata.java:95) ~[elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.gateway.AsyncShardFetch.asyncFetch(AsyncShardFetch.java:294) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.AsyncShardFetch.fetchData(AsyncShardFetch.java:130) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayAllocator$InternalReplicaShardAllocator.fetchData(GatewayAllocator.java:269) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.ReplicaShardAllocator.makeAllocationDecision(ReplicaShardAllocator.java:163) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.BaseGatewayShardAllocator.allocateUnassigned(BaseGatewayShardAllocator.java:57) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayAllocator.innerAllocatedUnassigned(GatewayAllocator.java:156) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayAllocator.allocateUnassigned(GatewayAllocator.java:143) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.allocateExistingUnassignedShards(AllocationService.java:456) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:428) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:396) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.disassociateDeadNodes(AllocationService.java:258) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.NodeRemovalClusterStateTaskExecutor.getTaskClusterTasksResult(NodeRemovalClusterStateTaskExecutor.java:97) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.NodeRemovalClusterStateTaskExecutor.execute(NodeRemovalClusterStateTaskExecutor.java:90) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
Caused by: org.elasticsearch.transport.NodeNotConnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300] Node not connected at org.elasticsearch.transport.ClusterConnectionManager.getConnection(ClusterConnectionManager.java:189) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:673) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:600) ~[elasticsearch-7.9.3.jar:7.9.3] ... 33 more [2022-11-07T00:57:41,868][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} reason: disconnected], term: 2, version: 163, delta: removed {{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr}} [2022-11-07T00:57:41,863][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultlog-v7][0]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA] org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:603) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:187) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:81) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:51) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:155) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:83) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.indices.store.TransportNodesListShardStoreMetadata.list(TransportNodesListShardStoreMetadata.java:95) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.AsyncShardFetch.asyncFetch(AsyncShardFetch.java:294) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.AsyncShardFetch.fetchData(AsyncShardFetch.java:130) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayAllocator$InternalReplicaShardAllocator.fetchData(GatewayAllocator.java:269) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.ReplicaShardAllocator.makeAllocationDecision(ReplicaShardAllocator.java:163) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.BaseGatewayShardAllocator.allocateUnassigned(BaseGatewayShardAllocator.java:57) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayAllocator.innerAllocatedUnassigned(GatewayAllocator.java:156) ~[elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.gateway.GatewayAllocator.allocateUnassigned(GatewayAllocator.java:143) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.allocateExistingUnassignedShards(AllocationService.java:456) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:428) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:396) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.disassociateDeadNodes(AllocationService.java:258) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.NodeRemovalClusterStateTaskExecutor.getTaskClusterTasksResult(NodeRemovalClusterStateTaskExecutor.java:97) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.NodeRemovalClusterStateTaskExecutor.execute(NodeRemovalClusterStateTaskExecutor.java:90) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.transport.NodeNotConnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300] Node not connected at org.elasticsearch.transport.ClusterConnectionManager.getConnection(ClusterConnectionManager.java:189) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:673) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:600) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
33 more [2022-11-07T00:57:42,163][WARN ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] failing [node-left[{onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} reason: disconnected]]: failed to commit cluster state version [163] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.access$500(Publication.java:42) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication$PublicationTarget.onFaultyNode(Publication.java:295) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.lambda$onFaultyNode$2(Publication.java:93) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?] 
at org.elasticsearch.cluster.coordination.Publication.onFaultyNode(Publication.java:93) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:70) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
19 more [2022-11-07T00:57:42,166][ERROR][o.e.c.c.Coordinator ] [onap-sdnrdb-master-0] unexpected failure during [node-left] org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.access$500(Publication.java:42) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication$PublicationTarget.onFaultyNode(Publication.java:295) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.lambda$onFaultyNode$2(Publication.java:93) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?] at org.elasticsearch.cluster.coordination.Publication.onFaultyNode(Publication.java:93) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:70) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3] ... 19 more [2022-11-07T00:57:42,166][WARN ][o.e.g.G.InternalReplicaShardAllocator] [onap-sdnrdb-master-0] [faultlog-v7][1]: failed to list shard for shard_store on node [mAOidzVSQ9ufjdIXDSZKFA] org.elasticsearch.action.FailedNodeException: Failed node [mAOidzVSQ9ufjdIXDSZKFA] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:226) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:147) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:201) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:603) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:187) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:81) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:51) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:155) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:83) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.indices.store.TransportNodesListShardStoreMetadata.list(TransportNodesListShardStoreMetadata.java:95) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.AsyncShardFetch.asyncFetch(AsyncShardFetch.java:294) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.AsyncShardFetch.fetchData(AsyncShardFetch.java:130) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayAllocator$InternalReplicaShardAllocator.fetchData(GatewayAllocator.java:269) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.ReplicaShardAllocator.makeAllocationDecision(ReplicaShardAllocator.java:163) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.BaseGatewayShardAllocator.allocateUnassigned(BaseGatewayShardAllocator.java:57) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayAllocator.innerAllocatedUnassigned(GatewayAllocator.java:156) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.gateway.GatewayAllocator.allocateUnassigned(GatewayAllocator.java:143) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.allocateExistingUnassignedShards(AllocationService.java:456) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:428) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:396) ~[elasticsearch-7.9.3.jar:7.9.3] at 
org.elasticsearch.cluster.routing.allocation.AllocationService.disassociateDeadNodes(AllocationService.java:258) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.NodeRemovalClusterStateTaskExecutor.getTaskClusterTasksResult(NodeRemovalClusterStateTaskExecutor.java:97) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.NodeRemovalClusterStateTaskExecutor.execute(NodeRemovalClusterStateTaskExecutor.java:90) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] Caused by: org.elasticsearch.transport.NodeNotConnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300] Node not connected at org.elasticsearch.transport.ClusterConnectionManager.getConnection(ClusterConnectionManager.java:189) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:673) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:600) ~[elasticsearch-7.9.3.jar:7.9.3] ... 
33 more [2022-11-07T00:57:42,262][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] master node changed {previous [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}], current []}, term: 2, version: 162, reason: becoming candidate: Publication.onCompletion(false) [2022-11-07T00:57:42,663][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, minimumTerm=2, optionalJoin=Optional[Join{term=3, lastAcceptedTerm=2, lastAcceptedVersion=162, sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr}}]} org.elasticsearch.transport.NodeNotConnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300] Node not connected at org.elasticsearch.transport.ClusterConnectionManager.getConnection(ClusterConnectionManager.java:189) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:673) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:600) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.sendJoinRequest(JoinHelper.java:296) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.sendJoinRequest(JoinHelper.java:224) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$2(JoinHelper.java:148) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
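The warnings and errors above show onap-sdnrdb-master-0 losing its transport connection to onap-sdnrdb-master-2 (10.233.66.236:9300): shard-store listings fail with NodeDisconnectedException / NodeNotConnectedException, the node-left cluster state cannot be committed because the remaining nodes do not form a quorum, and the node falls back to candidate before a new election starts. A quick way to see which master-eligible nodes are reachable, and which node each of them currently accepts as master, is to poll the HTTP API of every master pod. The sketch below is a minimal diagnostic aid, not part of the ONAP deployment; it assumes the standard Elasticsearch HTTP port 9200 is reachable on the pod IPs that appear in the log and that the Python requests package is available.

    import requests  # assumption: available in the diagnostic environment

    # Pod IPs taken from the log entries above; HTTP port 9200 is assumed.
    MASTERS = {
        "onap-sdnrdb-master-0": "10.233.69.148",
        "onap-sdnrdb-master-1": "10.233.71.126",
        "onap-sdnrdb-master-2": "10.233.66.236",
    }

    for name, ip in MASTERS.items():
        try:
            # _cluster/health reports status, node count and unassigned shard count.
            health = requests.get(f"http://{ip}:9200/_cluster/health", timeout=5).json()
            # _cat/master prints the node this instance currently believes is master.
            master = requests.get(f"http://{ip}:9200/_cat/master?h=node", timeout=5).text.strip()
            print(f"{name}: status={health['status']} nodes={health['number_of_nodes']} "
                  f"unassigned={health['unassigned_shards']} master={master or '-'}")
        except requests.RequestException as exc:
            # Mirrors the transport-layer symptom in the log: the node is unreachable.
            print(f"{name}: unreachable ({exc})")

With onap-sdnrdb-master-0 and onap-sdnrdb-master-1 still reachable, two of the three master-eligible nodes can form a quorum again, which is what the elected-as-master entries in this log go on to record.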
[2022-11-07T00:57:42,667][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, minimumTerm=2, optionalJoin=Optional[Join{term=3, lastAcceptedTerm=2, lastAcceptedVersion=162, sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr}}]} org.elasticsearch.transport.NodeNotConnectedException: [onap-sdnrdb-master-2][10.233.66.236:9300] Node not connected at org.elasticsearch.transport.ClusterConnectionManager.getConnection(ClusterConnectionManager.java:189) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:673) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:600) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.sendJoinRequest(JoinHelper.java:296) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.sendJoinRequest(JoinHelper.java:224) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$2(JoinHelper.java:148) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2022-11-07T00:57:43,106][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, minimumTerm=3, optionalJoin=Optional[Join{term=4, lastAcceptedTerm=2, lastAcceptedVersion=162, sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.71.126:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 4 does not match current term 5 at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:225) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:1013) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.Optional.ifPresent(Optional.java:183) ~[?:?] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
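The CoordinationStateRejectedException entries here ("incoming term 4 does not match current term 5", and the "received a newer join" rejection just below) are the usual noise of concurrent election attempts: each candidate bumps its term independently, so join requests carrying an older term are refused until one candidate wins with a term the others accept, which happens at term 7 a few entries later. If an election keeps flapping instead of converging, a simple check is to poll _cat/master until the answer stops changing. A rough sketch under the same assumptions as above (HTTP on port 9200, requests available); the poll count and interval are arbitrary illustrative values.

    import time
    import requests  # assumption: available in the diagnostic environment

    ES = "http://10.233.69.148:9200"  # onap-sdnrdb-master-0, per the log; adjust as needed

    def wait_for_stable_master(es=ES, stable_polls=3, interval=2.0):
        """Poll _cat/master until the same master is reported several times in a row."""
        last, streak = None, 0
        while streak < stable_polls:
            try:
                current = requests.get(f"{es}/_cat/master?h=node", timeout=5).text.strip()
            except requests.RequestException:
                current = None  # the queried node may itself be mid-election or unreachable
            streak = streak + 1 if current and current == last else (1 if current else 0)
            last = current
            time.sleep(interval)
        return last

    print("master settled on:", wait_for_stable_master())

In this log the race resolves on its own: onap-sdnrdb-master-0 is elected at term 7 with onap-sdnrdb-master-1's vote, while onap-sdnrdb-master-2 only rejects the stale join attempts.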
[2022-11-07T00:57:43,449][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, minimumTerm=5, optionalJoin=Optional[Join{term=6, lastAcceptedTerm=2, lastAcceptedVersion=162, sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}}]} org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.69.148:9300][internal:cluster/coordination/join] Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr} at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] 
[2022-11-07T00:57:43,536][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr} elect leader, {onap-sdnrdb-master-1}{5IjPjXA-SU2jihsiJcxfHg}{5IohPh_oQemkdElcxjnJBw}{10.233.71.126}{10.233.71.126:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 163, delta: master node changed {previous [], current [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]}
[2022-11-07T00:57:44,514][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, minimumTerm=5, optionalJoin=Optional[Join{term=5, lastAcceptedTerm=2, lastAcceptedVersion=162, sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 5 does not match current term 7
	at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:225) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:1013) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
	at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2022-11-07T00:57:44,562][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, minimumTerm=4, optionalJoin=Optional[Join{term=5, lastAcceptedTerm=2, lastAcceptedVersion=162, sourceNode={onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.66.236:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.NotMasterException: Node [{onap-sdnrdb-master-2}{mAOidzVSQ9ufjdIXDSZKFA}{RR2M8IetSna0cCsh-9W00A}{10.233.66.236}{10.233.66.236:9300}{dmr}] not master for join request
[2022-11-07T00:57:44,571][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] master node changed {previous [], current [{onap-sdnrdb-master-0}{puZNMpXyRLegCRgVCl88YQ}{y02WwNW9QeSuyoUWalkNXA}{10.233.69.148}{10.233.69.148:9300}{dmr}]}, term: 7, version: 163, reason: Publication{term=7, version=163}
[2022-11-07T00:57:49,302][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r} reason: disconnected], term: 7, version: 165, delta: removed {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r}}
[2022-11-07T00:57:49,343][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r}}, term: 7, version: 165, reason: Publication{term=7, version=165}
[2022-11-07T00:57:51,669][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r} join existing leader], term: 7, version: 166, delta: added {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r}}
[2022-11-07T00:57:51,903][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-coordinating-only-7c4fc6d7fd-2nv7n}{wLLB_dBMQM25VWHpMsaZBw}{U0mAtuBRSWOyh-8jL829Zg}{10.233.66.15}{10.233.66.15:9300}{r}}, term: 7, version: 166, reason: Publication{term=7, version=166}
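Note: after the election at term 7 the remaining entries are routine. onap-sdnrdb-master-2's stale joins are rejected (term mismatch, then NotMasterException), the new master publishes cluster state version 163, and the coordinating-only pod briefly drops and rejoins (versions 165 and 166), which is a plausible transient while the Kubernetes pods and services settle. To confirm from the node itself that the elected master and all expected members are visible, the _cat APIs can be queried. Again a minimal sketch, with the same localhost:9200 / no-auth assumptions as the previous example.

import urllib.request

ES = "http://localhost:9200"  # assumption: same plain-HTTP endpoint as in the sketch above

def cat(path):
    """Return a _cat API response as plain text (these APIs are line-oriented, not JSON)."""
    with urllib.request.urlopen(ES + path, timeout=30) as resp:
        return resp.read().decode("utf-8")

# Which node currently holds the master role (per the log above, onap-sdnrdb-master-0).
print(cat("/_cat/master?v"))
# All cluster members with their roles; '*' in the master column marks the elected master.
print(cat("/_cat/nodes?v&h=name,ip,node.role,master"))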