Results

By type

           23:08:22.89 
 23:08:22.89 Welcome to the Bitnami elasticsearch container
 23:08:22.90 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
 23:08:22.90 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
 23:08:22.99 
 23:08:22.99 INFO  ==> ** Starting Elasticsearch setup **
 23:08:23.20 INFO  ==> Configuring/Initializing Elasticsearch...
 23:08:23.49 INFO  ==> Setting default configuration
 23:08:23.59 INFO  ==> Configuring Elasticsearch cluster settings...
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
 23:08:42.20 INFO  ==> ** Elasticsearch setup finished! **

 23:08:42.38 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2023-03-21T23:09:35,189][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-0] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/5.4.0-96-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2023-03-21T23:09:35,192][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-0] JVM home [/opt/bitnami/java]
[2023-03-21T23:09:35,192][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-0] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-1176925723883212929, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2023-03-21T23:10:01,888][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [aggs-matrix-stats]
[2023-03-21T23:10:01,890][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [analysis-common]
[2023-03-21T23:10:01,891][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [geo]
[2023-03-21T23:10:01,891][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [ingest-common]
[2023-03-21T23:10:01,892][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [ingest-geoip]
[2023-03-21T23:10:01,893][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [ingest-user-agent]
[2023-03-21T23:10:01,893][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [kibana]
[2023-03-21T23:10:01,985][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [lang-expression]
[2023-03-21T23:10:01,986][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [lang-mustache]
[2023-03-21T23:10:01,987][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [lang-painless]
[2023-03-21T23:10:01,988][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [mapper-extras]
[2023-03-21T23:10:01,988][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [parent-join]
[2023-03-21T23:10:01,989][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [percolator]
[2023-03-21T23:10:01,990][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [rank-eval]
[2023-03-21T23:10:01,991][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [reindex]
[2023-03-21T23:10:01,991][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [repository-url]
[2023-03-21T23:10:01,992][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [tasks]
[2023-03-21T23:10:01,993][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded module [transport-netty4]
[2023-03-21T23:10:02,085][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-0] loaded plugin [repository-s3]
[2023-03-21T23:10:03,289][INFO ][o.e.e.NodeEnvironment    ] [onap-sdnrdb-master-0] using [1] data paths, mounts [[/bitnami/elasticsearch/data (192.168.13.15:/dockerdata-nfs/onap/elastic-master-1)]], net usable_space [95.4gb], net total_space [99.9gb], types [nfs4]
[2023-03-21T23:10:03,290][INFO ][o.e.e.NodeEnvironment    ] [onap-sdnrdb-master-0] heap size [123.7mb], compressed ordinary object pointers [true]
[2023-03-21T23:10:04,287][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-0] node name [onap-sdnrdb-master-0], node ID [N6eyFCXXTuW4WlGl5QQjvw], cluster name [sdnrdb-cluster]
[2023-03-21T23:11:13,097][INFO ][o.e.t.NettyAllocator     ] [onap-sdnrdb-master-0] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2023-03-21T23:11:15,085][INFO ][o.e.d.DiscoveryModule    ] [onap-sdnrdb-master-0] using discovery type [zen] and seed hosts providers [settings]
[2023-03-21T23:11:20,093][WARN ][o.e.g.DanglingIndicesState] [onap-sdnrdb-master-0] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2023-03-21T23:11:21,597][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-0] initialized
[2023-03-21T23:11:21,598][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-0] starting ...
[2023-03-21T23:11:22,884][INFO ][o.e.m.j.JvmGcMonitorService] [onap-sdnrdb-master-0] [gc][1] overhead, spent [298ms] collecting in the last [1s]
[2023-03-21T23:11:24,585][INFO ][o.e.t.TransportService   ] [onap-sdnrdb-master-0] publish_address {10.233.65.89:9300}, bound_addresses {0.0.0.0:9300}
[2023-03-21T23:11:26,286][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.65.89:9300, remoteAddress=/127.0.0.6:53681}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:11:26,988][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.65.89:9300, remoteAddress=/127.0.0.6:46111}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
[2023-03-21T23:11:27,987][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.65.89:9300, remoteAddress=/127.0.0.6:46365}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
[2023-03-21T23:11:28,986][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.65.89:9300, remoteAddress=/127.0.0.6:55161}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
[2023-03-21T23:11:29,985][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.65.89:9300, remoteAddress=/127.0.0.6:50319}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
[2023-03-21T23:11:30,988][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.65.89:9300, remoteAddress=/127.0.0.6:57189}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:11:31,097][INFO ][o.e.b.BootstrapChecks    ] [onap-sdnrdb-master-0] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2023-03-21T23:11:34,189][WARN ][o.e.m.j.JvmGcMonitorService] [onap-sdnrdb-master-0] [gc][12] overhead, spent [609ms] collecting in the last [1s]
[2023-03-21T23:11:41,498][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [onap-sdnrdb-master-0, onap-sdnrdb-master-1, onap-sdnrdb-master-2] to bootstrap a cluster: have discovered [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}]; discovery will continue using [10.233.65.19:9300, 10.233.67.245:9300, 10.233.74.100:9300] from hosts providers and [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2023-03-21T23:11:51,503][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [onap-sdnrdb-master-0, onap-sdnrdb-master-1, onap-sdnrdb-master-2] to bootstrap a cluster: have discovered [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}]; discovery will continue using [10.233.65.19:9300, 10.233.67.245:9300, 10.233.74.100:9300] from hosts providers and [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2023-03-21T23:12:01,511][WARN ][o.e.c.c.ClusterFormationFailureHelper] [onap-sdnrdb-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [onap-sdnrdb-master-0, onap-sdnrdb-master-1, onap-sdnrdb-master-2] to bootstrap a cluster: have discovered [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}]; discovery will continue using [10.233.65.19:9300, 10.233.67.245:9300, 10.233.74.100:9300] from hosts providers and [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2023-03-21T23:12:03,324][INFO ][o.e.c.c.Coordinator      ] [onap-sdnrdb-master-0] setting initial configuration to VotingConfiguration{{bootstrap-placeholder}-onap-sdnrdb-master-2,k-BO_lN-Q-GOcXFuIDfi6A,N6eyFCXXTuW4WlGl5QQjvw}
[2023-03-21T23:12:07,661][INFO ][o.e.c.c.JoinHelper       ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.65.89:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}
	at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:12:08,787][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} elect leader, {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 1, delta: master node changed {previous [], current [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-21T23:12:08,793][WARN ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] failing [elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} elect leader, {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]: failed to commit cluster state version [1]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 3 while handling publication
	at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:12:08,887][INFO ][o.e.c.c.JoinHelper       ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, minimumTerm=1, optionalJoin=Optional[Join{term=2, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.65.89:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 3 while handling publication
	at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:12:09,962][INFO ][o.e.c.c.JoinHelper       ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, minimumTerm=4, optionalJoin=Optional[Join{term=5, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.65.89:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}
	at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:12:10,115][INFO ][o.e.c.c.JoinHelper       ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, minimumTerm=3, optionalJoin=Optional[Join{term=4, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.67.245:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 4 does not match current term 5
	at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:225) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:1013) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
	at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.ClusterConnectionManager.connectToNode(ClusterConnectionManager.java:120) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:378) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:12:10,565][INFO ][o.e.c.c.JoinHelper       ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, minimumTerm=6, optionalJoin=Optional[Join{term=7, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.65.89:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}
	at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:12:11,157][INFO ][o.e.c.c.JoinHelper       ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, minimumTerm=5, optionalJoin=Optional[Join{term=6, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, targetNode={onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-2][10.233.74.100:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 7 while handling publication
	at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:12:11,222][INFO ][o.e.c.c.JoinHelper       ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, minimumTerm=2, optionalJoin=Optional[Join{term=3, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.67.245:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 5 while handling publication
	at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-03-21T23:12:11,390][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] elected-as-master ([3] nodes joined)[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} elect leader, {onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} elect leader, {onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 8, version: 1, delta: master node changed {previous [], current [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr},{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-21T23:12:13,284][INFO ][o.e.c.c.CoordinationState] [onap-sdnrdb-master-0] cluster UUID set to [mZ3guWi3Ry21Cjd8fvBjqQ]
[2023-03-21T23:12:14,163][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] master node changed {previous [], current [{onap-sdnrdb-master-0}{N6eyFCXXTuW4WlGl5QQjvw}{Ea13mX_IQKOjfEsXBIMCow}{10.233.65.89}{10.233.65.89:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr},{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 1, reason: Publication{term=8, version=1}
[2023-03-21T23:12:14,591][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader, {onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r} join existing leader, {onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} join existing leader], term: 8, version: 2, delta: added {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}
[2023-03-21T23:12:14,593][INFO ][o.e.h.AbstractHttpServerTransport] [onap-sdnrdb-master-0] publish_address {10.233.65.89:9200}, bound_addresses {0.0.0.0:9200}
[2023-03-21T23:12:14,593][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-0] started
[2023-03-21T23:12:15,410][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}, term: 8, version: 2, reason: Publication{term=8, version=2}
[2023-03-21T23:12:16,689][INFO ][o.e.g.GatewayService     ] [onap-sdnrdb-master-0] recovered [0] indices into cluster_state
[2023-03-21T23:12:26,997][INFO ][o.e.c.s.ClusterSettings  ] [onap-sdnrdb-master-0] updating [action.auto_create_index] from [true] to [false]
[2023-03-21T23:12:30,996][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [maintenancemode-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:12:48,887][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [connectionlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:12:57,591][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [faultlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:13:04,995][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [guicutthrough-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:13:13,193][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [userdata-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:13:22,088][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [faultcurrent-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:13:28,284][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [cmlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:13:35,386][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [inventoryequipment-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:13:43,145][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [mediator-server-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:13:49,290][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [networkelement-connection-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:13:55,307][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [historicalperformance15min-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:14:01,411][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [historicalperformance24h-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:14:09,585][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [eventlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-03-21T23:14:16,239][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[eventlog-v7][4], [eventlog-v7][2], [eventlog-v7][1], [eventlog-v7][0]]]).
[2023-03-22T00:12:05,894][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected], term: 8, version: 92, delta: removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T00:12:10,613][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 92, reason: Publication{term=8, version=92}
[2023-03-22T00:12:11,297][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [networkelement-connection-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:12:11,485][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [eventlog-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:12:11,616][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [cmlog-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:12:11,698][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-0] scheduling reroute for delayed shards in [53.9s] (43 delayed shards)
[2023-03-22T00:12:11,787][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader], term: 8, version: 93, delta: added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T00:12:11,989][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [maintenancemode-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:12:12,189][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [guicutthrough-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:12:12,265][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 93, reason: Publication{term=8, version=93}
[2023-03-22T00:12:12,992][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected, {onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r} reason: disconnected], term: 8, version: 94, delta: removed {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r},{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T00:12:13,420][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r},{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 94, reason: Publication{term=8, version=94}
[2023-03-22T00:12:13,426][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader], term: 8, version: 95, delta: added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T00:12:13,817][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 95, reason: Publication{term=8, version=95}
[2023-03-22T00:12:13,894][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected], term: 8, version: 96, delta: removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T00:12:14,311][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 96, reason: Publication{term=8, version=96}
[2023-03-22T00:12:14,848][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader], term: 8, version: 97, delta: added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T00:12:16,852][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 97, reason: Publication{term=8, version=97}
[2023-03-22T00:12:16,860][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r} join existing leader], term: 8, version: 98, delta: added {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}
[2023-03-22T00:12:17,215][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}, term: 8, version: 98, reason: Publication{term=8, version=98}
[2023-03-22T00:13:16,485][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[maintenancemode-v7][3]]]).
[2023-03-22T00:14:13,786][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} reason: disconnected], term: 8, version: 164, delta: removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T00:14:20,064][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 164, reason: Publication{term=8, version=164}
[2023-03-22T00:14:20,217][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [historicalperformance15min-v7][2] primary-replica resync completed with 0 operations
[2023-03-22T00:14:20,396][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [faultcurrent-v7][4] primary-replica resync completed with 0 operations
[2023-03-22T00:14:20,505][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [faultcurrent-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:14:20,627][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [eventlog-v7][0] primary-replica resync completed with 0 operations
[2023-03-22T00:14:20,791][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [userdata-v7][2] primary-replica resync completed with 0 operations
[2023-03-22T00:14:20,896][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [inventoryequipment-v7][2] primary-replica resync completed with 0 operations
[2023-03-22T00:14:21,010][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [networkelement-connection-v7][0] primary-replica resync completed with 0 operations
[2023-03-22T00:14:21,116][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [mediator-server-v7][4] primary-replica resync completed with 0 operations
[2023-03-22T00:14:21,212][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [mediator-server-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:14:21,386][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [cmlog-v7][0] primary-replica resync completed with 0 operations
[2023-03-22T00:14:21,390][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [connectionlog-v7][2] primary-replica resync completed with 0 operations
[2023-03-22T00:14:21,790][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [faultlog-v7][4] primary-replica resync completed with 0 operations
[2023-03-22T00:14:21,912][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [faultlog-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:14:21,994][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [historicalperformance24h-v7][4] primary-replica resync completed with 0 operations
[2023-03-22T00:14:22,085][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-0] scheduling reroute for delayed shards in [51.6s] (44 delayed shards)
[2023-03-22T00:14:22,089][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [historicalperformance24h-v7][1] primary-replica resync completed with 0 operations
[2023-03-22T00:14:22,090][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} join existing leader], term: 8, version: 165, delta: added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T00:14:22,192][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [maintenancemode-v7][0] primary-replica resync completed with 0 operations
[2023-03-22T00:14:22,304][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-0] [guicutthrough-v7][0] primary-replica resync completed with 0 operations
[2023-03-22T00:14:22,631][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 165, reason: Publication{term=8, version=165}
[2023-03-22T00:14:22,722][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} reason: disconnected], term: 8, version: 166, delta: removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T00:14:23,109][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 166, reason: Publication{term=8, version=166}
[2023-03-22T00:14:23,411][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} join existing leader], term: 8, version: 167, delta: added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T00:14:26,042][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 167, reason: Publication{term=8, version=167}
[2023-03-22T00:15:21,261][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[maintenancemode-v7][2]]]).
[2023-03-22T01:12:14,787][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected], term: 8, version: 237, delta: removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T01:12:18,528][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 237, reason: Publication{term=8, version=237}
[2023-03-22T01:12:18,533][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-0] scheduling reroute for delayed shards in [56.2s] (43 delayed shards)
[2023-03-22T01:12:18,585][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r} reason: disconnected], term: 8, version: 238, delta: removed {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}
[2023-03-22T01:12:18,808][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}, term: 8, version: 238, reason: Publication{term=8, version=238}
[2023-03-22T01:12:18,812][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader], term: 8, version: 239, delta: added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T01:12:18,957][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 239, reason: Publication{term=8, version=239}
[2023-03-22T01:12:18,998][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected], term: 8, version: 240, delta: removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T01:12:19,596][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 240, reason: Publication{term=8, version=240}
[2023-03-22T01:12:19,602][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader], term: 8, version: 241, delta: added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T01:12:20,035][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 241, reason: Publication{term=8, version=241}
[2023-03-22T01:12:20,187][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected], term: 8, version: 242, delta: removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T01:12:20,350][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 242, reason: Publication{term=8, version=242}
[2023-03-22T01:12:20,355][WARN ][o.e.c.c.Coordinator      ] [onap-sdnrdb-master-0] failed to validate incoming join request from node [{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}]
org.elasticsearch.transport.NodeDisconnectedException: [onap-sdnrdb-master-2][10.233.74.100:9300][internal:cluster/coordination/join/validate] disconnected
[2023-03-22T01:12:20,994][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r} join existing leader], term: 8, version: 243, delta: added {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}
[2023-03-22T01:12:21,119][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}, term: 8, version: 243, reason: Publication{term=8, version=243}
[2023-03-22T01:12:21,392][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader], term: 8, version: 244, delta: added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T01:12:23,281][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 244, reason: Publication{term=8, version=244}
[2023-03-22T01:13:18,209][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[maintenancemode-v7][3]]]).
[2023-03-22T01:14:23,232][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} reason: disconnected], term: 8, version: 311, delta: removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T01:14:26,657][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 311, reason: Publication{term=8, version=311}
[2023-03-22T01:14:26,660][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-0] scheduling reroute for delayed shards in [56.5s] (44 delayed shards)
[2023-03-22T01:14:26,663][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} join existing leader], term: 8, version: 312, delta: added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T01:14:26,946][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 312, reason: Publication{term=8, version=312}
[2023-03-22T01:14:27,033][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} reason: disconnected], term: 8, version: 313, delta: removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T01:14:27,248][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 313, reason: Publication{term=8, version=313}
[2023-03-22T01:14:27,813][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} join existing leader], term: 8, version: 314, delta: added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T01:14:29,748][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 314, reason: Publication{term=8, version=314}
[2023-03-22T01:15:14,312][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[maintenancemode-v7][2]]]).
[2023-03-22T02:12:20,974][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r} reason: disconnected], term: 8, version: 385, delta: removed {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}
[2023-03-22T02:12:21,102][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r}}, term: 8, version: 385, reason: Publication{term=8, version=385}
[2023-03-22T02:12:21,385][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected], term: 8, version: 386, delta: removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T02:12:25,030][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 386, reason: Publication{term=8, version=386}
[2023-03-22T02:12:25,035][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-0] scheduling reroute for delayed shards in [56.3s] (43 delayed shards)
[2023-03-22T02:12:25,038][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader, {onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r} join existing leader], term: 8, version: 387, delta: added {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r},{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T02:12:25,312][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-coordinating-only-54c85fc557-4wjdk}{5fTGmkZWTPOvll9So6ygJA}{pXZvGnhJSueu_2GYOLoYOA}{10.233.65.19}{10.233.65.19:9300}{r},{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 387, reason: Publication{term=8, version=387}
[2023-03-22T02:12:25,396][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected], term: 8, version: 388, delta: removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T02:12:25,613][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 388, reason: Publication{term=8, version=388}
[2023-03-22T02:12:25,617][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader], term: 8, version: 389, delta: added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T02:12:25,808][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 389, reason: Publication{term=8, version=389}
[2023-03-22T02:12:25,889][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} reason: disconnected], term: 8, version: 390, delta: removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T02:12:26,186][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 390, reason: Publication{term=8, version=390}
[2023-03-22T02:12:26,885][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr} join existing leader], term: 8, version: 391, delta: added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}
[2023-03-22T02:12:28,468][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{yYZvPog9RrW9eiVaMUpFAA}{UMXz6q_SST6NRybdVeRuhg}{10.233.74.100}{10.233.74.100:9300}{dmr}}, term: 8, version: 391, reason: Publication{term=8, version=391}
[2023-03-22T02:13:09,281][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[maintenancemode-v7][4]]]).
[2023-03-22T02:14:27,789][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-left[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} reason: disconnected], term: 8, version: 458, delta: removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T02:14:30,354][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] removed {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 458, reason: Publication{term=8, version=458}
[2023-03-22T02:14:30,361][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-0] scheduling reroute for delayed shards in [57.4s] (44 delayed shards)
[2023-03-22T02:14:30,723][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr} join existing leader], term: 8, version: 459, delta: added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}
[2023-03-22T02:14:32,243][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-1}{k-BO_lN-Q-GOcXFuIDfi6A}{o5WCi3RZRJSmVtcrZyjnGw}{10.233.67.245}{10.233.67.245:9300}{dmr}}, term: 8, version: 459, reason: Publication{term=8, version=459}
[2023-03-22T02:15:14,108][INFO ][o.e.m.j.JvmGcMonitorService] [onap-sdnrdb-master-0] [gc][11024] overhead, spent [408ms] collecting in the last [1s]
[2023-03-22T02:15:16,276][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[maintenancemode-v7][3]]]).