Results

           00:10:51.47 
 00:10:51.55 Welcome to the Bitnami elasticsearch container
 00:10:51.56 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
 00:10:51.57 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
 00:10:51.66 
 00:10:51.66 INFO  ==> ** Starting Elasticsearch setup **
 00:10:52.15 INFO  ==> Configuring/Initializing Elasticsearch...
 00:10:52.62 INFO  ==> Setting default configuration
 00:10:52.86 INFO  ==> Configuring Elasticsearch cluster settings...
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
 00:11:23.96 INFO  ==> ** Elasticsearch setup finished! **

 00:11:24.25 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2023-02-14T00:12:27,862][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-2] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/5.4.0-96-generic/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2023-02-14T00:12:28,053][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-2] JVM home [/opt/bitnami/java]
[2023-02-14T00:12:28,055][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-2] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-12570544946954651720, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2023-02-14T00:12:56,059][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [aggs-matrix-stats]
[2023-02-14T00:12:56,060][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [analysis-common]
[2023-02-14T00:12:56,153][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [geo]
[2023-02-14T00:12:56,154][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [ingest-common]
[2023-02-14T00:12:56,155][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [ingest-geoip]
[2023-02-14T00:12:56,155][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [ingest-user-agent]
[2023-02-14T00:12:56,156][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [kibana]
[2023-02-14T00:12:56,254][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [lang-expression]
[2023-02-14T00:12:56,255][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [lang-mustache]
[2023-02-14T00:12:56,255][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [lang-painless]
[2023-02-14T00:12:56,256][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [mapper-extras]
[2023-02-14T00:12:56,257][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [parent-join]
[2023-02-14T00:12:56,257][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [percolator]
[2023-02-14T00:12:56,258][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [rank-eval]
[2023-02-14T00:12:56,258][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [reindex]
[2023-02-14T00:12:56,259][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [repository-url]
[2023-02-14T00:12:56,259][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [tasks]
[2023-02-14T00:12:56,260][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded module [transport-netty4]
[2023-02-14T00:12:56,262][INFO ][o.e.p.PluginsService     ] [onap-sdnrdb-master-2] loaded plugin [repository-s3]
[2023-02-14T00:12:57,465][INFO ][o.e.e.NodeEnvironment    ] [onap-sdnrdb-master-2] using [1] data paths, mounts [[/bitnami/elasticsearch/data (192.168.13.23:/dockerdata-nfs/onap/elastic-master-0)]], net usable_space [95.4gb], net total_space [99.9gb], types [nfs4]
[2023-02-14T00:12:57,467][INFO ][o.e.e.NodeEnvironment    ] [onap-sdnrdb-master-2] heap size [123.7mb], compressed ordinary object pointers [true]
[2023-02-14T00:12:58,558][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-2] node name [onap-sdnrdb-master-2], node ID [Wi7M45qCTli5JXbVLP9XOQ], cluster name [sdnrdb-cluster]
[2023-02-14T00:14:20,955][INFO ][o.e.t.NettyAllocator     ] [onap-sdnrdb-master-2] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2023-02-14T00:14:22,463][INFO ][o.e.d.DiscoveryModule    ] [onap-sdnrdb-master-2] using discovery type [zen] and seed hosts providers [settings]
[2023-02-14T00:14:28,264][WARN ][o.e.g.DanglingIndicesState] [onap-sdnrdb-master-2] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2023-02-14T00:14:31,266][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-2] initialized
[2023-02-14T00:14:31,267][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-2] starting ...
[2023-02-14T00:14:32,755][INFO ][o.e.m.j.JvmGcMonitorService] [onap-sdnrdb-master-2] [gc][1] overhead, spent [399ms] collecting in the last [1.3s]
[2023-02-14T00:14:35,059][INFO ][o.e.t.TransportService   ] [onap-sdnrdb-master-2] publish_address {10.233.68.200:9300}, bound_addresses {0.0.0.0:9300}
[2023-02-14T00:14:36,158][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-2] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.68.200:9300, remoteAddress=/127.0.0.6:47267}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-02-14T00:14:36,360][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-2] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.68.200:9300, remoteAddress=/127.0.0.6:53969}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-02-14T00:14:36,955][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-2] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.68.200:9300, remoteAddress=/127.0.0.6:54723}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-02-14T00:14:37,056][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-2] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.68.200:9300, remoteAddress=/127.0.0.6:56975}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-02-14T00:14:37,956][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-2] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.68.200:9300, remoteAddress=/127.0.0.6:54899}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2023-02-14T00:14:38,056][WARN ][o.e.t.TcpTransport       ] [onap-sdnrdb-master-2] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.68.200:9300, remoteAddress=/127.0.0.6:53249}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
	at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
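The six "transport not ready yet to handle incoming requests" warnings above are expected during startup: after "starting ..." the node binds port 9300, but peers (here arriving via the service-mesh sidecar at 127.0.0.6) connect before the TransportService has finished initializing, so the node closes those connections. They stop on their own once startup completes. A minimal readiness poll to confirm the node eventually came up, as a sketch, assuming the node's HTTP API at http://10.233.68.200:9200 is reachable from wherever the check runs (in-cluster, the Kubernetes service name would normally be used instead):

    import json, time, urllib.request

    def wait_for_node(url="http://10.233.68.200:9200/_cluster/health", timeout=300):
        """Poll /_cluster/health until the node answers HTTP, i.e. startup finished."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return json.load(resp)["status"]   # "green", "yellow" or "red"
            except OSError:
                time.sleep(5)                          # transport not ready yet; retry
        raise TimeoutError(f"node did not answer within {timeout}s")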
[2023-02-14T00:14:38,857][INFO ][o.e.b.BootstrapChecks    ] [onap-sdnrdb-master-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2023-02-14T00:14:43,958][INFO ][o.e.c.c.Coordinator      ] [onap-sdnrdb-master-2] setting initial configuration to VotingConfiguration{L-xoSpdkR0quk0892avHTg,{bootstrap-placeholder}-onap-sdnrdb-master-1,Wi7M45qCTli5JXbVLP9XOQ}
[2023-02-14T00:14:47,960][INFO ][o.e.m.j.JvmGcMonitorService] [onap-sdnrdb-master-2] [gc][14] overhead, spent [505ms] collecting in the last [1.1s]
[2023-02-14T00:14:49,258][INFO ][o.e.c.c.JoinHelper       ] [onap-sdnrdb-master-2] failed to join {onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.67.218:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:89) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.cancelActivePublication(Coordinator.java:1160) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.becomeCandidate(Coordinator.java:549) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:462) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$2(JoinHelper.java:148) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.ElasticsearchException: publication cancelled before committing: become candidate: joinLeaderInTerm
	at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:86) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.cancelActivePublication(Coordinator.java:1160) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.becomeCandidate(Coordinator.java:549) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:462) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$2(JoinHelper.java:148) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:263) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) ~[?:?]
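This failed join is a transient race during the first election: this node offered its join to onap-sdnrdb-master-0, which abandoned its own in-flight publication on seeing a competing term ("publication cancelled before committing: become candidate: joinLeaderInTerm"), and the very next entry shows onap-sdnrdb-master-2 winning the election for term 2 instead. A failed join immediately followed by a successful election needs no action. Which node ended up as master can be confirmed via the _cat/master API; a sketch against the same assumed endpoint as above:

    import urllib.request

    # _cat/master lists "id host ip node" for the currently elected master.
    with urllib.request.urlopen("http://10.233.68.200:9200/_cat/master?v") as resp:
        print(resp.read().decode())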
[2023-02-14T00:14:49,954][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr} elect leader, {onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 1, delta: master node changed {previous [], current [{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}]}, added {{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr}}
[2023-02-14T00:14:51,259][INFO ][o.e.c.c.CoordinationState] [onap-sdnrdb-master-2] cluster UUID set to [w_vJDwT9TO-AJnw5Mfo-_Q]
[2023-02-14T00:14:52,558][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [], current [{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}]}, added {{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr}}, term: 2, version: 1, reason: Publication{term=2, version=1}
[2023-02-14T00:14:53,157][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr} join existing leader, {onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r} join existing leader], term: 2, version: 2, delta: added {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}
[2023-02-14T00:14:53,554][INFO ][o.e.h.AbstractHttpServerTransport] [onap-sdnrdb-master-2] publish_address {10.233.68.200:9200}, bound_addresses {0.0.0.0:9200}
[2023-02-14T00:14:53,556][INFO ][o.e.n.Node               ] [onap-sdnrdb-master-2] started
[2023-02-14T00:14:54,058][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}, term: 2, version: 2, reason: Publication{term=2, version=2}
[2023-02-14T00:14:54,661][INFO ][o.e.g.GatewayService     ] [onap-sdnrdb-master-2] recovered [0] indices into cluster_state
[2023-02-14T00:14:59,360][INFO ][o.e.c.s.ClusterSettings  ] [onap-sdnrdb-master-2] updating [action.auto_create_index] from [true] to [false]
[2023-02-14T00:15:03,755][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [connectionlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:15:23,958][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [historicalperformance24h-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:15:35,262][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [mediator-server-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:15:47,356][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [eventlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:15:56,961][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [faultcurrent-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:16:09,276][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [guicutthrough-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:16:17,257][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [faultlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:16:26,158][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [cmlog-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:16:35,357][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [userdata-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:16:41,357][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [inventoryequipment-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:16:49,962][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr} join existing leader], term: 2, version: 66, delta: added {{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr}}
[2023-02-14T00:16:57,038][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr}}, term: 2, version: 66, reason: Publication{term=2, version=66}
[2023-02-14T00:16:57,358][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [networkelement-connection-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:17:07,657][INFO ][o.e.c.c.C.CoordinatorPublication] [onap-sdnrdb-master-2] after [9.8s] publication of cluster state version [67] is still waiting for {onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr} [SENT_APPLY_COMMIT]
[2023-02-14T00:17:44,058][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [historicalperformance15min-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:18:08,066][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-2] [maintenancemode-v7] creating index, cause [api], templates [], shards [5]/[1]
[2023-02-14T00:18:46,529][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[connectionlog-v7][4]]]).
[2023-02-14T01:14:50,313][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r} reason: disconnected], term: 2, version: 143, delta: removed {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}
[2023-02-14T01:14:50,524][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] removed {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}, term: 2, version: 143, reason: Publication{term=2, version=143}
[2023-02-14T01:14:52,968][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r} join existing leader], term: 2, version: 144, delta: added {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}
[2023-02-14T01:14:53,619][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}, term: 2, version: 144, reason: Publication{term=2, version=144}
[2023-02-14T01:16:47,166][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr} reason: disconnected], term: 2, version: 146, delta: removed {{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr}}
[2023-02-14T01:16:50,734][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] removed {{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr}}, term: 2, version: 146, reason: Publication{term=2, version=146}
[2023-02-14T01:16:51,461][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [userdata-v7][2] primary-replica resync completed with 0 operations
[2023-02-14T01:16:51,553][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [faultcurrent-v7][2] primary-replica resync completed with 0 operations
[2023-02-14T01:16:51,654][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [faultlog-v7][2] primary-replica resync completed with 0 operations
[2023-02-14T01:16:51,855][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [mediator-server-v7][2] primary-replica resync completed with 0 operations
[2023-02-14T01:16:51,968][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [maintenancemode-v7][3] primary-replica resync completed with 0 operations
[2023-02-14T01:16:52,064][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [maintenancemode-v7][1] primary-replica resync completed with 0 operations
[2023-02-14T01:16:52,262][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [historicalperformance15min-v7][3] primary-replica resync completed with 0 operations
[2023-02-14T01:16:52,358][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [historicalperformance15min-v7][1] primary-replica resync completed with 0 operations
[2023-02-14T01:16:52,458][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-2] scheduling reroute for delayed shards in [54.2s] (43 delayed shards)
[2023-02-14T01:16:52,459][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [networkelement-connection-v7][1] primary-replica resync completed with 0 operations
[2023-02-14T01:16:52,461][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [networkelement-connection-v7][3] primary-replica resync completed with 0 operations
[2023-02-14T01:16:53,061][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr} reason: disconnected], term: 2, version: 147, delta: removed {{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr}}
[2023-02-14T01:16:53,164][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [connectionlog-v7][2] primary-replica resync completed with 0 operations
[2023-02-14T01:16:53,356][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}], current []}, term: 2, version: 146, reason: becoming candidate: Publication.onCompletion(false)
[2023-02-14T01:16:53,455][WARN ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] failing [node-left[{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr} reason: disconnected]]: failed to commit cluster state version [147]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3]
	... 14 more
[2023-02-14T01:16:53,462][ERROR][o.e.c.c.Coordinator      ] [onap-sdnrdb-master-2] unexpected failure during [node-left]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3]
	... 14 more
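
Note: the "non-failed nodes do not form a quorum" cause in the traces above means that, at publication time, the acting master could not reach a majority of the master-eligible nodes in the last committed voting configuration, so the node-left cluster-state update was rolled back and the node fell back to candidate before the re-election logged next. A minimal sketch for inspecting the committed voting configuration and visible master-eligible nodes after such an event; the endpoint (localhost:9200) and the absence of authentication are assumptions for illustration, not taken from this log:

    # Sketch: inspect voting configuration and master-eligible nodes after a
    # "non-failed nodes do not form a quorum" publication failure.
    # Host/port and lack of auth are assumptions for illustration only.
    import json
    import urllib.request

    ES = "http://localhost:9200"  # hypothetical endpoint

    def get(path):
        with urllib.request.urlopen(ES + path) as resp:
            return json.load(resp)

    # Last committed/accepted voting configuration (node IDs that must form a quorum).
    coordination = get("/_cluster/state/metadata?filter_path=metadata.cluster_coordination")
    print(json.dumps(coordination, indent=2))

    # Master-eligible nodes currently visible to this node ('*' marks the elected master).
    for n in get("/_cat/nodes?format=json&h=name,ip,node.role,master"):
        print(n["name"], n["ip"], n["node.role"], n["master"])
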
[2023-02-14T01:16:53,870][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr} elect leader, {onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 147, delta: master node changed {previous [], current [{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}]}
[2023-02-14T01:16:55,131][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [], current [{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}]}, term: 3, version: 147, reason: Publication{term=3, version=147}
[2023-02-14T01:16:55,158][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr} join existing leader], term: 3, version: 148, delta: added {{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr}}
[2023-02-14T01:16:58,095][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr}}, term: 3, version: 148, reason: Publication{term=3, version=148}
[2023-02-14T01:18:17,756][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[connectionlog-v7][3]]]).
[2023-02-14T02:14:52,880][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r} reason: disconnected], term: 3, version: 217, delta: removed {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}
[2023-02-14T02:14:53,091][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] removed {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}, term: 3, version: 217, reason: Publication{term=3, version=217}
[2023-02-14T02:14:55,754][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r} join existing leader], term: 3, version: 218, delta: added {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}
[2023-02-14T02:14:56,195][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-coordinating-only-54c85fc557-4ffjw}{hPA807HGQnuDqhK-aHABlg}{PLDbn1YHRva4zPPW8ye0tg}{10.233.68.236}{10.233.68.236:9300}{r}}, term: 3, version: 218, reason: Publication{term=3, version=218}
[2023-02-14T02:16:50,964][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr} reason: disconnected], term: 3, version: 220, delta: removed {{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr}}
[2023-02-14T02:16:53,592][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] removed {{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr}}, term: 3, version: 220, reason: Publication{term=3, version=220}
[2023-02-14T02:16:54,460][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [cmlog-v7][1] primary-replica resync completed with 0 operations
[2023-02-14T02:16:54,462][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [faultcurrent-v7][0] primary-replica resync completed with 0 operations
[2023-02-14T02:16:54,662][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [inventoryequipment-v7][1] primary-replica resync completed with 0 operations
[2023-02-14T02:16:54,864][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [faultlog-v7][0] primary-replica resync completed with 0 operations
[2023-02-14T02:16:54,965][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [mediator-server-v7][0] primary-replica resync completed with 0 operations
[2023-02-14T02:16:55,058][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [historicalperformance15min-v7][4] primary-replica resync completed with 0 operations
[2023-02-14T02:16:55,171][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [guicutthrough-v7][1] primary-replica resync completed with 0 operations
[2023-02-14T02:16:55,259][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [historicalperformance24h-v7][1] primary-replica resync completed with 0 operations
[2023-02-14T02:16:55,463][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [connectionlog-v7][0] primary-replica resync completed with 0 operations
[2023-02-14T02:16:55,663][INFO ][o.e.c.r.DelayedAllocationService] [onap-sdnrdb-master-2] scheduling reroute for delayed shards in [55.2s] (44 delayed shards)
[2023-02-14T02:16:55,958][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [eventlog-v7][1] primary-replica resync completed with 0 operations
[2023-02-14T02:16:56,054][INFO ][o.e.i.s.IndexShard       ] [onap-sdnrdb-master-2] [userdata-v7][0] primary-replica resync completed with 0 operations
[2023-02-14T02:16:56,154][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-left[{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr} reason: disconnected], term: 3, version: 221, delta: removed {{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr}}
[2023-02-14T02:16:56,160][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}], current []}, term: 3, version: 220, reason: becoming candidate: Publication.onCompletion(false)
[2023-02-14T02:16:56,159][WARN ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] failing [node-left[{onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr} reason: disconnected]]: failed to commit cluster state version [221]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3]
	... 14 more
[2023-02-14T02:16:56,261][ERROR][o.e.c.c.Coordinator      ] [onap-sdnrdb-master-2] unexpected failure during [node-left]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1467) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:226) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1390) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Publication.start(Publication.java:72) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1115) ~[elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: non-failed nodes do not form a quorum
	at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:171) ~[elasticsearch-7.9.3.jar:7.9.3]
	... 14 more
[2023-02-14T02:16:56,955][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr} elect leader, {onap-sdnrdb-master-1}{XqjBOtnbQ4iuXNmAnHX9Iw}{464BuBFXRJa5EpWFlzrJFg}{10.233.69.110}{10.233.69.110:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 4, version: 221, delta: master node changed {previous [], current [{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}]}
[2023-02-14T02:16:57,546][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] master node changed {previous [], current [{onap-sdnrdb-master-2}{Wi7M45qCTli5JXbVLP9XOQ}{lLTVxHFvQpetRYEIfioGLQ}{10.233.68.200}{10.233.68.200:9300}{dmr}]}, term: 4, version: 221, reason: Publication{term=4, version=221}
[2023-02-14T02:16:57,552][INFO ][o.e.c.s.MasterService    ] [onap-sdnrdb-master-2] node-join[{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr} join existing leader], term: 4, version: 222, delta: added {{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr}}
[2023-02-14T02:16:58,946][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-2] added {{onap-sdnrdb-master-0}{L-xoSpdkR0quk0892avHTg}{8h5CCHkDQyqPVIohSKSseQ}{10.233.67.218}{10.233.67.218:9300}{dmr}}, term: 4, version: 222, reason: Publication{term=4, version=222}
[2023-02-14T02:17:56,046][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[connectionlog-v7][0]]]).
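
Note: after the term-4 re-election the cluster returns to GREEN once the delayed replica shards are reallocated; the [55.2s] reroute delay logged at 02:16:55 is consistent with the default index.unassigned.node_left.delayed_timeout of 1m minus the time already elapsed since the node left. A small sketch (same endpoint and no-auth assumptions as above) that polls cluster health until GREEN and reports the effective delayed-allocation timeout per index:

    # Sketch: poll cluster health until GREEN and show the delayed-allocation
    # timeout behind the "scheduling reroute for delayed shards" message.
    # Endpoint and missing auth are assumptions for illustration only.
    import json
    import time
    import urllib.request

    ES = "http://localhost:9200"  # hypothetical endpoint

    def get(path):
        with urllib.request.urlopen(ES + path) as resp:
            return json.load(resp)

    # Bounded wait for the cluster to report GREEN again after the node-left episode.
    for _ in range(30):
        health = get("/_cluster/health")
        print(health["status"], "unassigned:", health["unassigned_shards"])
        if health["status"] == "green":
            break
        time.sleep(10)

    # Effective index.unassigned.node_left.delayed_timeout (defaults apply when unset).
    settings = get("/_all/_settings?include_defaults=true"
                   "&filter_path=*.defaults.index.unassigned.node_left.delayed_timeout,"
                   "*.settings.index.unassigned.node_left.delayed_timeout")
    print(json.dumps(settings, indent=2))
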