23:40:31.56 
23:40:31.57 Welcome to the Bitnami elasticsearch container
23:40:31.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
23:40:31.68 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
23:40:31.76 
23:40:31.77 INFO  ==> ** Starting Elasticsearch setup **
23:40:32.27 INFO  ==> Configuring/Initializing Elasticsearch...
23:40:32.87 INFO  ==> Setting default configuration
23:40:33.07 INFO  ==> Configuring Elasticsearch cluster settings...
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
23:40:51.86 INFO  ==> ** Elasticsearch setup finished! **
23:40:52.17 INFO  ==> ** Starting Elasticsearch **
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-06-01T23:41:36,262][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] version[7.9.3], pid[1], build[oss/tar/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Linux/4.19.0-13-cloud-amd64/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.9/11.0.9+11-LTS]
[2021-06-01T23:41:36,264][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] JVM home [/opt/bitnami/java]
[2021-06-01T23:41:36,264][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms128m, -Xmx128m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-8973552552916910364, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=67108864, -Des.path.home=/opt/bitnami/elasticsearch, -Des.path.conf=/opt/bitnami/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-06-01T23:41:54,967][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [aggs-matrix-stats]
[2021-06-01T23:41:54,967][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [analysis-common]
[2021-06-01T23:41:54,968][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [geo]
[2021-06-01T23:41:54,968][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [ingest-common]
[2021-06-01T23:41:54,968][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [ingest-geoip]
[2021-06-01T23:41:54,968][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [ingest-user-agent]
[2021-06-01T23:41:54,969][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [kibana]
[2021-06-01T23:41:54,969][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [lang-expression]
[2021-06-01T23:41:54,969][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [lang-mustache]
[2021-06-01T23:41:54,970][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [lang-painless]
[2021-06-01T23:41:54,970][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [mapper-extras]
[2021-06-01T23:41:54,970][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [parent-join]
[2021-06-01T23:41:54,970][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [percolator]
[2021-06-01T23:41:54,971][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [rank-eval]
[2021-06-01T23:41:54,971][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [reindex]
[2021-06-01T23:41:54,971][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [repository-url]
[2021-06-01T23:41:54,971][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [tasks]
[2021-06-01T23:41:54,972][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded module [transport-netty4]
[2021-06-01T23:41:55,056][INFO ][o.e.p.PluginsService ] [onap-sdnrdb-master-0] loaded plugin [repository-s3]
[2021-06-01T23:41:56,058][INFO ][o.e.e.NodeEnvironment ] [onap-sdnrdb-master-0] using [1] data paths, mounts [[/bitnami/elasticsearch/data (10.253.0.178:/dockerdata-nfs/onap/elastic-master-2)]], net usable_space [93.1gb], net total_space [99.9gb], types [nfs4]
[2021-06-01T23:41:56,058][INFO ][o.e.e.NodeEnvironment ] [onap-sdnrdb-master-0] heap size [123.7mb], compressed ordinary object pointers [true]
[2021-06-01T23:41:56,765][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] node name [onap-sdnrdb-master-0], node ID [gWyM2ErmT6G35trF4-Bl3Q], cluster name [sdnrdb-cluster]
[2021-06-01T23:42:46,062][INFO ][o.e.t.NettyAllocator ] [onap-sdnrdb-master-0] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=123.7mb}]
[2021-06-01T23:42:47,065][INFO ][o.e.d.DiscoveryModule ] [onap-sdnrdb-master-0] using discovery type [zen] and seed hosts providers [settings]
[2021-06-01T23:42:51,663][WARN ][o.e.g.DanglingIndicesState] [onap-sdnrdb-master-0] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-06-01T23:42:54,060][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] initialized
[2021-06-01T23:42:54,061][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] starting ...
[2021-06-01T23:42:55,862][INFO ][o.e.t.TransportService ] [onap-sdnrdb-master-0] publish_address {10.233.73.21:9300}, bound_addresses {0.0.0.0:9300}
[2021-06-01T23:42:57,363][WARN ][o.e.t.TcpTransport ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.73.21:9300, remoteAddress=/10.233.67.101:57378}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-06-01T23:42:58,157][WARN ][o.e.t.TcpTransport ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.73.21:9300, remoteAddress=/10.233.67.101:57394}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-06-01T23:42:59,158][WARN ][o.e.t.TcpTransport ] [onap-sdnrdb-master-0] exception caught on transport layer [Netty4TcpChannel{localAddress=/10.233.73.21:9300, remoteAddress=/10.233.67.101:57408}], closing connection
java.lang.IllegalStateException: transport not ready yet to handle incoming requests
    at org.elasticsearch.transport.TransportService.onRequestReceived(TransportService.java:943) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:136) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:78) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:692) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:76) [transport-netty4-client-7.9.3.jar:7.9.3]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-06-01T23:42:59,761][INFO ][o.e.b.BootstrapChecks ] [onap-sdnrdb-master-0] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-06-01T23:43:02,261][INFO ][o.e.c.c.Coordinator ] [onap-sdnrdb-master-0] setting initial configuration to VotingConfiguration{{bootstrap-placeholder}-onap-sdnrdb-master-2,gWyM2ErmT6G35trF4-Bl3Q,Y-kS5i3tQXWl_CKV2fXYYQ}
[2021-06-01T23:43:05,456][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}, minimumTerm=2, optionalJoin=Optional[Join{term=3, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.73.21:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-06-01T23:43:05,759][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-1}{Y-kS5i3tQXWl_CKV2fXYYQ}{31p_Sp_DT1eREC4NcF7A5g}{10.233.67.14}{10.233.67.14:9300}{dmr} elect leader, {onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 1, delta: master node changed {previous [], current [{onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{Y-kS5i3tQXWl_CKV2fXYYQ}{31p_Sp_DT1eREC4NcF7A5g}{10.233.67.14}{10.233.67.14:9300}{dmr}}
[2021-06-01T23:43:06,061][WARN ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] failing [elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-1}{Y-kS5i3tQXWl_CKV2fXYYQ}{31p_Sp_DT1eREC4NcF7A5g}{10.233.67.14}{10.233.67.14:9300}{dmr} elect leader, {onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]: failed to commit cluster state version [1]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 3 while handling publication
    at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-06-01T23:43:06,063][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}, minimumTerm=0, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.73.21:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 3 while handling publication
    at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-06-01T23:43:06,265][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}, minimumTerm=3, optionalJoin=Optional[Join{term=4, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}, targetNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-0][10.233.73.21:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: received a newer join from {onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}
    at org.elasticsearch.cluster.coordination.JoinHelper$CandidateJoinAccumulator.handleJoinRequest(JoinHelper.java:459) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:533) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:375) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:362) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:136) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:800) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-06-01T23:43:06,628][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] elected-as-master ([2] nodes joined)[{onap-sdnrdb-master-1}{Y-kS5i3tQXWl_CKV2fXYYQ}{31p_Sp_DT1eREC4NcF7A5g}{10.233.67.14}{10.233.67.14:9300}{dmr} elect leader, {onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 6, version: 1, delta: master node changed {previous [], current [{onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{Y-kS5i3tQXWl_CKV2fXYYQ}{31p_Sp_DT1eREC4NcF7A5g}{10.233.67.14}{10.233.67.14:9300}{dmr}}
[2021-06-01T23:43:07,357][INFO ][o.e.c.c.JoinHelper ] [onap-sdnrdb-master-0] failed to join {onap-sdnrdb-master-1}{Y-kS5i3tQXWl_CKV2fXYYQ}{31p_Sp_DT1eREC4NcF7A5g}{10.233.67.14}{10.233.67.14:9300}{dmr} with JoinRequest{sourceNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}, minimumTerm=4, optionalJoin=Optional[Join{term=5, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}, targetNode={onap-sdnrdb-master-1}{Y-kS5i3tQXWl_CKV2fXYYQ}{31p_Sp_DT1eREC4NcF7A5g}{10.233.67.14}{10.233.67.14:9300}{dmr}}]}
org.elasticsearch.transport.RemoteTransportException: [onap-sdnrdb-master-1][10.233.67.14:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 5 while handling publication
    at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.9.3.jar:7.9.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.9.3.jar:7.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-06-01T23:43:07,831][INFO ][o.e.c.c.CoordinationState] [onap-sdnrdb-master-0] cluster UUID set to [B1vRli6bReCgrqj2ZWu1rA]
[2021-06-01T23:43:08,467][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] master node changed {previous [], current [{onap-sdnrdb-master-0}{gWyM2ErmT6G35trF4-Bl3Q}{jp3YMD-xQxqqrOWYZF3q-g}{10.233.73.21}{10.233.73.21:9300}{dmr}]}, added {{onap-sdnrdb-master-1}{Y-kS5i3tQXWl_CKV2fXYYQ}{31p_Sp_DT1eREC4NcF7A5g}{10.233.67.14}{10.233.67.14:9300}{dmr}}, term: 6, version: 1, reason: Publication{term=6, version=1}
[2021-06-01T23:43:08,860][INFO ][o.e.h.AbstractHttpServerTransport] [onap-sdnrdb-master-0] publish_address {10.233.73.21:9200}, bound_addresses {0.0.0.0:9200}
[2021-06-01T23:43:08,861][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-coordinating-only-56dfdc4d57-crqw7}{gq_zPGluTg2MqZ6Xqbc44A}{sBE_9mrSSYa3S7SgXsDbxg}{10.233.67.101}{10.233.67.101:9300}{r} join existing leader], term: 6, version: 2, delta: added {{onap-sdnrdb-coordinating-only-56dfdc4d57-crqw7}{gq_zPGluTg2MqZ6Xqbc44A}{sBE_9mrSSYa3S7SgXsDbxg}{10.233.67.101}{10.233.67.101:9300}{r}}
[2021-06-01T23:43:08,956][INFO ][o.e.n.Node ] [onap-sdnrdb-master-0] started
[2021-06-01T23:43:09,185][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-coordinating-only-56dfdc4d57-crqw7}{gq_zPGluTg2MqZ6Xqbc44A}{sBE_9mrSSYa3S7SgXsDbxg}{10.233.67.101}{10.233.67.101:9300}{r}}, term: 6, version: 2, reason: Publication{term=6, version=2}
[2021-06-01T23:43:09,562][INFO ][o.e.g.GatewayService ] [onap-sdnrdb-master-0] recovered [0] indices into cluster_state
[2021-06-01T23:44:34,579][INFO ][o.e.c.s.MasterService ] [onap-sdnrdb-master-0] node-join[{onap-sdnrdb-master-2}{b67cJXDBSKe3pjUYHTwFPg}{4Nq5UKEbSDu_ksbvP91HDA}{10.233.69.139}{10.233.69.139:9300}{dmr} join existing leader], term: 6, version: 4, delta: added {{onap-sdnrdb-master-2}{b67cJXDBSKe3pjUYHTwFPg}{4Nq5UKEbSDu_ksbvP91HDA}{10.233.69.139}{10.233.69.139:9300}{dmr}}
[2021-06-01T23:44:36,076][INFO ][o.e.c.s.ClusterApplierService] [onap-sdnrdb-master-0] added {{onap-sdnrdb-master-2}{b67cJXDBSKe3pjUYHTwFPg}{4Nq5UKEbSDu_ksbvP91HDA}{10.233.69.139}{10.233.69.139:9300}{dmr}}, term: 6, version: 4, reason: Publication{term=6, version=4}
[2021-06-01T23:46:00,663][INFO ][o.e.c.s.ClusterSettings ] [onap-sdnrdb-master-0] updating [action.auto_create_index] from [true] to [false]
[2021-06-01T23:46:03,668][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [faultcurrent-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:46:18,163][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [networkelement-connection-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:46:24,367][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [guicutthrough-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:46:31,158][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [historicalperformance15min-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:46:36,865][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [maintenancemode-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:46:43,859][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [historicalperformance24h-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:46:47,265][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [mediator-server-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:46:54,058][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [connectionlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:46:58,259][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [eventlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:47:02,062][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [inventoryequipment-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:47:07,162][INFO ][o.e.c.m.MetadataCreateIndexService] [onap-sdnrdb-master-0] [faultlog-v5] creating index, cause [api], templates [], shards [5]/[1]
[2021-06-01T23:47:12,082][INFO ][o.e.c.r.a.AllocationService] [onap-sdnrdb-master-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[faultlog-v5][4], [faultlog-v5][0], [faultlog-v5][1], [faultlog-v5][2]]]).