First, that resource leak is worrying. Is there any way you could take a stack 
trace and/or memory dump? I suppose it’d be easy enough to simulate. It’s 
particularly worrying because SolrJ is how Solr<->Solr communication happens, so 
if there really is more than a transitory leak, that’d affect Solr as well.

Second, I don’t think there’s a method that does exactly what you want, 
anything like “isOneNodeHealthyForEachSlice()”, but there is 
DocCollection.getActiveSlices(). That returns a Collection<Slice>, so I guess 
you could ask if 
(docCollection.getActiveSlices().size() == known_number_of_shards)…
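A rough sketch of that check, assuming a Solr 7.x CloudSolrClient (the helper name and the expectedShards parameter are mine, not part of SolrJ):

```java
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.DocCollection;

public class ShardHealthCheck {

    /**
     * Hypothetical helper: returns true if the collection currently has
     * the expected number of active slices. Call it before querying to
     * fail fast instead of piling up SolrServerExceptions.
     */
    public static boolean allShardsActive(CloudSolrClient client,
                                          String collection,
                                          int expectedShards) {
        // Cluster state is cached from ZooKeeper by the client
        ClusterState clusterState =
            client.getZkStateReader().getClusterState();
        DocCollection docCollection =
            clusterState.getCollection(collection);
        // getActiveSlices() returns a Collection<Slice>, hence size()
        return docCollection.getActiveSlices().size() == expectedShards;
    }
}
```

Note that this only tells you the state as last seen from ZooKeeper; a node can still drop between the check and the query, so you’d keep the exception handling regardless.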

Best
Erick

> On Mar 1, 2019, at 2:24 PM, Webster Homer <webster.ho...@milliporesigma.com> 
> wrote:
> 
> I am using the CloudSolrClient Solrj api for querying solr cloud collections. 
> For the most part it works well. However we recently experienced a series of 
> outages where our production cloud became unavailable. All the nodes were 
> down. That's a separate topic... The client application tried to launch 
> searches but always experienced a SolrServerException that there were no live 
> nodes available. After a few hundred such exceptions, the application ran out 
> of memory and failed when trying to allocate a thread... I'm not sure where 
> the resources are being leaked in exception handling. Is there a way to ask 
> the CloudSolrClient if there are enough replicas to execute the search?
> 
> I'm using Solr 7.2