On 12/23/2014 4:34 AM, Modassar Ather wrote:
> Hi,
>
> I have a setup of a 4-shard Solr cluster with embedded ZooKeeper on one of
> them. The zkClient timeout is set to 30 seconds, -Xms is 20g and -Xmx is
> 24g.
> When executing a huge query with many wildcards in it, the server
> crashes and becomes non-responsive. Even the dashboard does not respond
> and shows a connection lost error. This requires me to restart the servers.

Here's the important part of your message:

*Caused by: java.lang.OutOfMemoryError: Java heap space*


Your heap is not big enough for what Solr has been asked to do.  You
need to either increase your heap size or change your configuration so
that it uses less memory.

http://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
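For example, if you start Solr 4.x with the bundled Jetty launcher, the heap
is set on the java command line.  This is only a sketch -- the right values
depend entirely on your hardware, index size, and query load:

```shell
# Illustrative only: raise the maximum heap (-Xmx) when launching the
# example Jetty that ships with Solr 4.x.  The numbers here are made up;
# size the heap for your own machine, and leave room for the OS disk cache.
java -Xms4g -Xmx28g -jar start.jar
```

The alternative, as the wiki page explains, is to reduce how much memory
Solr needs in the first place (smaller caches, docValues, etc.) rather than
just giving it a bigger heap.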

Most programs exhibit essentially undefined behavior when an OOME occurs.
Lucene's IndexWriter has been hardened so that it tries extremely hard
to avoid index corruption when an OOME strikes, and I believe that works
well enough that we can call it nearly bulletproof ... but the rest of
Lucene and Solr make no such guarantees.

It's very difficult to have predictable program behavior when an OOME
happens, because you simply cannot know the precise point during program
execution where it will occur, or which operation will fail because Java
did not have memory available to create an object.  Going unresponsive is
not surprising.
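To illustrate the point, here is a standalone sketch (not Solr code): the
error is thrown at whatever allocation happens to fail, in whatever thread
happens to be running.  Here the failing allocation is contrived so the
handler can actually run; in a truly exhausted heap, even the catch block
may fail:

```java
// Standalone illustration: an OutOfMemoryError surfaces at the allocation
// site, wherever that happens to be, which is why behavior after an OOME
// is so hard to reason about.
public class OomDemo {
    public static void main(String[] args) {
        try {
            // Requesting an array the VM cannot provide triggers an OOME
            // immediately; in a real server the failing allocation could
            // be anywhere, in any thread.
            long[] huge = new long[Integer.MAX_VALUE];
            System.out.println("allocated " + huge.length);
        } catch (OutOfMemoryError e) {
            // Catching OOME is legal but unreliable in general: if the
            // heap is genuinely full, the handler itself may be unable
            // to allocate.  This contrived oversized request fails
            // cleanly, so the handler happens to work here.
            System.out.println("caught OOME");
        }
    }
}
```

On HotSpot this particular request fails regardless of heap size, but that
tidy recovery is the exception, not something server code can count on.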

If you can solve your heap problem, note that you may run into other
performance issues discussed on the wiki page that I linked.

Thanks,
Shawn
