Sorry Erick, forgot to answer your question: No, I didn't increase maxWarmingSearchers. It is set to <maxWarmingSearchers>2</maxWarmingSearchers>. I read somewhere that increasing it is risky.
Just to make sure: you didn't mean the "autowarmCount" in the <queryResultCache>, did you? That is set to 32.

Thanks,
Tim

Reference:

<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>

<!-- Query Result Cache
     Caches results of searches - ordered lists of document ids (DocList)
     based on a query, a sort, and the range of documents requested. -->
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="32"/>

<!-- Document Cache
     Caches Lucene Document objects (the stored fields for each document).
     Since Lucene internal document ids are transient, this cache will not
     be autowarmed. -->
<documentCache class="solr.LRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>

-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Saturday, 6 August 2016 2:31 AM
To: solr-user
Subject: Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

You don't really have to worry that much about memory consumed during indexing. The ramBufferSizeMB setting in solrconfig.xml pretty much limits the amount of RAM consumed: when adding a doc, if that limit is exceeded the buffer is flushed. So you can reduce that number, but its default is 100M, and if you're running that close to your limits I suspect you'd get, at best, a bit more runway before you hit the problem again.

NOTE: that number isn't an absolute limit. IIUC the algorithm is: index a doc to the in-memory structures, then check whether the limit is exceeded and flush if so. So if you were at 99% of your ramBufferSizeMB setting and then indexed a ginormous doc, your in-memory structures might be significantly bigger.

Searching is usually the bigger RAM consumer, so when I say "a bit more runway" what I'm thinking is that when you start _searching_ the data your memory requirements will continue to grow and you'll be back where you started.
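Erick's point about the indexing buffer maps to a single setting in solrconfig.xml. A minimal sketch, where 100 is just the default he mentions, not a recommendation:

```xml
<!-- Sketch only: lowering ramBufferSizeMB trades some indexing throughput
     for a smaller indexing RAM ceiling; 100 MB is the stock default. -->
<indexConfig>
  <ramBufferSizeMB>100</ramBufferSizeMB>
</indexConfig>
```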
And just as a sanity check: you didn't perchance increase the maxWarmingSearchers parameter in solrconfig.xml, did you? If so, that's really a red flag.

Best,
Erick

On Fri, Aug 5, 2016 at 12:41 AM, Tim Chen <tim.c...@sbs.com.au> wrote:
> Thanks guys. Very, very helpful.
>
> I will probably look at consolidating the 4 Solr servers into 2
> bigger/better servers - that gives more memory, and it cuts down the
> number of replicas the Leader needs to manage.
>
> Also, I may look into writing a script to monitor the tomcat log and, if
> there is an OOM, kill tomcat and then restart it. A bit dirty, but it may
> work for the short term.
>
> I don't know much about how documents are indexed, or how to save memory
> there. I will probably work with a developer on this as well.
>
> Many thanks guys.
>
> Cheers,
> Tim
>
> -----Original Message-----
> From: Shawn Heisey [mailto:apa...@elyograg.org]
> Sent: Friday, 5 August 2016 4:55 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr Cloud with 5 servers cluster failed due to Leader
> out of memory
>
> On 8/4/2016 8:14 PM, Tim Chen wrote:
>> Couple of thoughts:
>>
>> 1. If the Leader goes down, it should just go down - like dead down - so
>> the other servers can hold an election and choose a new leader. This at
>> least avoids bringing down the whole cluster. Am I right?
>
> Supplementing what Erick told you:
>
> When a typical Java program throws OutOfMemoryError, program behavior is
> completely unpredictable. There are programming techniques that can be
> used so that behavior IS predictable, but writing that code can be
> challenging.
>
> Solr 5.x and 6.x, when started on a UNIX/Linux system, use a Java option
> to execute a script when OutOfMemoryError happens. This script kills Solr
> completely. We are working on adding this capability when running on
> Windows.
>
>> 2. Apparently we should not push too many documents to Solr - how do
>> you guys handle this? Set a limit somewhere?
>
> There are exactly two ways to deal with OOME problems: increase the heap
> or reduce Solr's memory requirements. The number of documents you push to
> Solr is unlikely to have a large effect on the amount of memory that Solr
> requires. Here's some information on this topic:
>
> https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
>
> Thanks,
> Shawn

[Premiere League Starts Saturday 13 August 9.30pm on SBS]<http://theworldgame.sbs.com.au/>
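Tim's "monitor the tomcat log and restart on OOM" idea could be sketched roughly as below. This is only an illustration of the approach, not a tested recipe: the log path, match string, and restart commands are all assumptions to adjust for your install.

```shell
#!/bin/sh
# Watchdog sketch (assumptions: log location, service name, use of pkill).
# Run it periodically, e.g. from cron.
LOG="${1:-/var/log/tomcat/catalina.out}"

# Returns 0 (true) if the log file exists and contains an OutOfMemoryError line.
oom_detected() {
  [ -f "$1" ] && grep -q "java.lang.OutOfMemoryError" "$1"
}

if oom_detected "$LOG"; then
  echo "OOM found in $LOG - restarting tomcat"
  # As Shawn notes, a JVM that has thrown OOME is unpredictable,
  # so kill it hard rather than trusting a graceful shutdown.
  pkill -9 -f tomcat
  service tomcat start
fi
```

One caveat with the log-grepping approach: unless the log is rotated or truncated after a restart, the old OOM line will still match on the next run, so in practice you would also want to rotate the log as part of the restart.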
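For a Tomcat-hosted Solr like this one, Shawn's two options (bigger heap, or a clean kill on OOME) can both be expressed as JVM options. A sketch, assuming heap sizes and a script path that are placeholders, not recommendations:

```shell
# Illustrative only: heap sizes depend on your index and query load, and
# /usr/local/bin/restart-tomcat.sh is a hypothetical script you would write.
# -XX:OnOutOfMemoryError is the same HotSpot mechanism Solr 5.x/6.x's own
# UNIX start script uses to kill Solr the moment OOME is thrown.
export CATALINA_OPTS="-Xms4g -Xmx4g -XX:OnOutOfMemoryError=/usr/local/bin/restart-tomcat.sh"
```

Setting -Xms equal to -Xmx avoids heap-resize pauses; keeping the OnOutOfMemoryError value free of spaces sidesteps word-splitting problems when Tomcat's startup script expands CATALINA_OPTS.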