Wow, really, it's that easy?  I could swear there's a wiki page somewhere that 
suggests otherwise, but I'll believe Yonik today over a wiki page last edited 
who knows when.  

But this should be well-publicized: it's a pretty easy solution to a problem 
that many people seem to be having, and it at least gives you "as up to date as 
your Solr can handle".  I would suggest that a maxWarmingSearchers=1 example be 
included in the example solrconfig.xml, at least commented out, if not enabled 
outright. 
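For reference, the setting being discussed is a single element in 
solrconfig.xml.  A sketch of what the commented-out example might look like 
(surrounding config omitted):

```xml
<!-- Cap the number of searchers that may be warming concurrently.
     With this set to 1, a commit that arrives while a searcher is
     still warming will fail to open a new searcher; the committed
     docs become visible on the next commit that does succeed. -->
<maxWarmingSearchers>1</maxWarmingSearchers>
```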

(This would be even better if, when a commit fails due to maxWarmingSearchers, 
Solr automatically committed the pending documents once warming completes, 
instead of relying on another commit being issued manually at some future 
point.  Is there any built-in hook for 'warming complete' or 'index fully 
ready' that could be used to jury-rig this?)
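For what it's worth, solrconfig.xml does expose searcher-lifecycle event 
listeners (the stock example uses them for warming queries).  Whether one could 
be bent into a "commit again after warming" hook is pure speculation on my 
part, but the hook point looks roughly like this; a custom listener class in 
place of QuerySenderListener would be the hypothetical piece:

```xml
<!-- newSearcher listeners run as part of preparing a new searcher.
     In theory a custom listener registered here could schedule the
     follow-up commit -- untested speculation, not a recipe. -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- warming queries would normally go here -->
  </arr>
</listener>
```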

Yonik, how will maxWarmingSearchers in this scenario affect replication?  If a 
slave is pulling down new indexes so quickly that the warming searchers would 
ordinarily pile up, but maxWarmingSearchers is set to 1... what happens?

________________________________________
From: ysee...@gmail.com [ysee...@gmail.com] On Behalf Of Yonik Seeley 
[yo...@lucidimagination.com]
Sent: Monday, December 13, 2010 9:07 PM
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap 
getting collected?

On Mon, Dec 13, 2010 at 8:47 PM, John Russell <jjruss...@gmail.com> wrote:
> Wow, you read my mind.  We are committing very frequently.  We are trying to
> get as close to realtime access to the stuff we put in as possible.  Our
> current commit time is... ahem.... every 4 seconds.
>
> Is that insane?

Not necessarily insane, but challenging ;-)
I'd start by setting maxWarmingSearchers to 1 in solrconfig.xml.  When
that is exceeded, a commit will fail (this just means a new searcher
won't be opened on that commit... the docs will be visible with the
next commit that does succeed.)

-Yonik
http://www.lucidimagination.com
