Thanks.
It turns out the problem was throughput-related: I wasn't indexing enough
docs per second, so an internal queue in a vendor library was growing
without bound.
Switching to StreamingUpdateSolrServer fixed that.
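For anyone hitting the same issue: StreamingUpdateSolrServer (renamed ConcurrentUpdateSolrServer in later SolrJ releases) buffers documents in a bounded queue that background threads drain to Solr, so a slow downstream no longer lets memory grow without limit. A minimal sketch of that bounded-queue backpressure pattern in plain java.util.concurrent (the class and names below are illustrative, not the SolrJ API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedIndexQueue {
    private static final String POISON = "POISON";

    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks when the queue is full, so a slow
        // consumer applies backpressure instead of letting the producer
        // grow the queue without bound (the OOM seen here).
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String doc = queue.take();  // simulated indexing work
                    if (doc.equals(POISON)) break;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 0; i < 10_000; i++) {
            queue.put("doc-" + i);  // blocks whenever 100 docs are pending
        }
        queue.put(POISON);          // signal the consumer to stop
        consumer.join();
        System.out.println("queue size after drain: " + queue.size());
    }
}
```

Heap usage stays bounded by the queue capacity (100 here) no matter how fast documents are produced.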

-Nick

-----Original Message-----
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com] 
Sent: 19 January 2010 12:04
To: solr-user@lucene.apache.org
Subject: Re: Interesting OutOfMemoryError on a 170M index

On Thu, Jan 14, 2010 at 4:04 AM, Minutello, Nick <
nick.minute...@credit-suisse.com> wrote:

> Agreed, commit every second.
>
> Assuming I understand what you're saying correctly:
> There shouldn't be any index readers - as at this point, just writing 
> to the index.
> Did I understand correctly what you meant?
>
>
Solr opens a new IndexSearcher after every commit, whether or not you are
querying it. So if you commit every second, you will have a number of
IndexSearchers trying to warm themselves concurrently, which can cause an
OutOfMemoryError. Merely indexing documents with a reasonable heap size
will not make the JVM run out of memory.
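If near-real-time visibility is not required, one common alternative to explicit per-second commits is letting Solr commit on its own schedule via autoCommit in solrconfig.xml. A sketch of that configuration (values are illustrative):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Commit automatically after 10,000 pending docs or 60 s,
       whichever comes first, instead of a client commit per second. -->
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```

Fewer commits mean fewer new IndexSearchers being opened and warmed, which directly reduces the memory pressure described above.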

--
Regards,
Shalin Shekhar Mangar.

