Solr 1.2 ignores the 'number of documents' autocommit attribute. It honors
only the "every 30 minutes" (time-based) attribute.

Lance 

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Yonik Seeley
Sent: Sunday, June 01, 2008 6:47 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr indexing configuration help

On Sun, Jun 1, 2008 at 4:43 AM, Gaku Mak <[EMAIL PROTECTED]> wrote:
> I have tried Yonik's suggestions with the following:
> 1) all autowarming is off
> 2) commented out the firstSearcher and newSearcher event listeners
> 3) increased the autocommit interval to 600 docs and 30 minutes
> (previously 50 docs and 5 minutes)
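>
> (Roughly, changes 1 and 2 map onto solrconfig.xml like the sketch below;
> the cache names are the stock ones and are only illustrative:)
>
>     <filterCache class="solr.LRUCache" size="512" initialSize="512"
>                  autowarmCount="0"/>
>     <!-- likewise autowarmCount="0" on queryResultCache -->
>     <!-- <listener event="firstSearcher" class="solr.QuerySenderListener"> ... </listener> -->
>     <!-- <listener event="newSearcher" class="solr.QuerySenderListener"> ... </listener> -->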

Glad it looks like your memory issues are solved, but I really wouldn't
use "docs" at all as an autocommit criterion... it will just slow down
your full index builds.

-Yonik

> In addition, I updated the Java options with the following:
> -d64 -server -Xms2048M -Xmx3072M -XX:-HeapDumpOnOutOfMemoryError 
> -XX:+UseSerialGC
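>
> (In other words: -d64 and -server select the 64-bit server VM, -Xms/-Xmx
> set the initial and maximum heap, -XX:+UseSerialGC forces the
> single-threaded collector, and the minus in
> -XX:-HeapDumpOnOutOfMemoryError leaves heap dumps on OOM disabled.)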
>
> Results:
> I'm currently at 100,000 documents with about a 9.0GB index on a
> quad machine with 4GB RAM.  The stress test is now adding 20 documents
> every 30 seconds.
>
> It seems like the serial GC works better than the other two 
> alternatives (-XX:+UseParallelGC or -XX:+UseConcMarkSweepGC) for some 
> reason.  I have not seen any OOM since the changes mentioned above 
> (yet).  If others have better experience with another GC and know how to
> configure it properly, please let me know, because using serial GC just
> doesn't sound right on a quad machine.
>
> Additional questions:
> Does anyone know how Solr/Lucene use the heap in terms of GC
> generations (young vs. tenured) in an indexing environment?  With that
> answer, we could better configure the young/tenured ratio of the heap.
> Any help is appreciated!  Thanks!
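> (For reference, I assume the relevant HotSpot knobs would be things like
> -XX:NewRatio=<n> or -XX:NewSize=/-XX:MaxNewSize= on the Sun JVM.)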
>
> Now, I'm looking into configuring the slave machines.  Well, that's a 
> separate question.
>
>
>
> Yonik Seeley wrote:
>>
>> Some things to try:
>> - turn off autowarming on the master
>> - turn off autocommit, unless you really need it, or change it to be 
>> less aggressive:  autocommitting every 50 docs is bad if you are 
>> rapidly adding documents.
>> - set maxWarmingSearchers to 1 to prevent the buildup of searchers
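>>
>> (For that last one, the knob is this element in solrconfig.xml; a sketch,
>> assuming the stock config layout:)
>>
>>     <maxWarmingSearchers>1</maxWarmingSearchers>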
>>
>> -Yonik
>>
>> On Fri, May 30, 2008 at 3:39 PM, Gaku Mak <[EMAIL PROTECTED]> wrote:
>>>
>>> I started running the test on 2 other machines with similar specs
>>> but more RAM (4GB). One of them now has about 60k docs and is still
>>> running fine. On the other machine, Solr died at about 43k docs. A
>>> short while before Solr died, I saw that there were 5 searchers open
>>> at the same time. Does anyone know why Solr would create 5 searchers,
>>> and whether that could cause Solr to die? Is there any way to prevent
>>> this? Also, is there a way to disable the searcher entirely, and
>>> would that be a way to optimize the Solr master?
>>>
>>> I copied the following from the Solr Statistics page in case it has
>>> interesting info:
>>>
>>> name:    [EMAIL PROTECTED] main
>>> class:  org.apache.solr.search.SolrIndexSearcher
>>> version:        1.0
>>> description:    index searcher
>>> stats:  caching : true
>>> numDocs : 42754
>>> maxDoc : 42754
>>> readerImpl : MultiSegmentReader
>>> readerDir : org.apache.lucene.store.FSDirectory@/var/lib/solr/peoplesolr_0002/solr/data/index
>>> indexVersion : 1211702500453
>>> openedAt : Fri May 30 10:04:15 PDT 2008
>>> registeredAt : Fri May 30 10:05:05 PDT 2008
>>>
>>> name:   [EMAIL PROTECTED] main
>>> class:  org.apache.solr.search.SolrIndexSearcher
>>> version:        1.0
>>> description:    index searcher
>>> stats:  caching : true
>>> numDocs : 42754
>>> maxDoc : 42754
>>> readerImpl : MultiSegmentReader
>>> readerDir : org.apache.lucene.store.FSDirectory@/var/lib/solr/peoplesolr_0002/solr/data/index
>>> indexVersion : 1211702500453
>>> openedAt : Fri May 30 10:03:24 PDT 2008
>>> registeredAt : Fri May 30 10:03:41 PDT 2008
>>>
>>> name:   [EMAIL PROTECTED] main
>>> class:  org.apache.solr.search.SolrIndexSearcher
>>> version:        1.0
>>> description:    index searcher
>>> stats:  caching : true
>>> numDocs : 42675
>>> maxDoc : 42675
>>> readerImpl : MultiSegmentReader
>>> readerDir : org.apache.lucene.store.FSDirectory@/var/lib/solr/peoplesolr_0002/solr/data/index
>>> indexVersion : 1211702500450
>>> openedAt : Fri May 30 10:00:53 PDT 2008
>>> registeredAt : Fri May 30 10:01:05 PDT 2008
>>>
>>> name:   [EMAIL PROTECTED] main
>>> class:  org.apache.solr.search.SolrIndexSearcher
>>> version:        1.0
>>> description:    index searcher
>>> stats:  caching : true
>>> numDocs : 42697
>>> maxDoc : 42697
>>> readerImpl : MultiSegmentReader
>>> readerDir : org.apache.lucene.store.FSDirectory@/var/lib/solr/peoplesolr_0002/solr/data/index
>>> indexVersion : 1211702500451
>>> openedAt : Fri May 30 10:02:20 PDT 2008
>>> registeredAt : Fri May 30 10:02:22 PDT 2008
>>>
>>> name:   [EMAIL PROTECTED] main
>>> class:  org.apache.solr.search.SolrIndexSearcher
>>> version:        1.0
>>> description:    index searcher
>>> stats:  caching : true
>>> numDocs : 42724
>>> maxDoc : 42724
>>> readerImpl : MultiSegmentReader
>>> readerDir : org.apache.lucene.store.FSDirectory@/var/lib/solr/peoplesolr_0002/solr/data/index
>>> indexVersion : 1211702500452
>>> openedAt : Fri May 30 10:02:55 PDT 2008
>>> registeredAt : Fri May 30 10:02:57 PDT 2008
>>>
>>> Thank you all so much for your help. I really appreciate it.
>>>
>>> -Gaku
>>>
>>> Yonik Seeley wrote:
>>>>
>>>> It's most likely a
>>>> 1) hardware issue: bad memory
>>>>  OR
>>>> 2) incompatible libraries (most likely libc version for the JVM).
>>>>
>>>> If you have another box around, try that.
>>>>
>>>> -Yonik
>>>>
>>>
>>>
>>>
>>
>>
>
>
>
