On 5/18/07, Tom Hill <[EMAIL PROTECTED]> wrote:
Hi -
What happens if updates occur during the optimize?
It blocks.
There's been some work on the Lucene side to buffer up to maxBufferedDocs
while merges are going on in the background. If optimization takes an
hour on a really large index, updates will block for that long.
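For reference, the optimize being discussed is typically triggered by posting an `<optimize/>` message to Solr's update handler. A minimal sketch, assuming a Solr instance at localhost:8983 (adjust host, port, and path for your deployment):

```shell
# Sketch: trigger an index optimize by posting an <optimize/> update message.
# Assumes a Solr instance listening on localhost:8983 -- adjust as needed.
curl "http://localhost:8983/solr/update" \
  -H "Content-Type: text/xml" \
  --data-binary "<optimize/>"
```

While this request is in flight, update requests to the same index will block until the optimize completes.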
On 5/18/07, Yonik Seeley <[EMAIL PROTECTED]> wrote:
Once in a blue moon, the addition of a single document could possibly
cause cascading merges, essentially the same as an optimize. One way
to avoid this is to set a large mergeFactor... the downside being that
you get more segments and have to search across more of them at query time.
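The mergeFactor and buffering knobs mentioned above live in solrconfig.xml. A sketch of the relevant section, with illustrative values (tune them for your own index):

```xml
<!-- Sketch of the relevant solrconfig.xml settings; values are illustrative. -->
<indexDefaults>
  <!-- Higher mergeFactor: fewer merge cascades at index time,
       but more segments to search at query time. -->
  <mergeFactor>10</mergeFactor>
  <!-- Documents buffered in RAM before a new segment is flushed. -->
  <maxBufferedDocs>1000</maxBufferedDocs>
</indexDefaults>
```

This is the classic indexing-vs-query-speed trade-off: a large mergeFactor defers merge cost, while a periodic optimize pays it all at once.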
On May 18, 2007, at 2:10 PM, Yonik Seeley wrote:
What's your max heap set to? Might just want to verify that not too
much time is spent in GC, which can happen when you are right at the
brink.
Ah... I thought it was set to 1GB, but in my upgrade to java 1.6 I
guess I'm now just giving it the JVM default.
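To rule out the GC-thrashing scenario Yonik describes, you can pin the heap explicitly rather than relying on the JVM default, and turn on GC logging. The launch command below is illustrative (substitute your actual Solr start command):

```shell
# Sketch: set the max/min heap explicitly and log GC activity so you can
# see how much time is spent collecting. Start command is illustrative.
java -Xmx1024m -Xms1024m -verbose:gc -XX:+PrintGCDetails -jar start.jar
```

If the GC log shows frequent full collections while the heap sits near its ceiling during the optimize, the process is right at the brink Yonik mentions.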
On 5/18/07, Brian Whitman <[EMAIL PROTECTED]> wrote:
I have a largish solr store (2.4m documents with lots of stored text,
27GB data dir) and I ran optimize on it last night. The QTime was
3605096 ms (about an hour)! (The commit took about a minute.) During the
optimize the solr java process had 50% CPU and was using all of its max
heap size (1GB). On a serve