-----Original message-----
> From: Walter Underwood
> Sent: Tuesday 25th July 2017 22:39
> To: solr-user@lucene.apache.org
> Subject: Re: Optimize stalls at the same point
>
> I’ve never been fond of elaborate GC settings. I prefer to set a few things
> then let it run. I know someone who …
>
> > … to spare. Your max heap is over 100 times larger than ours,
> > your index just around 16 times. It should work with less.
> >
> > As a bonus, with a smaller heap, you can have much more index data in
> > mapped memory.
> >
> > Regards,
> > Markus
> >
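
Markus's "mapped memory" point: Lucene reads index files through the OS page
cache (MMapDirectory), so any RAM the JVM heap does not claim is available to
cache index data. A minimal sketch of that trade-off in solr.in.sh, reusing
Walter's 8GB heap figure as an illustrative value:

    # solr.in.sh -- keep the heap small; the remaining RAM goes to the
    # OS page cache, which serves Lucene's memory-mapped index files
    SOLR_HEAP="8g"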
> -----Original message-----
>> From: David Hastings
>> Sent: Tuesday 25th July 2017 22:15
>> To: solr-user@lucene.apache.org
>> Subject: Re: Optimize stalls at the same point
>>
>>
>> it turned out that I think it was a large GC operation, as it has since
>> resumed optimizing. Current Java options are as follows for the indexing
>> server (they are different for the search servers). If you have any
>> suggestions as to changes I am more than happy to hear them; honestly they
>> have just been …
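
The options themselves are cut off above. For context, a deliberately small
set of JVM options of the kind Walter advocates might look like this in
solr.in.sh; these are illustrative values, not the poster's actual settings:

    # solr.in.sh -- set a few things, then let the JVM run (illustrative)
    SOLR_JAVA_MEM="-Xms8g -Xmx8g"        # fixed heap: Xms equal to Xmx
    GC_TUNE="-XX:+UseG1GC \
             -XX:MaxGCPauseMillis=250 \
             -XX:+ParallelRefProcEnabled"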
Are you sure you need a 100GB heap? The stall could be a major GC.
We run with an 8GB heap. We also run with Xmx equal to Xms; growing memory to
the max was really time-consuming after startup.
What version of Java? What GC options?
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/
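
Walter's two questions can be answered on a running install with standard JDK
tools; <solr-pid> is a placeholder for the Solr process id:

    java -version               # which JVM and version Solr runs on
    jcmd <solr-pid> VM.flags    # the GC flags the running JVM actually uses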
I am trying to optimize a rather large index (417 GB) because it is sitting at
28% deletions. However, when optimizing, it stops at exactly 492.24 GB
every time. When I restart Solr it will fall back down to 417 GB and,
again, if I send an optimize command, it reaches the exact same 492.24 GB and
stops optimizing.
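
For reference, the optimize command in question can be issued over HTTP;
"mycore" is a placeholder core name, not one from the thread:

    curl 'http://localhost:8983/solr/mycore/update?optimize=true&maxSegments=1'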