Yeah, 512M is the default for Java, but Solr _really_ likes memory.

These two lines are the "smoking guns":
Max heap after conc GC: 488.7M (99.6%)
Max heap after full GC: 490M (99.9%)

So what's happening (I think) is that you're spending a lot of cycles
recovering very little memory and then (probably) going right back
into another GC cycle. Increasing memory will help a lot here.
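If it helps, the heap can be raised either when starting Solr or persistently in solr.in.sh. The 2g value below is just illustrative; pick a size based on your index and cache usage:

```shell
# Option 1: pass the heap size on the command line
# (-m sets both -Xms and -Xmx)
bin/solr start -m 2g

# Option 2: set it persistently in solr.in.sh
# (location varies by install, e.g. /etc/default/solr.in.sh on Linux)
SOLR_HEAP="2g"
```

After changing either one, restart Solr and watch the GC behavior again to confirm the "max heap after GC" numbers drop well below 99%.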

And it doesn't really matter that "some cores are largely inactive". Once
an object is allocated on the heap (say a filterCache entry, underlying
caches, etc.) it stays there until there are no references to it, which
usually means until the core closes.

Best,
Erick

On Tue, Feb 14, 2017 at 8:48 AM, Leon STRINGER
<leon.strin...@ntlworld.com> wrote:
>>
>>     On 14 February 2017 at 15:49 Walter Underwood <wun...@wunderwood.org>
>> wrote:
>>
>>
>>     Yes, 512 MB is far too small. I’m surprised it even starts. We run with 8
>> Gb.
>>
>
> Thanks, in fairness 512 MB was the default and we're new to this. We'll look 
> at
> what we're allocating to Solr to tune this.
>
>>
>>     wunder
>>     Walter Underwood
>>     wun...@wunderwood.org
>>     http://observer.wunderwood.org/ (my blog)
>>
