Your autowarm counts are rather high, but as Toke says this doesn't
seem outrageous.
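For reference, the autowarm counts live in the cache definitions in
solrconfig.xml; a more conservative setting would look something like
this (the numbers below are purely illustrative, not a recommendation
for your index):

  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               autowarmCount="32"/>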

I have seen situations where Solr is running close to the limits of
its heap and GC only reclaims a tiny bit of memory each time. When you
say "full GC with no memory reclaimed", is that really no memory _at
all_? Or "almost no memory"? This situation can be alleviated by
allocating more memory to the JVM.
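Assuming you start Solr via the bin/solr script, the heap can be
raised on the command line or in solr.in.sh; for example (8g is just a
placeholder, not a sizing recommendation):

  bin/solr start -m 8g

or, equivalently, in solr.in.sh:

  SOLR_HEAP="8g"

Keep in mind that a bigger heap also tends to mean longer full-GC
pauses, so only go as high as you actually need.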

Your JVM pressure would certainly be reduced by enabling docValues on
any field you sort, facet or group on. That would require a full
reindex of course. Note that this makes your index on disk bigger, but
reduces JVM pressure by roughly the same amount, so it's a win in this
situation.
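As a rough sketch, enabling docValues in schema.xml looks something
like the following ("category" is just a made-up field name; apply it
to whatever fields you actually sort, facet or group on):

  <field name="category" type="string" indexed="true" stored="true"
         docValues="true"/>

Note that docValues are supported on string, numeric and date field
types, not on analyzed text fields.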

Have you attached a memory profiler to the running Solr instance? I'd
be curious where the memory is being allocated.
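Even without a full profiler, a quick class histogram from the stock
JDK tools can be revealing; something along these lines (replace
<solr-pid> with the actual Solr process id):

  jmap -histo:live <solr-pid> | head -n 30

That at least shows which classes dominate the heap. Attaching
VisualVM or Java Mission Control to the running JVM will give a more
complete picture.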

Best,
Erick

On Fri, Dec 1, 2017 at 8:31 AM, Toke Eskildsen <t...@kb.dk> wrote:
> Dominique Bejean <dominique.bej...@eolya.fr> wrote:
>> We are encountering issue with GC.
>
>> Randomly nearly once a day there are consecutive full GC with no memory
>> reclaimed.
>
> [... 1.2M docs, Xmx 6GB ...]
>
>> Gceasy suggest to increase heap size, but I do not agree
>
> It does seem strange, with your apparently modest index & workload. Nothing
> you say sounds problematic to me and you have covered the usual culprits:
> overlapping searchers, faceting and filterCache.
>
> Is it possible for you to share the solr.log around the two times that memory 
> usage peaked? 2017-11-30 17:00-19:00 and 2017-12-01 08:00-12:00.
>
> If you cannot share, please check if you have excessive traffic around that 
> time or if there is a lot of UnInverting going on (triggered by faceting on 
> non-DocValues String fields). I know your post implies that you have already
> done so, so this is more of a sanity check.
>
>
> - Toke Eskildsen
