Solr doesn’t manage this at all; it’s the JVM’s garbage collection
that occasionally kicks in. In general, memory creeps up until
a GC threshold is reached (there are about a zillion
parameters you can set to tune this) and then GC kicks in.

Generally, the recommendation is to use the G1GC collector
and just leave the default settings as they are.
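For reference, a minimal sketch of what that might look like in solr.in.sh (SOLR_HEAP and GC_TUNE are the standard Solr environment variables; the heap size here is purely illustrative, and setting GC_TUNE replaces Solr's shipped GC flags entirely):

```shell
# solr.in.sh — sketch only; pick a heap size based on your own testing.
# Cap the heap explicitly rather than over-allocating.
SOLR_HEAP="4g"

# Switch to G1GC and otherwise leave the collector's defaults alone.
GC_TUNE="-XX:+UseG1GC"
```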

It’s usually a mistake, BTW, to over-allocate memory. You should shrink the
heap as far as you can and still maintain a reasonable safety margin. See:

https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

What’s a “reasonable safety margin”? Unfortunately you have to experiment.
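One way to experiment is to watch heap usage over time instead of eyeballing repeated "solr status" runs. A sketch, assuming a node on localhost:8983 (the admin/info/system endpoint is standard; the jq-free parsing below is just one way to pull the field out):

```shell
# Sketch: poll JVM heap usage from a running Solr node (assumes localhost:8983).
# The "memory" line from "solr status" comes from this same JVM info.
curl -s "http://localhost:8983/solr/admin/info/system" \
  | python3 -c 'import json,sys; m = json.load(sys.stdin)["jvm"]["memory"]; print(m["used"], "of", m["max"])'
```

Run it periodically under your real query and indexing load; if used heap keeps returning close to max after GC cycles, your margin is too thin.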

Best,
Erick

> On Oct 12, 2020, at 10:33 AM, Ryan W <rya...@gmail.com> wrote:
> 
> Hi all,
> 
> What is the meaning of the "memory" line in the output when I run the solr
> status command?  What controls whether that memory gets exhausted?  At
> times if I run "solr status" over and over, that memory number creeps up
> and up and up.  Presumably it is not a good thing if it moves all the way
> up to my 31GB capacity.  What controls whether that happens?  How do I
> prevent that?  Or does Solr manage this automatically?
> 
> 
> $ /opt/solr/bin/solr status
> 
> Found 1 Solr nodes:
> 
> Solr process 101530 running on port 8983
> {
>  "solr_home":"/opt/solr/server/solr",
>  "version":"7.7.2 d4c30fc2856154f2c1fefc589eb7cd070a415b94 - janhoy -
> 2019-05-28 23:37:48",
>  "startTime":"2020-10-12T12:04:57.379Z",
>  "uptime":"0 days, 1 hours, 46 minutes, 41 seconds",
>  "memory":"3.3 GB (%10.7) of 31 GB"}
