On 8/24/2015 12:48 AM, Pavel Hladik wrote:
> we have a Solr 5.2.1 with 9 cores and one of them has 140M docs. Can you
> please recommend tuning of those GC parameters? Performance is not an
> issue, but sometimes during peaks we get OOM. We use 50G of heap memory, and
> the server has 64G of RAM.
>
> GC_TUNE="-XX:NewRatio=3 \
> -XX:SurvivorRatio=4 \
> -XX:TargetSurvivorRatio=90 \
> -XX:MaxTenuringThreshold=8 \
> -XX:+UseConcMarkSweepGC \
> -XX:+UseParNewGC \
> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> -XX:+CMSScavengeBeforeRemark \
> -XX:PretenureSizeThreshold=64m \
> -XX:+UseCMSInitiatingOccupancyOnly \
> -XX:CMSInitiatingOccupancyFraction=50 \
> -XX:CMSMaxAbortablePrecleanTime=6000 \
> -XX:+CMSParallelRemarkEnabled \
> -XX:+ParallelRefProcEnabled"

If you are seeing OOM, chances are that different garbage collection
settings will make zero difference.  Tuning your GC doesn't cause Solr
to use less memory, it just makes reclaiming garbage more efficient.  To
fix your OOM, you will either need to increase your heap size or take
steps to reduce how much memory Solr needs.  50GB is a very large heap.
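Whichever direction you go, the heap size itself is set through the
stock 5.x start scripts rather than GC_TUNE.  This is only a sketch to
show the syntax; the value is a placeholder, not a recommendation:

  # one-off, on the command line:
  bin/solr start -m 60g

  # or persistently, in bin/solr.in.sh (solr.in.cmd on Windows):
  SOLR_HEAP="60g"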

The table of contents on the following page links to sections about
features that use a lot of heap and about ways to reduce heap usage.
It is not comprehensive on these topics:

https://wiki.apache.org/solr/SolrPerformanceProblems

You mentioned that performance is not a concern, so the rest of this
email is probably TL;DR material:

I would think that having only 14GB of RAM (your 64GB minus the 50GB
used by Solr) to cache what is probably at least 150 million documents
would lead to performance problems.  Also, before an OOM actually
happens, Java will be trying *really* hard, using full GCs, to free up
enough memory to keep working.  This can cause *extreme* performance
issues.
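
If you want to see those full collections happening, Java 8's standard
GC logging flags will show them.  This is just a sketch, assuming the
GC_LOG_OPTS hook that the stock 5.x solr.in.sh provides:

  GC_LOG_OPTS="-verbose:gc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -XX:+PrintGCApplicationStoppedTime"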

Your GC tuning list is unchanged from the list included with Solr
5.2.x.  I have a wiki page for GC tuning on Solr.  My CMS parameter
list enables a few things that the Solr start script's list doesn't,
and the start script's list has a few things that mine doesn't:

http://wiki.apache.org/solr/ShawnHeisey#CMS_.28ConcurrentMarkSweep.29_Collector

You might also want to try out the ParGCCardsPerStrideChunk option
that I mention after that parameter list.
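
That one is a diagnostic VM option, so it has to be unlocked before
the JVM will accept it.  Roughly, the addition to GC_TUNE looks like
this (the 4096 value is only an example; check the wiki page for the
value I actually use):

  -XX:+UnlockDiagnosticVMOptions \
  -XX:ParGCCardsPerStrideChunk=4096 \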

If you're feeling really brave, you can try out my G1 parameters.  The
Oracle guys say that the latest Java 8 versions (which, according to a
later message in this thread, you are using) have excellent performance
with G1.

http://wiki.apache.org/solr/ShawnHeisey#G1_.28Garbage_First.29_Collector

Be sure you read the warning there about G1 from the Lucene guys.  I
personally have had no issues with G1.
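
For reference, a G1 GC_TUNE block has roughly this shape.  Treat it as
a sketch and take the exact flags and values from the wiki page above;
the numbers here are only illustrative:

  GC_TUNE="-XX:+UseG1GC \
  -XX:+ParallelRefProcEnabled \
  -XX:G1HeapRegionSize=8m \
  -XX:MaxGCPauseMillis=250 \
  -XX:+AggressiveOpts"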

Thanks,
Shawn
