On 4/10/2013 9:48 AM, Marc Des Garets wrote:
> The JVM behavior is now radically different and doesn't seem to make
> sense. I was using ConcMarkSweepGC. I am now trying the G1 collector.
> The perm gen went from 410Mb to 600Mb.
>
> The eden space usage is a lot bigger and the survivor space usage is
> 100% all the time.
>
> I don't really understand what is happening. GC behavior really doesn't
> seem right.
>
> My jvm settings:
> -d64 -server -Xms40g -Xmx40g -XX:+UseG1GC -XX:NewRatio=1
> -XX:SurvivorRatio=3 -XX:PermSize=728m -XX:MaxPermSize=728m

As Otis has already asked, why do you have a 40GB heap?  The only way I 
can imagine that you would actually NEED a heap that big is if your 
index size is measured in hundreds of gigabytes.  If you really do need 
a heap that big, you will probably need to go with a JVM like Zing.  I 
don't know how much Zing costs, but they claim to be able to make any 
heap size perform well under any load.  It is Linux-only.

I was running into extreme problems with GC pauses with my own setup, 
and that was only with an 8GB heap.  I was using the CMS collector and 
NewRatio=1.  Switching to G1 didn't help at all - it might have even 
made the problem worse.  I never did try the Zing JVM.

After a lot of experimentation (which I will admit was not done very 
methodically) I found JVM options that have reduced the GC pause problem 
greatly.  Below is what I am using now on Solr 4.2.1 with a total 
per-server index size of about 45GB.  This works properly on CentOS 6 
with Oracle Java 7u17; UseLargePages may require special kernel tuning 
on other operating systems (there's a rough sysctl sketch a bit further 
down):
-Xmx6144M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 
-XX:NewRatio=3 -XX:MaxTenuringThreshold=8 -XX:+CMSParallelRemarkEnabled 
-XX:+ParallelRefProcEnabled -XX:+UseLargePages -XX:+AggressiveOpts
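
For reference, these flags just go on the java command line that starts 
Solr.  With the example Jetty setup that ships with Solr 4.x, the start 
command would look roughly like this -- the path and port here are 
placeholders, not my actual layout:

cd /opt/solr/example
java -Xmx6144M -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=75 -XX:NewRatio=3 \
  -XX:MaxTenuringThreshold=8 -XX:+CMSParallelRemarkEnabled \
  -XX:+ParallelRefProcEnabled -XX:+UseLargePages -XX:+AggressiveOpts \
  -Djetty.port=8983 -jar start.jar
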
These options could probably use further tuning, but I haven't had time 
for the kind of testing that will be required.
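
As for the kernel tuning I mentioned for UseLargePages: on Linux that 
generally means reserving huge pages and letting the user that runs 
Solr lock that memory.  The following is only a rough sketch, assuming 
a 6GB heap, 2MB huge pages (about 3072 pages plus some headroom), and a 
user named 'solr' -- adjust the numbers and names for your own system:

# /etc/sysctl.conf, then run "sysctl -p"
vm.nr_hugepages = 3200
# optional: GID of a group that the Solr user belongs to
vm.hugetlb_shm_group = 509

# /etc/security/limits.conf
solr  soft  memlock  unlimited
solr  hard  memlock  unlimited

If the JVM can't get the large pages at startup, it should print a 
warning and fall back to regular pages, so it's fairly easy to tell 
whether this took effect.
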
If you decide to pay someone to make the problem go away instead:

http://www.azulsystems.com/products/zing/whatisit

Thanks,
Shawn
