Yup, known stuff, on the TODO list.

Otis
--
Solr & ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm
On Mon, Jun 24, 2013 at 12:50 PM, Walter Underwood <wun...@wunderwood.org> wrote:
> Interesting. It seems to spend more time in GC, but the major GCs aren't any
> faster. They are more consistent.
>
> I notice that SPM shows average collection time. That is not a particularly
> useful number; it should show the median and percentiles.
>
> For a one-sided distribution, never use the mean (average); always use the
> median and percentiles.
>
> This is one of my basic tests for monitoring products. If they show averages
> for response time (or other durations), they are doing it wrong and need to
> learn more about statistics.
>
> wunder
>
> On Jun 24, 2013, at 8:59 AM, Otis Gospodnetic wrote:
>
>> And here is our most recent experience with G1, although not with
>> Solr, but with HBase:
>>
>> http://blog.sematext.com/2013/06/24/g1-cms-java-garbage-collector/
>>
>> Otis
>> --
>> Solr & ElasticSearch Support -- http://sematext.com/
>> Performance Monitoring -- http://sematext.com/spm
>>
>> On Fri, Jun 21, 2013 at 11:33 AM, Walter Underwood
>> <wun...@wunderwood.org> wrote:
>>> On 6/20/2013 10:22 PM, William Bell wrote:
>>>> It would be good to see some CMS configs too... Can you send your java
>>>> params?
>>>
>>> Here is what we use in production. We run multiple collections with small
>>> documents. One is 3M docs, one is 9M, one is 2M, and the other three are
>>> small. We use Amazon m1.xlarge instances (4 CPUs, 15 GB RAM).
>>>
>>> These options were developed with our load test, which is based on a full
>>> day of queries. We use JMeter to send queries at a constant rate that takes
>>> the CPU to between 50% and 75% busy, and we measure the 95th and 99th
>>> percentiles of response time.
>>>
>>> We enable ExplicitGCInvokesConcurrent because some monitoring software was
>>> calling System.gc() to get accurate memory numbers. That was causing
>>> noticeable pauses in service and skewing our 99th percentile.
>>> Alternatively, you could disable explicit GCs entirely with
>>> -XX:+DisableExplicitGC.
>>>
>>> The new generation is sized large so that all the allocations needed to
>>> handle a single request fit in new space. We really do not want per-request
>>> data being allocated in tenured space, and new space needs to be big enough
>>> to handle multiple simultaneous requests.
>>>
>>> export CATALINA_OPTS="$CATALINA_OPTS -d64"
>>> export CATALINA_OPTS="$CATALINA_OPTS -server"
>>> export CATALINA_OPTS="$CATALINA_OPTS -Xms8g"
>>> export CATALINA_OPTS="$CATALINA_OPTS -Xmx8g"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:NewSize=2048m"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=256m"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseConcMarkSweepGC"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseParNewGC"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:+ExplicitGCInvokesConcurrent"
>>> export CATALINA_OPTS="$CATALINA_OPTS -verbose:gc"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCTimeStamps"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:-TraceClassUnloading"
>>> export CATALINA_OPTS="$CATALINA_OPTS -Xloggc:$CATALINA_HOME/logs/gc.log"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
>>> export CATALINA_OPTS="$CATALINA_OPTS -XX:HeapDumpPath=$CATALINA_HOME/logs/"
>>>
>>> We used to include these options as well, but they are enabled by default
>>> as of Java 1.7 Update 17:
>>>
>>> -XX:+DoEscapeAnalysis
>>> -XX:+CMSParallelRemarkEnabled
>>> -XX:+UseCompressedOops
>>>
>>> wunder
>
> --
> Walter Underwood
> wun...@wunderwood.org
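
To illustrate the point above about medians and percentiles versus averages, here is a
minimal sketch for pulling the median and the 95th/99th percentiles out of a file of
per-request response times. The file name, the one-duration-per-line format, and the
rounded-rank approximation are assumptions for the example, not something from the thread:

  # times.ms: one response time in milliseconds per line (assumed format).
  # Prints approximate (rounded-rank) p50, p95, and p99.
  sort -n times.ms | awk '
    { a[NR] = $1 }
    END {
      if (NR == 0) exit
      printf "p50=%s ms  p95=%s ms  p99=%s ms\n",
             a[int(NR * 0.50 + 0.5)], a[int(NR * 0.95 + 0.5)], a[int(NR * 0.99 + 0.5)]
    }'

On a one-sided distribution (say, nine requests at 10 ms and one at 1000 ms) the mean comes
out at 109 ms, while the median stays at 10 ms and the tail shows up only in p95/p99, which
is exactly why the mean is misleading for durations.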
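
For anyone who wants to compare the CMS settings above against G1 (as in the HBase post
linked earlier), a minimal starting point could look like the lines below. This is only a
sketch, not a load-tested configuration: the pause-time target is an assumption (200 ms is
the HotSpot default), and with G1 you would normally drop the explicit NewSize setting and
let the collector size the generations itself.

  export CATALINA_OPTS="$CATALINA_OPTS -Xms8g"
  export CATALINA_OPTS="$CATALINA_OPTS -Xmx8g"
  export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseG1GC"
  export CATALINA_OPTS="$CATALINA_OPTS -XX:MaxGCPauseMillis=200"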