On Tue, Jan 25, 2011 at 4:19 PM, Em <mailformailingli...@yahoo.de> wrote:
> Hi Martin,
>
> are you sure that your GC is well tuned?

These are the heap-related JVM configurations for the servers running
with a 17GB heap size (one with the parallel collector, one with CMS):

-XX:+HeapDumpOnOutOfMemoryError -server -Xmx17G -XX:MaxPermSize=256m
-XX:NewSize=2G -XX:MaxNewSize=2G -XX:SurvivorRatio=6
-XX:+UseConcMarkSweepGC

-XX:+HeapDumpOnOutOfMemoryError -server -Xmx17G -XX:MaxPermSize=256m
-XX:NewSize=2G -XX:MaxNewSize=2G -XX:SurvivorRatio=6
-XX:+UseParallelOldGC -XX:+UseParallelGC

Another configuration runs with an 8GB max heap, and that search server
also shows lower peaks in response times. To me it seems that simply too
much memory gets allocated/collected/compacted.

I'm currently checking how far we can reduce the cache sizes (and the
max heap) without any degradation of response times (or increase in disk
I/O). So far, reducing the documentCache size does lower the cache hit
ratio, but it has no negative impact on response times (nor does I/O
increase).

Therefore I'd follow the path of reducing the cache sizes as far as we
can, as long as there are no negative impacts, and then check the
longest requests again to see whether they are still caused by full GC
cycles. Even then they should be much shorter, due to the reduced amount
of memory being collected/compacted.

So now I also think that terracotta bigmemory is not the right solution
:-)

Cheers,
Martin

> A request that needs more than a minute isn't the standard, even when I
> consider all the other postings about response-performance...
>
> Regards
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Use-terracotta-bigmemory-for-solr-caches-tp2328257p2330652.html
> Sent from the Solr - User mailing list archive at Nabble.com.

--
Martin Grotzke
http://www.javakaffee.de/blog/
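P.S.: One way to confirm that the longest requests really line up with
full GC cycles is HotSpot's GC logging. A minimal sketch of the flags
(Java 6 era HotSpot; the log file path is just an example, not from our
setup):

```
# Print each GC event with details and wall-clock timestamps,
# so pause durations can be matched against slow-request timestamps.
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/var/log/solr/gc.log
```

With date stamps in the log, a full GC entry (e.g. a multi-second
"[Full GC ...]" line) can be correlated directly with the timestamps of
the slowest queries in the Solr request log.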