Hi,
This is a sample of the heap dump (class histogram, sizes in bytes) I took when we encountered the problem previously:

Instance Count  Total Size  Class
--------------  ----------  -----
       3410607   699552656  [C
        319813   332605520  [Lorg.apache.lucene.util.fst.FST$Arc;
        898433   170462152  [Ljava.lang.Object;
        856551   149091216  [Ljava.util.HashMap$Entry;
       2802560   100892160  java.util.HashMap$Entry
       3295405    52726480  java.lang.String
        750750    42042000  java.util.HashMap
        319896    39027312  org.apache.lucene.util.fst.FST
        748801    35942448  org.apache.lucene.index.FieldInfo
       1032862    28932353  [B
        516680    26867360  java.util.LinkedHashMap$Entry
        319831    24307156  org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader
        749522    23984704  java.util.Collections$UnmodifiableMap
        863640    17272800  java.util.HashMap$FrontCache
        709550    11352800  org.apache.lucene.util.BytesRef
        109810     7137650  java.util.LinkedHashMap
        320987     5268068  [I
         59540     5239520  org.apache.lucene.analysis.core.WhitespaceTokenizer
       1149625     4598500  java.lang.Integer
        169168     2706688  org.apache.lucene.util.AttributeSource$State
         19876     1371444  java.util.TreeMap$Node
         56394     1353456  [Lorg.apache.lucene.util.AttributeSource$State;
         56394     1127880  org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl
         52888      951984  org.apache.lucene.analysis.util.CharacterUtils$CharacterBuffer
         56404      902464  org.apache.lucene.analysis.Analyzer$TokenStreamComponents
          5706      867312  java.lang.Class
         14017      686833  org.apache.lucene.util.fst.FST$Arc
         56409      451272  java.util.HashMap$Values
         56380      451040  org.apache.lucene.analysis.tokenattributes.OffsetAttributeImpl
         56396      225584  org.apache.lucene.analysis.tokenattributes.PositionIncrementAttributeImpl
             1      161048  [Ljava.lang.Integer;
          3456       96768  java.util.concurrent.ConcurrentHashMap$HashEntry
                            [[C
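In case it's useful, this is roughly how a histogram like the one above can be captured and browsed (a sketch; <pid> stands for the Solr JVM's process id and heap.hprof is just an illustrative file name; the localhost:7000 class addresses above come from jhat's web UI, which serves on port 7000 by default):

    # quick live-object histogram straight from the running JVM
    jmap -histo:live <pid>

    # full binary heap dump, then browse it in jhat's web UI
    jmap -dump:live,format=b,file=heap.hprof <pid>
    jhat heap.hprof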
-----Original Message-----
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: 28 November 2012 16:23
To: solr-user@lucene.apache.org
Subject: Re: Permanently Full Old Generation...

Have you done a Java heap dump to see what the most common objects are?

-- Jack Krupansky

From: Annette Newton
Sent: Wednesday, November 28, 2012 11:06 AM
To: solr-user@lucene.apache.org
Cc: Andy Kershaw
Subject: Permanently Full Old Generation...

Hi,

I'm hoping someone can help me with an issue we are encountering with SolrCloud. We are seeing strange GC behaviour after running SolrCloud under quite heavy insert load for a period of time: the old generation becomes full, and no amount of garbage collection will free up the memory. I have attached a memory profile; as you can see, it gets progressively worse as the day goes on, to the point where we are doing full garbage collections all the time.

The only way I have found to resolve this issue is to reload the core; subsequent garbage collections then reclaim the used space, which is what happened at 3pm on the memory profile. All the nodes eventually display the same behaviour.

We have multiple threads adding batches of up to 100 documents at a time. I have also attached our schema and config. We are running 4 shards, each with a single replica, and have a 3-node ZooKeeper setup; the 8 Solr instances run on AWS High-Memory Double Extra Large boxes with 34.2 GB of memory and 4 virtual cores.

Thanks in advance,
Annette Newton
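For reference, the core reload mentioned above can be triggered through Solr's CoreAdmin API, along these lines (the host, port, and core name collection1 are illustrative placeholders for the actual setup):

    curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1"

A reload closes the old SolrCore, together with its caches and searchers, and opens a new one against the same index, which would fit the observation that previously uncollectable old-generation space becomes reclaimable afterwards.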