On 7/31/2013 4:27 AM, Sinduja Rajendran wrote:
> I am running solr 4.0 in a cloud. We have close to 100M documents. The data
> is from a single DB table. I use dih.
> Our solrCloud has 3 zookeepers, one tomcat, 2 solr instances in same
> tomcat. We have 8 GB Ram.
> 
> After indexing 14M, my indexing fails with the below exception.
> 
> solr org.apache.lucene.index.MergePolicy$MergeException:
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> 
> I tried increasing the GC value to the App server
> 
>  -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80
> 
> But after giving the command, my indexing went drastically down. It
> was indexing only 15k documents in 20 minutes. Earlier it was 300k
> in 20 minutes.

First thing to mention is that Solr 4.0 was extremely buggy; upgrading
would be advisable.  In the meantime:

An OutOfMemoryError means that Solr needs more heap memory than the JVM
is allowed to use.  The Solr Admin UI dashboard will tell you how much
memory is allocated to your JVM, which you can increase with the -Xmx
parameter.  Real RAM must be available from the system in order to
increase the heap size.
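As a rough sketch of what raising the heap looks like on Tomcat: the
usual place for JVM options is a setenv.sh next to catalina.sh.  The
path and the 4g figure below are assumptions for illustration; size the
heap to what your machine can actually spare.

```shell
# Hypothetical $CATALINA_HOME/bin/setenv.sh -- adjust -Xms/-Xmx to the
# RAM you can spare; both Solr instances share this one heap since they
# run in the same Tomcat.
CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx4g"
export CATALINA_OPTS
```

Setting -Xms equal to -Xmx avoids heap-resize pauses during heavy
indexing; the GC flags you tried can be appended to the same variable
once the heap itself is large enough.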

The options you have given just change the GC collector and tune one
aspect of the new collector; they don't increase the heap at all.  Here
are some things that may help you:

http://wiki.apache.org/solr/SolrPerformanceProblems
http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning

After looking over that information and making adjustments, if you are
still having trouble, we can go over your config and all your details to
see what can be done.

You said that both of your Solr instances are running in the same
tomcat.  Just FYI - because you aren't running all functions on separate
hardware, your setup is not fault tolerant.  Machine failures DO happen,
no matter how much redundancy you build into that server.  If you are
running all this on a redundant VM solution that has live migration of
running VMs, then my statement isn't accurate.

Thanks,
Shawn
