We currently have around 200GB of index data on each server.
I'm aware of the RAM issue, but it somehow doesn't seem related.
I would expect search latency problems, not strange EofExceptions.

Regarding the http.timeout - I didn't change anything concerning it. Do I
need to explicitly set something different from what Solr comes with
out of the box?
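
(If it matters, this is how I understand a client could set these
timeouts explicitly with SolrJ 4.x - just a sketch; the URL and values
below are placeholders, not our real settings:)

    import org.apache.solr.client.solrj.impl.HttpSolrServer;

    public class TimeoutExample {
        public static void main(String[] args) {
            // Sketch for SolrJ 4.x: raise the client-side timeouts so a
            // slow response around commit time doesn't end in a premature
            // disconnect, which the server logs as an EofException.
            HttpSolrServer server = new HttpSolrServer(
                    "http://localhost:8983/solr/collection1");
            server.setSoTimeout(300000);        // SO_TIMEOUT, ms (5 min)
            server.setConnectionTimeout(60000); // connect timeout, ms
        }
    }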

I'm also monitoring garbage collector metrics and I don't see anything
unusual.
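
(Concretely, I'm watching the counters that the standard
java.lang.management API exposes - a minimal sketch of the kind of check
I mean; the class name is just for illustration:)

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Minimal sketch: print cumulative GC counts and accumulated GC time
    // for each collector in this JVM. A collector whose time grows by
    // whole seconds between samples would point at a pause problem.
    public class GcStats {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc :
                    ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName()
                        + " count=" + gc.getCollectionCount()
                        + " timeMs=" + gc.getCollectionTime());
            }
        }
    }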

Shawn Heisey-4 wrote
> On 3/16/2014 10:34 AM, adfel70 wrote:
>> I have a 12-node Solr 4.6.1 cluster. Each node has 2 Solr processes
>> running on 8GB-heap JVMs. Each node has a total of 64GB of memory.
>> My current collection (7 shards, 3 replicas) has around 500 million docs. 
>> I'm performing bulk indexing into the collection. I set softCommit to 10
>> minutes and hardCommit openSearcher=false to 15 minutes.
> 
> How much index data does each server have on it?  This would be the sum
> total of the index directories of all your cores.
> 
>> I recently started seeing the following problems while indexing -
>> every 10 minutes (and I assume these are the 10-minute soft-commit
>> cycles) I get the following errors:
>> 1. EofException from Jetty in HttpOutput.write, sent from
>> SolrDispatchFilter
>> 2. queries to all cores start getting high latencies (more than 10
>> seconds)
> 
> EofException errors happen when your client disconnects before the
> request is complete.  I would strongly recommend that you *NOT*
> configure hard timeouts for your client connections, or that you make
> them really long, five minutes or so.  For SolrJ, this is the SO_TIMEOUT.
> 
> These problems sound like one of two things.  It could be either or both:
> 
> 1) You don't have enough RAM to cache your index effectively.  With 64GB
> of RAM and 16GB heap, you have approximately 48GB of RAM left over for
> other software and the OS disk cache.  If the total index size on each
> machine is in the neighborhood of 60GB (or larger), this might be a
> problem.  If you have software other than Solr running on the machine,
> you must subtract its direct and indirect memory requirements from the
> available OS disk cache.
> 
> 2) Indexing results in a LOT of object creation, most of which exist for
> a relatively short time.  This can result in severe problems with
> garbage collection pauses.
> 
> Both problems listed above (and a few others) are discussed at the wiki
> page linked below.  As you will read, there are two major causes of GC
> trouble - a heap that's too small and incorrect (or nonexistent) GC
> tuning.  With a very large index like yours, either or both of these
> could be happening.
> 
> http://wiki.apache.org/solr/SolrPerformanceProblems
> 
> Side note: You should only be running one Solr process per machine.
> Running multiple processes creates additional memory overhead.  Any hard
> limits that you might have run into with a single Solr process can be
> overcome with configuration options for Jetty, Solr, or the operating
> system.
> 
> Thanks,
> Shawn
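
(For completeness, the commit settings I described at the start of the
thread correspond to solrconfig.xml entries like these - a sketch, with
the times in milliseconds:)

    <!-- Hard commit every 15 minutes: flushes recent updates to disk
         but does not open a new searcher. -->
    <autoCommit>
      <maxTime>900000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>

    <!-- Soft commit every 10 minutes: opens a new searcher, which is
         exactly when we see the EofExceptions and latency spikes. -->
    <autoSoftCommit>
      <maxTime>600000</maxTime>
    </autoSoftCommit>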
