I am running eight cores; each core serves a different type of search, so
there is no overlap in their function.  Some cores hold millions of
documents, and search times are quite fast.  I don't see any real slowdown
from running multiple cores, as long as there is enough memory to hold
whatever each of them loads.  Try it out, but first make sure that the
functionality you are actually looking for isn't sharding rather than
multiple cores...
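For reference, a multicore setup is just a matter of listing the cores in
solr.xml (the core names and instanceDir values below are only placeholders,
adjust them to your own layout):

    <solr persistent="true">
      <cores adminPath="/admin/cores">
        <core name="products" instanceDir="products"/>
        <core name="articles" instanceDir="articles"/>
      </cores>
    </solr>

Each core then gets its own URL, e.g.
http://localhost:8983/solr/products/select?q=...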

http://wiki.apache.org/solr/DistributedSearch
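
Sharding, by contrast, splits one logical index across several cores or
boxes and fans the query out with the shards parameter, roughly like this
(host and core names are only an example):

    http://localhost:8983/solr/core0/select?q=ipod&shards=localhost:8983/solr/core0,localhost:8983/solr/core1

If your cores really hold unrelated document types, plain multicore is the
natural fit; sharding is for when a single logical index grows too big for
one core.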


-----Original Message-----
From: Yury Kats [mailto:yuryk...@yahoo.com] 
Sent: Thursday, December 15, 2011 10:31 AM
To: solr-user@lucene.apache.org
Subject: Re: Core overhead

On 12/15/2011 1:07 PM, Robert Stewart wrote:

> I think overall memory usage would be close to the same.

Is this really so? I suspect that the consumed memory is in direct
proportion to the number of terms in the index. I also suspect that
if I divided 1 core with N terms into 10 smaller cores, each smaller
core would have much more than N/10 terms. Say I'm indexing English
texts; it's likely that each smaller core would end up with almost the
same number of terms, close to the original N. Not so?
