I have 50 million documents, each about 10K in size, split across 4 index
partitions of 12.5 million documents each. Each index partition is about
80GB. A search typically takes about 3-5 seconds; single-word searches are
faster than multi-word searches. I'm still working out the largest index
size that Solr can handle while keeping responses under a second.
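
Roughly, each search is sent to every partition and the results are merged on
the client; below is a minimal sketch of that fan-out (the partition URLs,
field list, and score-based merge are assumptions, not the exact setup):

# Minimal fan-out sketch, not the actual setup: assumes each partition is a
# separate Solr instance reachable over HTTP with the JSON response writer,
# and that results are merged client-side by score.
import json
import urllib.parse
import urllib.request

PARTITIONS = [
    "http://solr1:8983/solr/select",
    "http://solr2:8983/solr/select",
    "http://solr3:8983/solr/select",
    "http://solr4:8983/solr/select",
]

def search_all(query, rows=10):
    params = urllib.parse.urlencode(
        {"q": query, "rows": rows, "fl": "id,score", "wt": "json"}
    )
    merged = []
    for url in PARTITIONS:
        with urllib.request.urlopen(url + "?" + params) as resp:
            merged.extend(json.loads(resp.read())["response"]["docs"])
    # Keep the global top N by score. Scores from separate indexes are not
    # strictly comparable, since each partition has its own term statistics.
    merged.sort(key=lambda d: d["score"], reverse=True)
    return merged[:rows]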

Thanks,
Venkatesh

On 3/27/07, Kevin Osborn <[EMAIL PROTECTED]> wrote:

I know there are a bunch of variables here (RAM, number of fields, hits,
etc.), but I am trying to get a sense of how big an index, in terms of number
of documents, Solr can reasonably handle. I have heard of indexes of 3-4
million documents running fine, but I have no idea what a reasonable upper
limit might be.

I have a large number of documents, and about 200-300 customers would have
access to varying subsets of those documents. One possible strategy is to
keep everything in a single large index but duplicate a document for each
customer that has access to it. That would make the total number of documents
huge, so I am trying to get a sense of how big is too big. Each document will
have about 30 fields, mostly strings, with some text, ints, and floats.
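
To make the duplicate-per-customer idea concrete, here is a rough sketch: each
copy of a document carries the ID of the customer it belongs to, and searches
are restricted with a filter query. The customer_id field name and URL are
assumptions, not part of any existing schema.

# Sketch of the duplicate-per-customer idea: every copy of a document is
# tagged with a hypothetical customer_id field, and each search is restricted
# to the requesting customer's copies via a filter query (fq), which Solr
# caches in its filterCache.
import urllib.parse

def customer_search_url(solr_url, customer_id, user_query, rows=10):
    params = urllib.parse.urlencode({
        "q": user_query,
        "fq": "customer_id:" + customer_id,
        "rows": rows,
    })
    return solr_url + "/select?" + params

# Example: customer_search_url("http://localhost:8983/solr", "cust42", "widget")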

An extension to this strategy is to segment the customers among various
instances of Solr.
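
For that segmentation, a rough sketch of routing a customer to an instance
(the instance URLs and hash-based assignment are assumptions; a real
deployment might keep an explicit customer-to-instance table instead):

# Sketch of segmenting customers across Solr instances.
import zlib

SOLR_INSTANCES = [
    "http://solr-a:8983/solr",
    "http://solr-b:8983/solr",
    "http://solr-c:8983/solr",
]

def instance_for_customer(customer_id):
    # Stable hash so the same customer always routes to the same instance.
    idx = zlib.crc32(customer_id.encode("utf-8")) % len(SOLR_INSTANCES)
    return SOLR_INSTANCES[idx]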

