The thinking here was to divide the total indexed data among N partitions,
since the amount of data will be massive.  Each partition would likely live
on its own physical disk(s), and for searching I could use
ParallelMultiSearcher to dispatch the query to each of these partitions as a
separate Searchable.  I know the Lucene documentation mentions that there is
not much gain in using ParallelMultiSearcher versus MultiSearcher (which
searches a set of Searchables sequentially) when running against a single
disk, so with separate physical disks the parallel version might yield a
more tangible benefit.
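To make the idea concrete, here is a minimal stdlib-only sketch of the two pieces: round-robin assignment of documents to N partitions, and fanning a query out to every partition in parallel before merging the hits.  This is not the actual Lucene ParallelMultiSearcher API; the class name `PartitionedSearchSketch`, the `partitionFor` helper, and the use of plain string lists as stand-ins for index partitions are all hypothetical, just to illustrate the dispatch-and-merge pattern.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionedSearchSketch {

    // Round-robin: document i goes to partition i % N, so partition
    // sizes stay roughly equal as documents are added.
    static int partitionFor(int docCounter, int numPartitions) {
        return docCounter % numPartitions;
    }

    // Fan the query out to all partitions in parallel (one task per
    // partition, analogous to one Searchable per disk) and merge hits.
    static List<String> parallelSearch(List<List<String>> partitions, String term)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(partitions.size());
        List<Future<List<String>>> futures = new ArrayList<>();
        for (List<String> partition : partitions) {
            futures.add(pool.submit(() -> {
                List<String> hits = new ArrayList<>();
                for (String doc : partition) {
                    if (doc.contains(term)) {
                        hits.add(doc);
                    }
                }
                return hits;
            }));
        }
        // Collect results in submission order, so the merge is deterministic.
        List<String> merged = new ArrayList<>();
        for (Future<List<String>> f : futures) {
            merged.addAll(f.get());
        }
        pool.shutdown();
        return merged;
    }
}
```

The parallel speedup only materializes when the per-partition tasks are not contending for the same device, which matches the point above about separate physical disks.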

-John

On 4/27/06, Chris Hostetter <[EMAIL PROTECTED]> wrote:
>
>
> : Suppose I want the xml input submitted to solr to be distributed among a
> : fixed set of partitions; basically, something like round-robin among each
> : of them, so that each directory has a relatively equal size in terms of
> : # of segments.  Is there an easy way to do this?  I took a quick look at
> : the solr
>
> I'm not sure if I'm understanding your question: what would the
> motivation be for doing something like this? ... what would the usage be
> like from a search perspective once you had built up these directories?
>
>
> -Hoss
>
>
