On Sun, 15 Jun 2008 14:38:15 +0200
"Roberto Nieto" <[EMAIL PROTECTED]> wrote:

> Hi Otis,
> 
> Thanks a lot for your interest.
> 
> The main thing I can't understand very well is this: if I have 8 machines that
> will be searchers, for example, why would they have a higher hardware cost if I
> have one big index? If I have 10 smaller indexes I will need
> to search over all of them, so... won't that require the same hardware? I
> understand that if I can search in a subset of the index it would be better
> to split the index, but what if I must search the entire index?
> 
> I can add new searcher machines, so I think that my hardware problem is the RAM.
> Is that right?
> 
> Probably I'm missing something; sorry if my question has an obvious answer.
>

Hi Roberto,
I may be wrong, but with 1 x 300 GB you are more likely to hit a
hardware limit (RAM / CPU; IO is not something that can be changed in your
case) sooner than if you deal with smaller sections (10 x 30 GB) of the
index at a time.
When searching, you search across all the parts (known as shards) and then
merge the per-shard results into one combined result set.

http://wiki.apache.org/solr/DistributedSearch
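
The merge step described above can be sketched roughly as follows. This is only an illustration of the idea, not Solr's actual implementation (Solr handles this internally when you pass the `shards` parameter); the shard contents, document IDs, and scores here are all invented:

```python
# Sketch of merging per-shard search results into one global ranking.
# Each shard returns its own hits sorted by score (descending); the
# searcher merges them and keeps the overall top-k.
import heapq

def merge_shard_results(shard_results, k):
    """Merge per-shard (doc_id, score) lists into one global top-k list.

    Each input list must already be sorted by score, highest first.
    """
    # heapq.merge lazily combines sorted iterables; negating the score
    # turns "descending by score" into the ascending order merge expects.
    merged = heapq.merge(*shard_results, key=lambda hit: -hit[1])
    return list(merged)[:k]

# Invented example data: hits from two shards.
shard1 = [("doc3", 0.9), ("doc1", 0.4)]
shard2 = [("doc7", 0.8), ("doc2", 0.6)]

print(merge_shard_results([shard1, shard2], 3))
# → [('doc3', 0.9), ('doc7', 0.8), ('doc2', 0.6)]
```

Each searcher only has to rank its own (smaller) slice of the index; the coordinating node does the cheap merge at the end.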

I hope this helps clarify things,
B
_________________________
{Beto|Norberto|Numard} Meijome

"The more I see the less I know for sure." 
  John Lennon

I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers makes you go blind. Writing them is worse. You have been
Warned.