On Tue, 29 Apr 2008 10:10:09 +0200
"Nico Heid" <[EMAIL PROTECTED]> wrote:

> So now the Question:
> Is there a way to split a too-big index into smaller ones? Do I have to
> create more instances at the beginning, so that I will not run out of power
> and space? (which will add quite a bit of redundancy of data)
> Let's say I miscalculated and used only 2 indices, but now I see I need at
> least 4.

Hi Nico,
Being able to split the index without having to reindex the lot would be a
nice option :)

One approach we use in a project I am working on is to split the full extent
of your domain (user IDs) into equal parts from the start. This gives us n
clusters, which is as far as we will ever need to grow outwards; we then grow
each cluster in depth as needed.
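To sketch what I mean, here is a minimal example of that range-based scheme
(the names and the maximum-ID bound are made up for illustration; you would
plug in whatever your setup actually uses):

```python
# Hypothetical bounds for illustration only.
MAX_USER_ID = 1_000_000
N_CLUSTERS = 4

def cluster_for_user(user_id: int) -> int:
    """Map a numeric user id to one of N_CLUSTERS fixed, equal-width slices."""
    width = MAX_USER_ID // N_CLUSTERS
    # min() keeps ids at the very top of the range in the last cluster.
    return min(user_id // width, N_CLUSTERS - 1)
```

Each cluster owns a fixed slice of the ID space, so the cluster count never
changes; only the clusters themselves grow.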

It obviously helps if you have an equal (or random) distribution across your
clusters (we do). Given that you probably won't know how many users you'll
get, your case is different from ours.

To even out the distribution of user IDs across clusters, you can partition on
a function of the user ID (e.g., md5(user_id)) instead of the user ID itself.
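A quick sketch of that hashing trick (function name is mine, not anything
standard):

```python
import hashlib

def hash_cluster_for_user(user_id: str, n_clusters: int) -> int:
    """Bucket a user id by hashing it, so sequential ids spread evenly."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    # Interpret the hex digest as an integer and reduce modulo the cluster count.
    return int(digest, 16) % n_clusters
```

The result is deterministic for a given ID, so lookups always land on the same
cluster, and even dense sequential IDs scatter uniformly across the buckets.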

HIH,
B
_________________________
{Beto|Norberto|Numard} Meijome

Percussive Maintenance - The art of tuning or repairing equipment by hitting it.

I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers makes you go blind. Writing them is worse. You have been
Warned.
