The case is actually any time you need to add another shard.  With the
current implementation the hashing approach breaks down as soon as a
new shard is added, because documents hash to different shards once
the shard count changes.  Even with many small shards I think you
still have this issue when you're adding/updating/deleting docs.  I'm
definitely interested in hearing other approaches that would work,
though, if there are any.
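
To make that concrete, here's a minimal sketch of the remapping
problem.  It assumes shard assignment is essentially
hash(id) % numShards, which may not match the current code exactly,
and the shard counts are made up:

public class ShardRemapDemo {
    // Assumed assignment rule: a doc goes to hash(id) mod numShards.
    static int shardFor(String id, int numShards) {
        // floorMod keeps the result non-negative even if hashCode() is negative.
        return Math.floorMod(id.hashCode(), numShards);
    }

    public static void main(String[] args) {
        int before = 4, after = 5;   // hypothetical shard counts
        int moved = 0, total = 10000;
        for (int i = 0; i < total; i++) {
            String id = "doc" + i;
            if (shardFor(id, before) != shardFor(id, after)) {
                moved++;
            }
        }
        // Going from 4 to 5 shards, roughly 80% of docs map to a different
        // shard, so existing docs would have to be moved or reindexed.
        System.out.printf("%d of %d docs changed shards%n", moved, total);
    }
}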

On Sat, Jan 28, 2012 at 7:53 PM, Lance Norskog <goks...@gmail.com> wrote:
> If this is to do load balancing, the usual solution is to use many
> small shards, so you can just move one or two without doing any
> surgery on indexes.
>
> On Sat, Jan 28, 2012 at 2:46 PM, Yonik Seeley
> <yo...@lucidimagination.com> wrote:
>> On Sat, Jan 28, 2012 at 3:45 PM, Jamie Johnson <jej2...@gmail.com> wrote:
>>> Second question, I know there are discussions about storing the shard
>>> assignments in ZK (i.e. shard 1 is responsible for hashed values
>>> between 0 and 10, shard 2 is responsible for hashed values between 11
>>> and 20, etc), but this isn't done yet, right?  So currently the hashing
>>> is based on the number of shards, instead of having the assignments
>>> calculated the first time you start the cluster (i.e. based on
>>> numShards) so they could be adjusted later, right?
>>
>> Right.  Storing the hash range for each shard/node is something we'll
>> need in order to dynamically change the number of shards (as opposed to
>> replicas), so we'll need to start doing it sooner or later.
>>
>> -Yonik
>> http://www.lucidimagination.com
>
>
>
> --
> Lance Norskog
> goks...@gmail.com
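
For what it's worth, here's a rough sketch of the range-based routing
Yonik describes above: each shard owns an explicit hash range (the
mapping could live in ZK), so splitting one shard's range only moves
that shard's documents.  The class and method names below are just
illustrative, not anything that exists in Solr:

import java.util.Map;
import java.util.TreeMap;

public class HashRangeRouting {
    // Upper bound (inclusive) of each shard's hash range -> shard name.
    // In a real cluster this mapping would be stored centrally (e.g. in ZK).
    private final TreeMap<Integer, String> upperBoundToShard = new TreeMap<>();

    void assign(int upperBoundInclusive, String shard) {
        upperBoundToShard.put(upperBoundInclusive, shard);
    }

    String shardFor(String id) {
        int hash = id.hashCode();
        // The first range whose upper bound is >= the doc's hash owns it.
        Map.Entry<Integer, String> e = upperBoundToShard.ceilingEntry(hash);
        return e != null ? e.getValue() : upperBoundToShard.firstEntry().getValue();
    }

    public static void main(String[] args) {
        HashRangeRouting routing = new HashRangeRouting();
        routing.assign(0, "shard1");                 // MIN_VALUE .. 0
        routing.assign(Integer.MAX_VALUE, "shard2"); // 1 .. MAX_VALUE

        // Splitting shard2's range only affects docs that hashed to shard2;
        // shard1's assignments are untouched, unlike the modulo approach.
        routing.assign(Integer.MAX_VALUE / 2, "shard3");  // 1 .. MAX_VALUE/2
        System.out.println(routing.shardFor("doc42"));
    }
}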
