Thanks again for the info. Hopefully we find some more clues if it
continues to occur. The ops team are looking at alternative deployment
methods as well, so we might end up avoiding the issue altogether.
Ta,
Greg
On 28 February 2014 02:42, Shalin Shekhar Mangar wrote:
I think it is just a side-effect of the current implementation that
the ranges are assigned linearly. You can also verify this by choosing
a document from each shard and running its uniqueKey through the
CompositeIdRouter's sliceHash method, then checking that the resulting
hash falls within that shard's range.
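For example, something along these lines (a rough sketch against the
Solr 4.x API; passing null for the document, params and collection
arguments should be fine for plain ids, but that is an assumption worth
checking against your version):

import org.apache.solr.common.cloud.CompositeIdRouter;
import org.apache.solr.common.cloud.DocRouter;

public class SliceHashCheck {
  public static void main(String[] args) {
    CompositeIdRouter router = new CompositeIdRouter();
    // hash the uniqueKey of a document picked from one of the shards
    int hash = router.sliceHash("SOME_UNIQUE_KEY", null, null, null);
    // compare against the calculated ranges for the 15-shard collection
    for (DocRouter.Range range : router.partitionRange(15, router.fullRange())) {
      if (range.includes(hash)) {
        System.out.println(Integer.toHexString(hash) + " falls in " + range);
      }
    }
  }
}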
Thanks Shalin, that code might be helpful... do you know if there is a
reliable way to line up the ranges with the shard numbers? When the problem
occurred we had 80 million documents already in the index, and could not
issue even a basic 'deleteById' call. I'm tempted to assume they are just
assigned in order.
If you have 15 shards, and assuming that you've never used shard
splitting, you can calculate the shard ranges by using:

new CompositeIdRouter().partitionRange(15, new CompositeIdRouter().fullRange())
This gives me:

[8000-9110, 9111-a221, a222-b332, b333-c443, c444...]
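To line those up with shard numbers, a quick sketch, assuming (as noted
above) that the current implementation simply hands the ranges to shard1
through shard15 in order:

import java.util.List;

import org.apache.solr.common.cloud.CompositeIdRouter;
import org.apache.solr.common.cloud.DocRouter;

public class PrintShardRanges {
  public static void main(String[] args) {
    CompositeIdRouter router = new CompositeIdRouter();
    List<DocRouter.Range> ranges = router.partitionRange(15, router.fullRange());
    for (int i = 0; i < ranges.size(); i++) {
      // Range.toString() prints the min and max hash values in hex
      System.out.println("shard" + (i + 1) + " -> " + ranges.get(i));
    }
  }
}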