On Thu, Mar 1, 2012 at 12:27 AM, Jamie Johnson wrote:
> Is there a ticket around doing this?
Around splitting shards?
The easiest thing to consider is just splitting a single shard in two,
reusing some of the existing buffering/replication mechanisms we have.
1) create two new shards to represent
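For illustration only, here is a minimal sketch of the range math behind such a split, assuming each shard owns an inclusive [min, max] slice of the hash space; the HashRange class and its split() method are hypothetical, not actual SolrCloud code:

// Hypothetical holder for a shard's inclusive [min, max] hash range,
// with a split() that yields the two child ranges for the new shards.
class HashRange {
  final int min;
  final int max;
  HashRange(int min, int max) { this.min = min; this.max = max; }
  HashRange[] split() {
    // Midpoint computed in long arithmetic to avoid int overflow.
    int mid = (int) (((long) min + (long) max) >> 1);
    return new HashRange[] { new HashRange(min, mid), new HashRange(mid + 1, max) };
  }
}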
Mark,
Is there a ticket around doing this? If the work/design was written
down somewhere the community might have a better idea of how exactly
we could help.
On Wed, Feb 29, 2012 at 11:21 PM, Mark Miller wrote:
>
> On Feb 28, 2012, at 9:33 AM, Jamie Johnson wrote:
>
>> where specifically this is on the roadmap for SolrCloud. Anyone
>> else have those details?
On Feb 28, 2012, at 9:33 AM, Jamie Johnson wrote:
> where specifically this is on the roadmap for SolrCloud. Anyone
> else have those details?
I think we would like to do this sometime in the near future, but I don't know
exactly what time frame it fits into yet. There is a lot to do still, and we
Very interesting, Andre. I believe this is in line with the larger
vision: specifically, you'd use the hashing algorithm to create the
initial splits in the forwarding table, then if you needed to add a
new shard you'd need to split/merge an existing range. I think
creating the algorithm is probably
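As a rough sketch of what those initial splits could look like (purely illustrative: the class, the shard names, and the even division of a signed 32-bit hash space are assumptions, not the project's actual design):

// Carve the signed 32-bit hash space into numShards contiguous slices to
// seed a forwarding table of shardName -> [min, max].
import java.util.LinkedHashMap;
import java.util.Map;

class InitialSplits {
  static Map<String, int[]> create(int numShards) {
    Map<String, int[]> table = new LinkedHashMap<String, int[]>();
    long sliceSize = (1L << 32) / numShards;
    long min = Integer.MIN_VALUE;
    for (int i = 0; i < numShards; i++) {
      long max = (i == numShards - 1) ? Integer.MAX_VALUE : min + sliceSize - 1;
      table.put("shard" + (i + 1), new int[] { (int) min, (int) max });
      min = max + 1;
    }
    return table;
  }
}

Adding a new shard would then mean replacing one entry of that table with two narrower entries (a split), or collapsing two adjacent entries into one (a merge), without touching the other ranges.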
Consistent hashing seems like a solution to reduce the shuffling of keys
when adding/deleting shards:
http://www.tomkleinpeter.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/
Twitter describes a more flexible sharding approach in the section "Gizzard handles
partitioning through a forwarding table"
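For readers skimming the link above, here is a toy version of the consistent-hashing idea it describes (not Solr code; the virtual-node count and the stand-in hash function are arbitrary choices for the example):

// Toy consistent-hash ring: each shard gets several virtual points on the
// ring, and a key is routed to the first point clockwise from its hash.
// Adding or removing a shard only remaps keys near that shard's points,
// instead of reshuffling everything.
import java.util.SortedMap;
import java.util.TreeMap;

class ConsistentRing {
  private final TreeMap<Integer, String> ring = new TreeMap<Integer, String>();
  private final int virtualNodes = 64;

  void addShard(String shard) {
    for (int i = 0; i < virtualNodes; i++) {
      ring.put(hash(shard + "#" + i), shard);
    }
  }

  void removeShard(String shard) {
    for (int i = 0; i < virtualNodes; i++) {
      ring.remove(hash(shard + "#" + i));
    }
  }

  String shardFor(String docId) {
    int h = hash(docId);
    SortedMap<Integer, String> tail = ring.tailMap(h);
    return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
  }

  // Stand-in hash for the example; the ported Murmur hash mentioned in this
  // thread would be the kind of function to use here.
  private int hash(String s) {
    return s.hashCode();
  }
}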
There is more, but it was lost since I was doing so many inserts. The
issue looks like it's getting thrown by a custom FilterFactory I have
and is completely unrelated to SolrCloud. I am trying to confirm now
though.
On Thu, Feb 9, 2012 at 3:03 PM, Mark Miller wrote:
> Is that the entire stack trace - no other exception logged?
Is that the entire stack trace - no other exception logged?
On Feb 9, 2012, at 2:44 PM, Jamie Johnson wrote:
> I just ran a test with a very modest cluster (exactly the same as
> http://outerthought.org/blog/491-ot.html). I then indexed 10,000
> documents into the cluster. From what I can tell
The case is actually anytime you need to add another shard. With the
current implementation, if you need to add a new shard the existing
hashing approach breaks down. Even with many small shards I think you
still have this issue when you're adding/updating/deleting docs. I'm
definitely interested
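To make the breakdown concrete, here is a toy comparison (illustrative only; the ids and shard counts are made up) of how many documents change shards when a plain hash(id) % numShards scheme grows from 4 to 5 shards:

// Count how many of 10,000 doc ids land on a different shard when a naive
// hash(id) % numShards scheme goes from 4 shards to 5.
class ModuloReshuffle {
  public static void main(String[] args) {
    int moved = 0;
    int total = 10000;
    for (int i = 0; i < total; i++) {
      String id = "doc-" + i;
      int before = Math.abs(id.hashCode()) % 4;
      int after = Math.abs(id.hashCode()) % 5;
      if (before != after) {
        moved++;
      }
    }
    // Roughly four out of five ids move; with explicit hash ranges, only the
    // docs of the shard being split would have to move.
    System.out.println(moved + " of " + total + " ids changed shards");
  }
}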
If this is to do load balancing, the usual solution is to use many
small shards, so you can just move one or two without doing any
surgery on indexes.
On Sat, Jan 28, 2012 at 2:46 PM, Yonik Seeley
wrote:
> On Sat, Jan 28, 2012 at 3:45 PM, Jamie Johnson wrote:
>> Second question, I know there are
On Sat, Jan 28, 2012 at 3:45 PM, Jamie Johnson wrote:
> Second question, I know there are discussions about storing the shard
> assignments in ZK (i.e. shard 1 is responsible for hashed values
> between 0 and 10, shard 2 is responsible for hashed values between 11
> and 20, etc), this isn't done yet
Thanks Yonik! I had not dug deeply into it, but I had expected to find a
class named Murmur, which I did not.
Second question, I know there are discussions about storing the shard
assignments in ZK (i.e. shard 1 is responsible for hashed values
between 0 and 10, shard 2 is responsible for hashed values between 11
and 20, etc), this isn't done yet
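For what it's worth, once those assignments exist (wherever they end up being stored), the lookup itself can be as simple as a sorted map keyed by each range's lower bound; this is just a sketch using the example numbers above, not the actual ZK layout:

// The table maps each range's lower bound to a shard, and floorEntry()
// finds the shard whose range contains a given hash.
import java.util.NavigableMap;
import java.util.TreeMap;

class RangeLookup {
  public static void main(String[] args) {
    NavigableMap<Integer, String> lowerBoundToShard = new TreeMap<Integer, String>();
    lowerBoundToShard.put(0, "shard1");   // shard1 owns hashes 0..10
    lowerBoundToShard.put(11, "shard2");  // shard2 owns hashes 11..20
    int hash = 7;
    // floorEntry returns null if the hash is below the smallest lower bound.
    String shard = lowerBoundToShard.floorEntry(hash).getValue();
    System.out.println("hash " + hash + " -> " + shard);  // hash 7 -> shard1
  }
}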
On Fri, Jan 27, 2012 at 11:46 PM, Jamie Johnson wrote:
> I just want to verify some of the features in regards to SolrCloud
> that are now on Trunk
>
> documents added to the cluster are automatically distributed amongst
> the available shards (I had seen that Yonik had ported the Murmur
> hash, b