On 4/19/2018 6:28 AM, Bernd Fehling wrote:
How would you set up a SolrCloud and why?


  shard1       shard2       shard3
--------     --------     --------
| ---- |     | ---- |     | ---- |
| |r1| |     | |r1| |     | |r1| |
| ---- |     | ---- |     | ---- |
|      |     |      |     |      |
| ---- |     | ---- |     | ---- |
| |r2| |     | |r2| |     | |r2| |
| ---- |     | ---- |     | ---- |
|      |     |      |     |      |
| ---- |     | ---- |     | ---- |
| |r3| |     | |r3| |     | |r3| |
| ---- |     | ---- |     | ---- |
--------     --------     --------
  host1        host2        host3

I'm assuming that "r1" means replica1.

If you set it up this way, you lose one third of the whole index (all replicas of one shard) if *any* host goes down.  All queries will fail in that situation if shards.tolerant is not set.  With shards.tolerant=true, you would get partial results.
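
For what it's worth, with SolrJ that parameter can go directly on the query.  A tiny sketch (untested, and the query string is just a placeholder):

import org.apache.solr.client.solrj.SolrQuery;

public class TolerantQuery {
    public static void main(String[] args) {
        // shards.tolerant=true asks Solr to return whatever the reachable
        // shards can provide instead of failing the whole request.
        SolrQuery query = new SolrQuery("*:*");
        query.set("shards.tolerant", "true");
        System.out.println(query);
    }
}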

So you have three machines that are all single points of failure.  This setup is a bad idea.

         --------     --------     --------
         | ---- |     | ---- |     | ---- |
shard1   | |r1| |     | |r2| |     | |r3| |
         | ---- |     | ---- |     | ---- |
         |      |     |      |     |      |
         | ---- |     | ---- |     | ---- |
shard2   | |r1| |     | |r2| |     | |r3| |
         | ---- |     | ---- |     | ---- |
         |      |     |      |     |      |
         | ---- |     | ---- |     | ---- |
shard3   | |r1| |     | |r2| |     | |r3| |
         | ---- |     | ---- |     | ---- |
         --------     --------     --------
          host1        host2        host3

With this setup, when any host fails, you still have two working replicas of all shards.  If two hosts fail, you still have one working replica.  There are no single points of failure, as long as your clients are able to direct queries to a working replica.  SolrJ clients using CloudSolrClient will do this automatically.  Other clients may need a load balancer sitting in front of the cloud.
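
Here's a rough SolrJ sketch of that -- untested, written against the 7.x API, with placeholder ZooKeeper addresses and a placeholder collection name:

import java.util.Arrays;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CloudQuery {
    public static void main(String[] args) throws Exception {
        // CloudSolrClient watches the cluster state in ZooKeeper and sends
        // each request to a live replica, so it skips any host that is down.
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181"),
                Optional.empty()).build()) {
            client.setDefaultCollection("mycollection");
            QueryResponse rsp = client.query(new SolrQuery("*:*"));
            System.out.println("numFound: " + rsp.getResults().getNumFound());
        }
    }
}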

This is the recommended way of setting up replicas.
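
To get the second layout, ask for three shards and a replicationFactor of three when you create the collection -- Solr's default placement avoids putting two replicas of the same shard on the same node when it can.  A sketch using the SolrJ Collections API (again untested, 7.x API, placeholder names):

import java.util.Arrays;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class CreateCollection {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181"),
                Optional.empty()).build()) {
            // Three shards, three replicas each, spread across the live nodes.
            // "mycollection" and the "_default" configset are placeholders.
            CollectionAdminRequest
                    .createCollection("mycollection", "_default", 3, 3)
                    .setMaxShardsPerNode(3)
                    .process(client);
        }
    }
}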

Thanks,
Shawn
