If I start with a collection X on two nodes with one shard and two replicas 
(for redundancy, in case a node goes down), so that a node on host1 has 
X_shard1_replica1 and a node on host2 has X_shard1_replica2, then when I try 
SPLITSHARD I generally get X_shard1_0_replica1, X_shard1_1_replica1, and 
X_shard1_0_replica0 all on the node on host1, with X_shard1_1_replica0 sitting 
alone on the node on host2. If host1 were to go down at this point, shard1_0 
would be unavailable.
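
For what it's worth, the split itself is just the plain Collections API call, 
along these lines (host and port here are only placeholders for my setup):

    http://host1:8983/solr/admin/collections?action=SPLITSHARD&collection=X&shard=shard1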

I realize I do have the option to use ADDREPLICA to create X_shard1_0_replica2 
on the node on host2 and then DELETEREPLICA to remove X_shard1_0_replica0, but 
I don't see the logic behind requiring this extra step. Of the half dozen times 
I have experimented with SPLITSHARD (starting with one shard and two replicas 
on separate nodes), it has always put three out of four of the new cores on the 
same node.
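
Concretely, the extra step I have in mind would be something like the 
following (the node name format and the core_nodeN value are placeholders; 
the real values would come from live_nodes and the cluster state):

    http://host1:8983/solr/admin/collections?action=ADDREPLICA&collection=X&shard=shard1_0&node=host2:8983_solr
    http://host1:8983/solr/admin/collections?action=DELETEREPLICA&collection=X&shard=shard1_0&replica=core_nodeN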

Is there a way either to specify placement explicitly or to give hints that 
the new replicas ought to end up on separate nodes?

I am currently running Solr 6.6.0, if that is relevant.
