This is only true the *first* time you start the cluster.  As mentioned
earlier, the correct way to assign shards to cores is to use the collection
API.  Failing that, you can start the cores in a predetermined order, and
each core will assign itself a shard/replica when it first starts up.
From that point on, that mapping is recorded in clusterstate.json and will
persist until you change it (by deleting the cluster state, or by using the
collection/core API to move or remove a core).  It is a kludgy approach,
which is why it generally isn't recommended for newcomers, but by starting
the first cores in a particular order you can get exactly the distribution
you want.
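
If you want to check which core ended up with which shard after startup,
you can read clusterstate.json straight out of ZooKeeper.  Here is a
minimal Python sketch using the kazoo client; the ZooKeeper host is a
placeholder, and the exact field names can differ between Solr versions,
so treat the layout below as an approximation rather than a spec:

    from kazoo.client import KazooClient
    import json

    # Connect to the ZooKeeper ensemble backing the SolrCloud cluster
    # (host/port is an assumption -- substitute your own ensemble).
    zk = KazooClient(hosts="zk1:2181")
    zk.start()

    # clusterstate.json holds the persisted shard -> replica -> core mapping.
    data, _stat = zk.get("/clusterstate.json")
    state = json.loads(data.decode("utf-8"))

    # Print which node/core is serving each replica of each shard.
    for collection, info in state.items():
        for shard, shard_info in info["shards"].items():
            for replica, replica_info in shard_info["replicas"].items():
                print(collection, shard, replica,
                      replica_info.get("node_name"), replica_info.get("core"))

    zk.stop()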

The collection API is generally good because it has some logic to
distribute shards across machines, but you can't be very specific with it:
you can't say "I want shard 1 on machine A, and its replicas on machines
B, C & D".  So we use the "start order" mechanism for our production
systems, because we want to place shards on specific machines.  We have
256 shards, so we want to know exactly which set of cores and machines is
required in order to have a "full collection" of data.  As long as you are
aware of the limitations of each mechanism, both work.
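
For comparison, this is roughly what a Collections API CREATE call looks
like.  createNodeSet lets you restrict which nodes are used, but as noted
above it doesn't let you pin a particular shard to a particular machine.
The host names and collection name below are made up; the parameters are
the standard CREATE ones:

    import urllib.parse
    import urllib.request

    # Hypothetical Solr node and collection name -- substitute your own.
    solr = "http://solr1:8983/solr"
    params = urllib.parse.urlencode({
        "action": "CREATE",
        "name": "mycollection",
        "numShards": 4,
        "replicationFactor": 2,
        "maxShardsPerNode": 2,
        # Restricts placement to these nodes, but Solr still decides
        # which shard lands on which of them.
        "createNodeSet": "solr1:8983_solr,solr2:8983_solr,"
                         "solr3:8983_solr,solr4:8983_solr",
    })

    with urllib.request.urlopen(solr + "/admin/collections?" + params) as resp:
        print(resp.read().decode("utf-8"))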


On 26 February 2014 10:26, Oliver Schrenk <oliver.schr...@gmail.com> wrote:

> > There is a round robin process when assigning nodes at cluster. If you
> want
> > to achieve what you want you should change your Solr start up order.
>
> Well that is just weird. To bring a cluster to a reproducible state, I
> have to bring the whole cluster down, and start it up again in a specific
> order?
>
> What order do you suggest, to have a failover mechanism?
