On 7/27/2018 8:26 PM, Erick Erickson wrote:
> Yes with some fiddling as far as "placement rules", start here:
> https://lucene.apache.org/solr/guide/6_6/rule-based-replica-placement.html
>
> The idea (IIUC) is that you provide a "snitch" that identifies what
> "rack" the Solr instance is on and can define placement rules that
> define "don't put more than one thingy on the same rack". "Thingy"
> here is replica, shard, whatever as defined by other placement rules.
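
For anyone who wants a concrete starting point from that page, here is a
minimal sketch (SolrJ 6.x, with made-up collection/config names and ZK
address, nothing from this thread) that creates a collection with a
"fewer than 2 replicas of any shard per rack" rule.  It assumes that
CollectionAdminRequest.Create passes the rule parameter through via a
setRule setter, and that each node was started with -Drack=... so the
default ImplicitSnitch can report it as the sysprop.rack tag:

  import org.apache.solr.client.solrj.impl.CloudSolrClient;
  import org.apache.solr.client.solrj.request.CollectionAdminRequest;

  public class RackAwareCreate {
    public static void main(String[] args) throws Exception {
      // Assumed ZooKeeper address; "rackTest"/"myconfig" are placeholders.
      try (CloudSolrClient client =
               new CloudSolrClient.Builder().withZkHost("zk1:2181").build()) {
        CollectionAdminRequest.Create create =
            CollectionAdminRequest.createCollection("rackTest", "myconfig", 2, 2);
        // For every shard, allow fewer than 2 replicas per distinct rack value.
        // The default ImplicitSnitch exposes each node's -Drack=... startup
        // property as the sysprop.rack tag, so no explicit snitch is set here;
        // a custom snitch could be supplied with the snitch parameter instead.
        create.setRule("shard:*,replica:<2,sysprop.rack:*");
        create.process(client);
      }
    }
  }

The same rule (and snitch) parameters can also be passed directly on the
Collections API CREATE URL, which is how the referenced page documents them.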

I'd like to see an improvement in Solr's behavior when nothing has been
configured in auto-scaling or rule-based replica placement.  Configuring
those things is certainly an option, but I think we can do better even
without that config.

I believe that Solr already has some default intelligence that keeps
multiple replicas from ending up on the same *node* when possible ... I
would like this to also be aware of *hosts*.

Craig hasn't yet indicated whether there is more than one node per host,
so I don't know whether the behavior he's seeing should be considered a bug.

If somebody gives one machine multiple names/addresses and uses different
hostnames for that one actual host in their SolrCloud config, then Solr
wouldn't be able to do any better than it does now.  But if the hostname
parts of different entries in live_nodes match, then I think the
improvement might be relatively easy.  I'm not saying that I know exactly
what to do, but somebody who is familiar with the Collections API code can
probably do it.
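
To illustrate the matching I mean: live_nodes entries have the form
host:port_context (for example 10.1.2.3:8983_solr), so grouping them by the
part before the colon is trivial.  This is only a sketch with made-up
addresses, not actual Collections API code:

  import java.util.*;

  public class GroupNodesByHost {
    // live_nodes entries look like "hostname:port_context", e.g. "10.1.2.3:8983_solr"
    static Map<String, List<String>> groupByHost(Collection<String> liveNodes) {
      Map<String, List<String>> byHost = new HashMap<>();
      for (String node : liveNodes) {
        // The hostname part is everything before the first ':' (if present).
        int colon = node.indexOf(':');
        String host = colon >= 0 ? node.substring(0, colon) : node;
        byHost.computeIfAbsent(host, h -> new ArrayList<>()).add(node);
      }
      return byHost;
    }

    public static void main(String[] args) {
      List<String> liveNodes = Arrays.asList(
          "10.1.2.3:8983_solr", "10.1.2.3:8984_solr", "10.1.2.4:8983_solr");
      System.out.println(groupByHost(liveNodes));
    }
  }

Placement logic that preferred spreading a shard's replicas across the keys
of that map before reusing a host that already has one would give the
behavior I'm describing, at least when the hostname strings actually match.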

Thanks,
Shawn
