>
> We're currently working on a Solr collections operator which creates
> collections using the Solr Operator to allocate the Solr nodes. The
> collections operator is where all the intelligence resides for creating
> collections that maximize resiliency on Kubernetes.
>
Looks interesting, Joel!!
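To make sure I'm picturing the collections operator right, here's roughly
the shape of custom resource I imagine it reconciling. To be clear, every
type and field name below is invented for illustration; I haven't seen the
actual API.

package main

import (
	"encoding/json"
	"fmt"
)

// SolrCollectionSpec is a hypothetical sketch of the spec such a
// collections operator might reconcile; all names are made up.
type SolrCollectionSpec struct {
	Collection        string `json:"collection"`        // Solr collection name
	Shards            int    `json:"shards"`            // number of shards
	TlogReplicas      int    `json:"tlogReplicas"`      // TLOG replicas per shard
	PullReplicas      int    `json:"pullReplicas"`      // read-only followers
	SpreadAcrossZones bool   `json:"spreadAcrossZones"` // placement intent
}

func main() {
	spec := SolrCollectionSpec{
		Collection:        "products",
		Shards:            4,
		TlogReplicas:      1, // one leader-only TLOG replica per shard
		PullReplicas:      2,
		SpreadAcrossZones: true,
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out)) // roughly the body such a resource would carry
}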
> Improved cluster stability. Restarting the leader is far simpler than
> electing a new leader, peer syncing, index fingerprinting, etc.
(I'll assume a single TLOG replica on its own pod, as I think Joel
suggested in his latest reply.)
Restarts are definitely simpler than leader election, but I still have a
few reservations.
Kube may have solutions to your questions. It's mainly about carefully
constructing collections. One approach would be to place each tlog leader
in its own pod and use pod anti-affinity rules to spread them across
Kubernetes nodes and availability zones.
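A minimal sketch of the kind of anti-affinity an operator could stamp onto
each leader pod, assuming one tlog leader per pod; all labels, images, and
names below are illustrative, not the actual operator's output.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// leaderPodSpec builds a pod spec for a single TLOG leader with a hard rule
// that no two leaders of the same collection share a Kubernetes node, and a
// soft preference that leaders spread across availability zones.
func leaderPodSpec(collection string) corev1.PodSpec {
	// Selector matching every leader pod of this collection.
	leaders := &metav1.LabelSelector{MatchLabels: map[string]string{
		"app":        "solr",
		"collection": collection,
		"role":       "tlog-leader",
	}}
	return corev1.PodSpec{
		Containers: []corev1.Container{{Name: "solr", Image: "solr:9"}},
		Affinity: &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				// Hard rule: at most one leader of this collection per node.
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: leaders,
					TopologyKey:   "kubernetes.io/hostname",
				}},
				// Soft rule: prefer spreading leaders across zones.
				PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{{
					Weight: 100,
					PodAffinityTerm: corev1.PodAffinityTerm{
						LabelSelector: leaders,
						TopologyKey:   "topology.kubernetes.io/zone",
					},
				}},
			},
		},
	}
}

func main() {
	spec := leaderPodSpec("products")
	fmt.Println(spec.Affinity.PodAntiAffinity != nil) // true
}

The intent, as I read it, is that losing a Kubernetes node then costs at
most one leader per collection, and kube simply restarts that pod rather
than running an election.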
The idea is tempting...
Limiting to one tlog replica per shard might not be sufficient, though. What
if a node has too many shard leaders and we want to rebalance them across
the cluster to other nodes?
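I assume the operator itself would have to drive Solr's own Collections API
for that. MOVEREPLICA is a real Collections API action, but the base URL,
collection, replica, and node names in this sketch are all invented:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// moveReplica asks Solr to move one replica to a less loaded node via the
// Collections API (action=MOVEREPLICA).
func moveReplica(baseURL, collection, replica, targetNode string) error {
	params := url.Values{}
	params.Set("action", "MOVEREPLICA")
	params.Set("collection", collection)
	params.Set("replica", replica)       // core node name of the replica to move
	params.Set("targetNode", targetNode) // destination Solr node

	resp, err := http.Get(baseURL + "/admin/collections?" + params.Encode())
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("MOVEREPLICA failed: %s", body)
	}
	fmt.Printf("moved %s: %s\n", replica, body)
	return nil
}

func main() {
	// Example values only; a real operator would read these from cluster state.
	_ = moveReplica("http://solr.example:8983/solr",
		"products", "core_node7", "node5.example:8983_solr")
}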
What if a node has some intrinsic issues (runs out of memory each time, or
is unable to start due to some local fault)?
As I get deeper into Solr on kube, I've begun to wonder if Solr leader
election on kube is an obsolete concept. Leader election was conceived when
hardware was not fungible. Now that hardware is fungible, I wonder if it's
time to rethink the whole idea of leader election.
Consider the following scenario: