I think the container orchestration framework takes care of that for you,
but I am not an expert. In Kubernetes, NGINX is often the Ingress
controller, and as long as the services are running within the cluster, an
ordinary Service will also spread traffic across its pods, as far as I can
tell. A Service of type "LoadBalancer" appears to be for exposing a
service to clients outside the cluster, not for in-cluster balancing.
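
For example, a plain ClusterIP Service already distributes connections
across the pods that match its selector. A sketch, where the name, label,
and port are placeholders for whatever your chart actually defines:

# Hypothetical manifest; adjust the selector and ports to your deployment.
apiVersion: v1
kind: Service
metadata:
  name: solr-updates
spec:
  type: ClusterIP      # in-cluster only; "LoadBalancer" would expose it externally
  selector:
    app: solr
  ports:
    - port: 8983
      targetPort: 8983

Pointing your update client at that Service name instead of a single node
should spread the connections without adding any extra component.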

I presume you are using Kubernetes because of your reference to Helm, but
for what it's worth, here's the official HAProxy image:
https://hub.docker.com/_/haproxy
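
If you do go the HAProxy route, a minimal config sketch might look like
this (untested; the hostnames, port, and health-check path are assumptions
you would adjust for your cluster):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend solr_front
    bind *:8983
    default_backend solr_nodes

backend solr_nodes
    balance roundrobin
    option httpchk GET /solr/admin/info/system
    server solr1 solr-node-1:8983 check
    server solr2 solr-node-2:8983 check
    server solr3 solr-node-3:8983 check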

> -----Original Message-----
> From: Boban Acimovic <b...@it-agenten.com>
> Sent: Monday, February 11, 2019 11:58 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Load balance writes
> 
> Can you mention one dockerized load balancer? Or even better, one with a
> Helm chart?
> 
> 
> Like I said, at the moment I send all updates to just one of the 12 nodes.
> 
> 
> 
> 
> > On 11. Feb 2019, at 17:52, Walter Underwood <wun...@wunderwood.org> wrote:
> >
> > Why would you want to write a load balancer when there are so many that
> > are free and very fast?
> >
> > For update traffic, there is very little benefit in sending updates
> > directly to the shard leader. Forwarding an update to the leader is fast.
> > Indexing is slow. So the bottleneck is always at the leader.
> >
> > Before you build anything, measure. Collect a large update and send that
> > directly to the leader. Then do the same to a non-leader shard. Compare
> > the speed. If you are batching and indexing with multiple threads, I
> > doubt you’ll see a meaningful difference. I commonly see 10% difference
> > in identical load benchmarks, so the speedup has to be much larger than
> > that to be real.
> >
> > wunder
> > Walter Underwood
> > wun...@wunderwood.org
> > http://observer.wunderwood.org/  (my blog)
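
For the measurement Walter suggests, here is a rough sketch (mine, not
from the thread) in Python with the requests library; the node URLs,
collection name, and batch shape are placeholders:

import time
import json
from concurrent.futures import ThreadPoolExecutor
import requests

def index_batches(node_url, batches, threads=4):
    """POST each batch of docs to /update on one node; return elapsed seconds."""
    def post(batch):
        r = requests.post(
            f"{node_url}/solr/mycollection/update?commit=false",
            headers={"Content-Type": "application/json"},
            data=json.dumps(batch),
        )
        r.raise_for_status()
    start = time.time()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(post, batches))
    return time.time() - start

# Run the same batches against the leader and a non-leader, then compare:
# batches = [[{"id": str(i)} for i in range(j, j + 1000)]
#            for j in range(0, 100000, 1000)]
# print("leader:    ", index_batches("http://solr-leader:8983", batches))
# print("non-leader:", index_batches("http://solr-node-7:8983", batches))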
