> On the other hand, the CloudSolrClient ignores errors from Solr, which makes 
> it unacceptable for production use.

Did you mean "ConcurrentUpdateSolrClient"?  I don't think
CloudSolrClient does this, though I've been surprised before and it's
possible I just missed something.  Just wondering.

Jason

On Mon, Feb 11, 2019 at 2:14 PM Walter Underwood <wun...@wunderwood.org> wrote:
>
> The update router would also need to watch for indexing failures at each
> leader, re-read the cluster state to see whether the leader had changed,
> re-send any failed updates, and so on.
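>
> Just to make the bookkeeping concrete, a rough sketch in Go (sendUpdate
> and resolveLeader are hypothetical stand-ins for the HTTP call to Solr
> and the cluster-state lookup in ZooKeeper; the retry/backoff policy is
> made up):
>
>   package main
>
>   import (
>       "errors"
>       "fmt"
>       "time"
>   )
>
>   // Hypothetical stand-ins: the real versions would POST to Solr and
>   // read state.json from ZooKeeper, respectively.
>   var errLeaderChanged = errors.New("not the leader anymore")
>
>   func sendUpdate(leaderURL, doc string) error { return errLeaderChanged }
>   func resolveLeader(doc string) string        { return "http://host1:8983/solr" }
>
>   // indexWithRetry re-resolves the leader and re-sends on failure --
>   // the kind of bookkeeping a custom update router has to carry.
>   func indexWithRetry(doc string, attempts int) error {
>       for i := 0; i < attempts; i++ {
>           if err := sendUpdate(resolveLeader(doc), doc); err == nil {
>               return nil
>           }
>           time.Sleep(time.Duration(i+1) * 500 * time.Millisecond) // back off
>       }
>       return fmt.Errorf("giving up on %q after %d attempts", doc, attempts)
>   }
>
>   func main() { fmt.Println(indexWithRetry("doc-42", 3)) }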
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Feb 11, 2019, at 11:07 AM, lstusr 5u93n4 <lstusr...@gmail.com> wrote:
> >
> > Hi Boban,
> >
> > First of all: I agree with Walter here. Because the bottleneck is indexing
> > on the leader, a basic round-robin load balancer will perform just as well
> > as a custom solution, with far less headache. A custom solution will be
> > far more work than it's worth.
> >
> > But, should you really want to write this yourself, you can get all of the
> > information you need from zookeeper, from the path:
> >
> > <zkroot>/collections/<collection_name>/state.json
> >
> > There, for each shard you'll see:
> >  - the "range" parameter that tells  you which subset of documents this
> > shard is responsible for (see
> > https://lucene.apache.org/solr/guide/7_6/shards-and-indexing-data-in-solrcloud.html#document-routing
> > for details on routing)
> >  - the list of all replicas. For each replica it will tell you:
> >      - the host name (base_url)
> >      - whether it is the leader (it has the property leader: "true")
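> >
> > For illustration, an abridged state.json looks roughly like this (field
> > names as on a 7.x cluster, the exact layout varies a bit by version; note
> > that "leader" is stored as the string "true", not a boolean):
> >
> >   {"<collection_name>": {
> >     "router": {"name": "compositeId"},
> >     "shards": {
> >       "shard1": {
> >         "range": "80000000-ffffffff",
> >         "replicas": {
> >           "core_node3": {
> >             "core": "<collection_name>_shard1_replica_n1",
> >             "base_url": "http://host1:8983/solr",
> >             "state": "active",
> >             "leader": "true"}}},
> >       "shard2": {
> >         "range": "0-7fffffff",
> >         "replicas": {...}}}}}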
> >
> > So your Go-based solution would be to watch the state.json file in
> > zookeeper, and build a function that, given the routing key for your
> > document (the hash of the id by default, I think), returns the base_url
> > of the replica that's the leader.
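> >
> > A minimal sketch of that in Go, assuming the default compositeId router
> > with no "!" shard keys -- i.e. a murmur3 x86_32 hash with seed 0 over the
> > raw id bytes, which I believe matches Solr's Hash.murmurhash3_x86_32 for
> > ASCII ids -- and using the github.com/samuel/go-zookeeper/zk and
> > github.com/spaolacci/murmur3 libraries. Hostnames, paths and the
> > collection name are made up:
> >
> >   package main
> >
> >   import (
> >       "encoding/json"
> >       "fmt"
> >       "strconv"
> >       "strings"
> >       "time"
> >
> >       "github.com/samuel/go-zookeeper/zk"
> >       "github.com/spaolacci/murmur3"
> >   )
> >
> >   // Mirrors just the parts of state.json we need.
> >   type replica struct {
> >       BaseURL string `json:"base_url"`
> >       Leader  string `json:"leader"` // "true" on the leader, absent otherwise
> >   }
> >
> >   type shard struct {
> >       Range    string             `json:"range"` // e.g. "80000000-ffffffff"
> >       Replicas map[string]replica `json:"replicas"`
> >   }
> >
> >   type coll struct {
> >       Shards map[string]shard `json:"shards"`
> >   }
> >
> >   // leaderFor hashes the id and returns the base_url of the leader of
> >   // the shard whose hash range contains it.
> >   func leaderFor(stateJSON []byte, collection, id string) (string, error) {
> >       var root map[string]coll
> >       if err := json.Unmarshal(stateJSON, &root); err != nil {
> >           return "", err
> >       }
> >       h := int32(murmur3.Sum32([]byte(id))) // seed 0, like Solr
> >       for _, sh := range root[collection].Shards {
> >           bounds := strings.SplitN(sh.Range, "-", 2)
> >           lo, _ := strconv.ParseUint(bounds[0], 16, 32)
> >           hi, _ := strconv.ParseUint(bounds[1], 16, 32)
> >           // Ranges are signed 32-bit, so compare as int32.
> >           if h < int32(lo) || h > int32(hi) {
> >               continue
> >           }
> >           for _, r := range sh.Replicas {
> >               if r.Leader == "true" {
> >                   return r.BaseURL, nil
> >               }
> >           }
> >       }
> >       return "", fmt.Errorf("no leader found for id %q", id)
> >   }
> >
> >   func main() {
> >       conn, _, err := zk.Connect([]string{"localhost:2181"}, 5*time.Second)
> >       if err != nil {
> >           panic(err)
> >       }
> >       defer conn.Close()
> >
> >       path := "/collections/myCollection/state.json" // prepend your zkroot
> >       for {
> >           // GetW returns the data plus a watch channel that fires on the
> >           // next change, so we re-read whenever leadership moves.
> >           data, _, events, err := conn.GetW(path)
> >           if err != nil {
> >               panic(err)
> >           }
> >           if url, err := leaderFor(data, "myCollection", "doc-42"); err == nil {
> >               fmt.Println("send doc-42 to:", url)
> >           }
> >           <-events
> >       }
> >   }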
> >
> > Kyle
> >
> > On Mon, 11 Feb 2019 at 13:30, Boban Acimovic <b...@it-agenten.com> wrote:
> >
> >> Like I said before, nginx is not a load balancer, or at least not a
> >> clever one. It does not talk to ZK. Please suggest more advanced solutions.
> >>
> >>
> >>> On 11. Feb 2019, at 18:32, Walter Underwood <wun...@wunderwood.org> wrote:
> >>>
> >>> I haven’t used Kubernetes, but a web search for “helm nginx” seems to
> >>> give some useful pages.
> >>>
> >>> wunder
> >>> Walter Underwood
> >>> wun...@wunderwood.org
> >>> http://observer.wunderwood.org/  (my blog)
> >>>
> >>>> On Feb 11, 2019, at 9:13 AM, Davis, Daniel (NIH/NLM) [C] <daniel.da...@nih.gov> wrote:
> >>>>
> >>>> I think that the container orchestration framework takes care of that
> >>>> for you, but I am not an expert. In Kubernetes, NGINX is often the
> >>>> Ingress controller, and as long as the services are running within the
> >>>> Kubernetes cluster, it can also serve as a load balancer, AFAICT. In
> >>>> Kubernetes, a "LoadBalancer" Service appears to be a concept for
> >>>> exposing in-cluster services to clients outside the cluster.
> >>>>
> >>>> I presume you are using Kubernetes because of your reference to Helm,
> >>>> but for what it's worth, here's an official haproxy image:
> >>>> https://hub.docker.com/_/haproxy
> >>
>
