It can't be considered a bug; it's just that there are too many
calculations involved because there is a very large number of nodes. Any
further speed-up would require a change in the way it's calculated.

On Thu, Sep 5, 2019, 1:30 AM Andrew Kettmann <andrew.kettm...@evolve24.com>
wrote:

>
> > there are known perf issues in computing very large clusters
>
> Is there any documentation or open tickets on this that you have handy? If
> that is the case, then we might be back to looking at separate znodes.
> Right now, if we provide a nodeset on collection creation, collections are
> created quickly. I don't want to make many changes, as this is part of our
> production at this time.
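>
> For reference, the nodeset path (createNodeSet) we use today looks roughly
> like this (collection and node names are placeholders):
>
> curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=foo_customer&numShards=2&replicationFactor=2&createNodeSet=node1:8983_solr,node2:8983_solr,node3:8983_solr,node4:8983_solr'
>
> With createNodeSet, Solr only has to consider the listed nodes, which is
> presumably why that path stays fast for us.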
>
>
>
>
>
> From: Noble Paul <noble.p...@gmail.com>
>
> Sent: Wednesday, September 4, 2019 12:14 AM
>
> To: solr-user@lucene.apache.org <solr-user@lucene.apache.org>
>
> Subject: Re: Solr 7.7.2 Autoscaling policy - Poor performance
>
> there are known perf issues in computing very large clusters
>
> give it a try with the following rules
>
> "FOO_CUSTOMER":[
>       {
>         "replica":"0",
>         "sysprop.HELM_CHART":"!FOO_CUSTOMER",
>         "strict":"true"},
>       {
>         "replica":"<2",
>         "node":"#ANY",
>         "strict":"false"}]
>
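> Something like this should apply it via the autoscaling API (untested
> sketch; the node address is just an example):
>
> curl -X POST -H 'Content-Type: application/json' \
>   http://localhost:8983/api/cluster/autoscaling -d '{
>     "set-policy": {
>       "FOO_CUSTOMER": [
>         {"replica": "0", "sysprop.HELM_CHART": "!FOO_CUSTOMER", "strict": "true"},
>         {"replica": "<2", "node": "#ANY", "strict": "false"}]}}'
>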
>
>
> On Wed, Sep 4, 2019 at 1:49 AM Andrew Kettmann
> <andrew.kettm...@evolve24.com> wrote:
> >
> > Currently our 7.7.2 cluster has ~600 hosts, and each collection is using
> > an autoscaling policy based on a system property. Our goal is a single
> > core per host (container, running on K8S). However, as we have rolled
> > more containers/collections into the cluster, any creation/move actions
> > are taking a huge amount of time. In fact, we generally hit the 180-second
> > timeout if we don't schedule the request as async, though the action gets
> > completed anyway. Looking at the code, it looks like for each core it is
> > considering the entire cluster.
>
> >
>
> > Right now our autoscaling policies look like this; note that we are
> > feeding a sysprop on startup for each collection to map it to specific
> > containers:
> >
> > "FOO_CUSTOMER":[
> >       {
> >         "replica":"#ALL",
> >         "sysprop.HELM_CHART":"FOO_CUSTOMER",
> >         "strict":"true"},
> >       {
> >         "replica":"<2",
> >         "node":"#ANY",
> >         "strict":"false"}]
>
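> > For context, each container gets the sysprop at startup and the collection
> > is created against that policy, roughly like this (values are illustrative):
> >
> > # in solr.in.sh for the FOO_CUSTOMER containers
> > SOLR_OPTS="$SOLR_OPTS -DHELM_CHART=FOO_CUSTOMER"
> >
> > curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=foo_customer&numShards=4&replicationFactor=2&policy=FOO_CUSTOMER'
>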
> >
>
> > Does name-based filtering allow wildcards? Also, would that likely fix
> > the issue of the time it takes for Solr to decide where cores can go? Or
> > any other suggestions for making this more efficient on the Solr overseer?
> > We do have dedicated overseer nodes, but the leader maxes out CPU for a
> > while while it is thinking about this.
>
> >
>
> > We are considering putting each collection into its own ZooKeeper
> > znode/chroot if we can't support this many nodes per overseer. I would
> > like to avoid that if possible, but creating a collection in under 10
> > minutes would be neat too.
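> >
> > If we did go that route, it would roughly mean a chroot per collection,
> > e.g. (the ZooKeeper address is just an example):
> >
> > bin/solr zk mkroot /collections/foo_customer -z zk1:2181
> > bin/solr start -c -z zk1:2181/collections/foo_customer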
>
> >
>
> > I appreciate any input/suggestions anyone has!
>
> >
>
> > Andrew Kettmann
> > DevOps Engineer
>
> >
>
>
>
>
>
>
>
>
> --
>
> -----------------------------------------------------
>
> Noble Paul
>
>
