Thanks. This was useful, really appreciate it! :)

On Tue, Jul 7, 2020, 8:07 PM Walter Underwood <wun...@wunderwood.org> wrote:

> Agreed, I do something between 20 and 1000. If the master node is not
> handling any search traffic, use twice as many client threads as there are
> CPUs in the node. That should get you close to 100% CPU utilization.
> One thread will be waiting while a batch is being processed and another
> thread will be sending the next batch so there is no pause in processing.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > > On Jul 7, 2020, at 6:12 AM, Erick Erickson <erickerick...@gmail.com> wrote:
> >
> > As many as you can send before blowing up.
> >
> > Really, the question is not answerable. 1K docs? 1G docs? 1 field or 500?
> >
> > And I don’t think it’s a good use of time to pursue this much further. See:
> >
> > https://lucidworks.com/post/really-batch-updates-solr-2/
> >
> > If you’re looking at trying to maximize throughput, adding
> > client threads that send Solr documents is a better approach.
> >
> > All that said, I usually just pick 1,000 and don’t worry about it.
> >
> > Best,
> > Erick
> >
> > >> On Jul 7, 2020, at 8:59 AM, Sidharth Negi <sidharth.negi...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> > >> Could someone help me determine the maximum number of docs I can send in
> > >> a single update call to Solr in a master/slave architecture?
> >>
> >> Thanks!
> >
>
>
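
The advice in the thread above (batches of roughly 1,000 docs, with twice as many client sender threads as the indexing node has CPUs, so one batch is always in flight while another is being prepared) can be sketched roughly like this. This is only an illustration, not code from the thread: the Solr URL and collection name ("mycollection") are placeholders, and it assumes Solr's standard JSON update handler.

```python
import json
import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Placeholder endpoint -- adjust host, port, and collection for your setup.
SOLR_UPDATE_URL = "http://localhost:8983/solr/mycollection/update"
BATCH_SIZE = 1000  # the "just pick 1,000" heuristic from the thread

def batches(docs, size=BATCH_SIZE):
    """Split a list of docs into fixed-size batches."""
    return [docs[i:i + size] for i in range(0, len(docs), size)]

def send_batch(batch):
    """POST one batch of docs to Solr's JSON update handler."""
    req = urllib.request.Request(
        SOLR_UPDATE_URL,
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def index_all(docs):
    # Twice as many sender threads as CPUs, per the advice above: one
    # thread's batch is being processed while another is sending the next.
    workers = 2 * (os.cpu_count() or 1)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(send_batch, batches(docs)))
```

As the thread notes, the right batch size depends heavily on document size and field count, so treat 1,000 as a starting point and measure rather than tune the batch size exhaustively.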
