Thanks. This was useful, really appreciate it! :)
On Tue, Jul 7, 2020, 8:07 PM Walter Underwood wrote:
Agreed, I do something between 20 and 1000. If the master node is not
handling any search traffic, use twice as many client threads as there are
CPUs in the node. That should get you close to 100% CPU utilization.
One thread will be waiting while a batch is being processed and another
thread will be sending the next batch, so the server stays busy.
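The "twice as many client threads as CPUs" advice above can be sketched roughly as follows. This is a minimal illustration, not a production indexer: the update URL, core name, and `send_batch` body are assumptions (a real client would POST each JSON batch to Solr's `/update` handler, e.g. with `requests`), and batch size would be tuned as discussed in this thread.

```python
import itertools
import os
from concurrent.futures import ThreadPoolExecutor

# Assumed endpoint; replace "mycore" with your collection/core name.
SOLR_UPDATE_URL = "http://localhost:8983/solr/mycore/update"

def batches(docs, size):
    """Yield successive lists of up to `size` docs."""
    it = iter(docs)
    while chunk := list(itertools.islice(it, size)):
        yield chunk

def send_batch(batch):
    # Placeholder: a real indexer would do something like
    #   requests.post(SOLR_UPDATE_URL, json=batch).raise_for_status()
    return len(batch)

def index(docs, batch_size=500, threads=None):
    # Twice as many client threads as CPUs, per the advice above,
    # so one thread can send while others wait on in-flight batches.
    threads = threads or 2 * (os.cpu_count() or 1)
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return sum(pool.map(send_batch, batches(docs, batch_size)))
```

Whether 500 docs per batch (or 20, or 1000) is right depends entirely on doc size and field count, which is the point being made below.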
As many as you can send before blowing up.
Really, the question is not answerable. 1K docs? 1G docs? 1 field or 500?
And I don’t think it’s a good use of time to pursue much. See:
https://lucidworks.com/post/really-batch-updates-solr-2/
If you’re looking at trying to maximize throughput, adding more client
threads usually matters more than tweaking the batch size.
Hi,
Could someone help me figure out the best way to determine the maximum
number of docs I can send in a single update call to Solr in a master /
slave architecture?
Thanks!