Tiwari, Shailendra wrote:
> We are on Solr 4.10.3. Got 2 load balanced RedHat with 16 GB
> memory on each. Memory assigned to JVM 4 GB, 2 Shards,
> total docs 60 K, and 4 replicas.
As you are chasing throughput, you should aim to lower the overall resources
needed for a single request.
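One way to do that (an untested sketch, assuming the API layer talks to Solr
through SolrJ; the URL, field names and filter are placeholders) is to ask
Solr for only what each request actually needs: few fields, a small rows
count, and reusable constraints expressed as filter queries so they can be
served from the filterCache:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class LeanQueryExample {
      public static void main(String[] args) throws Exception {
          // URL, field and filter names below are made up for illustration.
          HttpSolrServer solr =
              new HttpSolrServer("http://localhost:8983/solr/collection1");
          SolrQuery q = new SolrQuery("title:laptop");
          q.setFields("id", "title", "price"); // only the fields the API uses
          q.setRows(10);                       // no more docs than one page shows
          q.addFilterQuery("inStock:true");    // reusable constraint -> filterCache
          QueryResponse rsp = solr.query(q);
          System.out.println("hits: " + rsp.getResults().getNumFound());
          solr.shutdown();
      }
  }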
To: solr-user
Subject: Re: Errors During Load Test
The short form is "add more replicas", assuming you're using SolrCloud.
If older-style master/slave, then "add more slaves". Solr request processing
scales pretty linearly with the number of replicas (or slaves).
Note that this is _not_ adding shards (assuming SolrCloud). You usually add
shards when the index itself grows too large to perform well on a single node.
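If you do add replicas on SolrCloud 4.10.x, the Collections API has an
ADDREPLICA action that does it online; something like this (collection,
shard and node values are just placeholders):

  http://host:8983/solr/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=192.168.1.5:8983_solr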
What is your Solr setup -- nodes/shards/specs?
7221 requests/min is a lot, so it's likely that your setup simply can't
support this kind of load; requests time out, which is why you keep seeing
the timeout and connect exceptions.
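Just for a rough sense of scale (my arithmetic, not your numbers):

  7221 requests/min / 60 ≈ 120 requests/sec sustained

and if you're sharded, each of those fans out into per-shard sub-requests,
so every node sees more than its share of those 120/sec.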
On Thu, 4 Feb 2016, 20:30 Tiwari, Shailendra wrote:
Hi All,
We did our first load test on our Search (Solr) API and started to see some
errors after 2000 users. The errors would go away after 30 seconds but kept
recurring frequently. The errors were "java.net.SocketTimeoutException" and
"org.apache.http.conn.HttpHostConnectException". We were using JMeter.