On Mon, 2019-03-18 at 10:47 +0000, Aaron Yingcai Sun wrote:
> Solr server is running on a quite powerful server, 32 CPUs, 400GB RAM,
> while 300 GB is reserved for Solr, [...]

300GB for Solr sounds excessive.

> Our application sends 100 documents to Solr per request, JSON encoded.
> The size is around 5MB each time. Sometimes the response time is
> under 1 second, sometimes it can be 300 seconds; the slow responses
> happen very often.
> ...
> There are around 100 clients sending those documents at the same
> time, but each client makes a blocking call which waits for the HTTP
> response before sending the next one.

100 clients * 5MB/batch = at most 500MB. Or maybe you meant 100 clients
* 100 documents * 5MB/document = at most 50GB? Either way it is a long
way from 300GB, and the stats you provide further down the thread
indicate that you are overprovisioning quite a lot:

"memory":{
      "free":"69.1 GB",
      "total":"180.2 GB",
      "max":"266.7 GB",
      "used":"111 GB (%41.6)",

Intermittent slow response times are a known effect of having large
Java heaps, due to stop-the-world garbage collections. 
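To confirm that the 300-second outliers line up with collections, look
at the pauses in the GC log. A rough sketch, assuming Java 8-style GC
logging as Solr normally enables by default (the log is usually
server/logs/solr_gc.log; adjust the path if yours differs):

      # Wall-clock pause of each full collection is the "real=..." figure
      # at the end of the line
      grep "Full GC" server/logs/solr_gc.log | tail -20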

Try dialing Xmx _way_ down: If your batches are only 5MB each, try
Xmx=20g or less. I know that the stats above say that Solr uses 111GB,
but the JVM has a tendency to expand the heap quite a lot when it is
getting hammered. If you want to check beforehand, you can see how much
memory is freed by full GCs in the GC log.
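For reference, a rough sketch of both ways to lower the heap; treat 20g
as a placeholder to tune rather than a recommendation, and expect the
paths to depend on how Solr was installed:

      # One-off: restart with a smaller heap (-m sets both -Xms and -Xmx)
      bin/solr stop -all
      bin/solr start -m 20g

      # Permanent alternative: set SOLR_HEAP="20g" in solr.in.sh
      # (bin/solr.in.sh in the install dir, or /etc/default/solr.in.sh
      # for a service install)

The amount freed is visible in the same "Full GC" log lines mentioned
above, as the before->after(capacity) heap figures.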

- Toke Eskildsen, Royal Danish Library

