Usually, slow responses are due to I/O waits while getting the data off of the disk. So, to me, this seems the more likely explanation: as you bombard the server with queries, you pull more and more of the data needed to answer them into memory (the OS page cache), so later queries are served from RAM instead of disk.
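A quick way to check whether disk I/O really is the bottleneck is to watch disk activity and page cache growth while you run your tests. A minimal sketch, assuming a Linux host with the sysstat package installed for iostat:

    # extended disk stats every 5 seconds; high %util and heavy reads
    # during the slow, low-traffic periods point at a cold page cache
    iostat -x 5

    # watch the buffers/cache figures grow as the index is read into memory
    free -h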
To verify this, I'd bombard your server with queries to warm it up, and then repeat your test with the queries coming in slowly or quickly. If the fast response times still hold up at the slow rate, the warm page cache was the difference. If they don't, then either something other than Solr is running on that server and taking memory away from Solr, or your index is simply too big for the RAM on that server.

Linux likes to overcommit memory - try setting vm.swappiness to something low, like 10, rather than the default 60 (a concrete sysctl sketch follows below the quoted message). Look for anything on the server that may be competing with Solr for memory or I/O resources and causing its pages to swap out. Also, look at the size of your index data relative to the RAM left over for the OS page cache.

This is general advice for dealing with inverted indexes - some of the Solr engineers on this list may have very specific ideas, such as merging activity or other background tasks running when the query load is lighter. I wouldn't know how to check for those things, but I would think they shouldn't affect query response time that badly. Below your quoted message I've also sketched where the timeout settings you mention live, on both the server and client side.

-----Original Message-----
From: Vidhya Kailash <vidhya.kail...@gmail.com>
Sent: Wednesday, October 24, 2018 4:22 PM
To: solr-user@lucene.apache.org
Subject: Solr cluster tuning

We are currently using Solr Cloud version 7.4 with the SolrJ API to fetch data from collections. We recently deployed our code to production and noticed that response times are higher when the number of incoming requests is low. Strangely, if we bombard the system with more and more requests, we get much better response times.

My suspicion is that the client is closing connections sooner when requests come in slowly and later when they come in quickly. We tried tuning by passing a custom HttpClient to SolrJ and also by updating the HttpShardHandlerFactory settings. For example, we set:

    maxThreadIdleTime = 60000
    socketTimeout = 180000

Wondering what other tuning we can do to make performance the same irrespective of the number of requests.

Thanks!
Vidhya
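As mentioned above, a minimal sketch of the vm.swappiness change (assuming a typical Linux distribution; how sysctl settings are persisted can vary):

    # check the current value
    sysctl vm.swappiness

    # lower it on the running system
    sudo sysctl -w vm.swappiness=10

    # persist it across reboots
    echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf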
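On the server side, the HttpShardHandlerFactory settings you mention live in solr.xml and govern the node-to-node requests Solr makes when distributing a query, not the connections your SolrJ client holds. A minimal sketch using your values (the connTimeout figure is an assumption added for illustration; if I remember right the two timeouts are in milliseconds while maxThreadIdleTime is in seconds, so it's worth double-checking the units in the ref guide):

    <!-- solr.xml: shard handler used for distributed (node-to-node) requests -->
    <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
      <int name="socketTimeout">180000</int>    <!-- socket inactivity timeout -->
      <int name="connTimeout">15000</int>       <!-- time to establish a connection -->
      <int name="maxThreadIdleTime">60000</int> <!-- idle threads reaped after this -->
    </shardHandlerFactory>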
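On the client side, with SolrJ 7.x you can set the connection and socket timeouts directly on the client builder instead of hand-building an HttpClient. A minimal sketch - the ZooKeeper address, collection name, and timeout values here are placeholders, not your actual settings:

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class TimeoutExample {
        public static void main(String[] args) throws Exception {
            // placeholder ZooKeeper host; no chroot
            CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("zk1:2181"), Optional.empty())
                .withConnectionTimeout(15000) // ms to establish a connection
                .withSocketTimeout(180000)    // ms of read inactivity allowed
                .build();
            client.setDefaultCollection("mycollection");

            // a trivial query just to exercise the connection settings
            QueryResponse rsp = client.query(new SolrQuery("*:*"));
            System.out.println("numFound: " + rsp.getResults().getNumFound());

            client.close();
        }
    }

If your suspicion about connections being closed is right, it is also worth making sure the same client instance is reused across requests, since SolrJ clients pool connections internally.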