On 3/16/2018 2:21 PM, Deepak Goel wrote:
> I wanted to test how many max connections can Solr handle concurrently.
> Also I would have to implement 'connection pooling' of the client-object
> connections rather than a single connection thread
>
> However a single client object with thousands of queries coming in would
> surely become a bottleneck. I can test this scenario too.

Handling thousands of simultaneous queries is NOT something you can
expect a single Solr server to do.  It's not going to happen.  It
wouldn't happen with ES, either.  Handling that much load requires load
balancing to a LOT of servers.  The server would be much more of a
bottleneck than the client.

> The problem is the max throughput which I can get on the machine is around
> 28 tps, even though I increase the load further & only 65% CPU is utilised
> (there is still 35% which is not being used). This clearly indicates the
> software is a problem as there are enough hardware resources.

If your code is creating a client object before every single query, that
could be part of the issue.  The benchmark code should be using the same
client for all requests.  I really don't know how long it takes to
create HttpSolrClient objects, but I don't imagine that it's super-fast.

What version of SolrJ were you using?

Depending on the SolrJ version you may need to create the client with a
custom HttpClient object in order to allow it to handle plenty of
threads.  This is how I create client objects in my SolrJ code:

  import org.apache.http.client.config.RequestConfig;
  import org.apache.http.impl.client.CloseableHttpClient;
  import org.apache.http.impl.client.HttpClients;
  import org.apache.solr.client.solrj.SolrClient;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;

  // Raise the connection limits so many threads can share this one client.
  RequestConfig rc = RequestConfig.custom().setConnectTimeout(2000)
    .setSocketTimeout(60000).build();
  CloseableHttpClient httpClient = HttpClients.custom()
    .setDefaultRequestConfig(rc).setMaxConnPerRoute(1024)
    .setMaxConnTotal(4096).disableAutomaticRetries().build();

  SolrClient sc = new HttpSolrClient.Builder().withBaseSolrUrl(solrUrl)
    .withHttpClient(httpClient).build();
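
For what it's worth, here is a rough sketch (untested, just to illustrate
the idea) of how that single client could then be shared by all of the
benchmark threads instead of building a new client for every query.  The
thread count, collection name, and query string are placeholders, not
anything from your setup:

  import java.io.IOException;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.SolrServerException;
  import org.apache.solr.client.solrj.response.QueryResponse;

  // Every worker thread reuses the same SolrClient (sc) built above.
  ExecutorService pool = Executors.newFixedThreadPool(200);
  for (int i = 0; i < 200; i++) {
    pool.submit(() -> {
      try {
        SolrQuery q = new SolrQuery("*:*");              // placeholder query
        QueryResponse rsp = sc.query("mycollection", q); // placeholder collection
        // record rsp.getQTime() or elapsed time for throughput stats
      } catch (SolrServerException | IOException e) {
        e.printStackTrace();
      }
    });
  }
  pool.shutdown();

With the connection limits raised on the HttpClient, all of those threads
can issue requests in parallel through the one client, which is what the
benchmark actually needs to measure.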

Thanks,
Shawn
