On 8/18/2015 11:50 PM, William Bell wrote:
> We sometimes get a spike in Solr, and end up with something like 3K
> threads and then timeouts...
> 
> In Solr 5.2.1 the default Jetty setting for threads is kinda crazy -
> since the value is HIGH!
> 
> What do others recommend?

The setting of 10000 is so that there is effectively no limit.  Solr
will stop working right if it is not allowed to start threads whenever
it wishes.  Solr is not a typical web application.
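
If you want to see what that knob actually controls, here is a minimal
embedded-Jetty sketch (Jetty 9 API) of the same thread pool setting.
This is only an illustration of the concept, not how Solr wires it up -
the bundled install does the equivalent in its jetty.xml:

  import org.eclipse.jetty.server.Server;
  import org.eclipse.jetty.util.thread.QueuedThreadPool;

  public class EffectivelyUnlimited {
      public static void main(String[] args) throws Exception {
          // maxThreads caps how many threads the container may create.
          // 10000 is high enough that the pool is effectively unbounded,
          // so the container never starves Solr of request threads.
          QueuedThreadPool pool = new QueuedThreadPool();
          pool.setMinThreads(10);
          pool.setMaxThreads(10000);

          Server server = new Server(pool);
          server.start();
          server.join();
      }
  }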

As far as I know (and my knowledge could be wrong), a typical web
application that serves a website to users will handle all back-end
details with the same thread(s) that were created when the connection
was opened.  Putting a relatively low limit on the number of threads in
that situation is sensible.

A very small Solr install with a low query volume will work within the
200-thread limit that most containers default to, but it doesn't take
very much to exceed that.

I have a Solr 4.9.1 dev install with 44 cores, running with the Jetty 8
example included in the 4.x download.  19 of those are build cores and
19 hold live indexes.  The other 6 cores are always empty; they exist
only to carry a shards parameter in their search handler definition for
distributed searching.  This install does NOT run in SolrCloud mode.

This dev server sees very little traffic besides a few indexing requests
every minute and load balancer health checks.  JConsole shows the number
of threads hovering between 230 and 235.  If I scroll through the thread
list, most of them show a state of WAITING on various locking
mechanisms, which explains why my CPUs (8 CPU cores total) are not being
overwhelmed with work from all those threads.
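
If you'd rather pull those numbers from code than from JConsole, the
standard java.lang.management API exposes the same data for whatever
JVM it runs in.  A quick sketch:

  import java.lang.management.ManagementFactory;
  import java.lang.management.ThreadInfo;
  import java.lang.management.ThreadMXBean;
  import java.util.EnumMap;
  import java.util.Map;

  public class ThreadStates {
      public static void main(String[] args) {
          ThreadMXBean mx = ManagementFactory.getThreadMXBean();
          System.out.println("Live threads: " + mx.getThreadCount());

          // Tally threads by state, like scrolling the JConsole list.
          // On my dev server most show up as WAITING, parked on locks.
          Map<Thread.State, Long> byState = new EnumMap<>(Thread.State.class);
          for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
              if (info != null) {
                  byState.merge(info.getThreadState(), 1L, Long::sum);
              }
          }
          byState.forEach((state, n) -> System.out.println(state + ": " + n));
      }
  }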

Solr and Lucene don't really have a runaway thread problem as far as I
can tell, but the system does use a fair number of them for basic
operation, with more cores/collections adding more threads.  SolrCloud
mode will also use more threads.

If you send requests to Solr at a very fast rate, the servlet container
may also use a lot of threads.
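
To put a rough number on that (my own back-of-the-envelope, not
anything from the Solr docs): by Little's law the number of in-flight
requests is roughly arrival rate times average latency, and the
container needs about one thread per in-flight request.

  public class ThreadEstimate {
      public static void main(String[] args) {
          // Hypothetical numbers, purely for illustration.
          double requestsPerSecond = 1000.0;
          double avgLatencySeconds = 0.5;

          // Little's law: concurrent requests ~ arrival rate * latency.
          double inFlight = requestsPerSecond * avgLatencySeconds;
          System.out.println("Approx request threads needed: " + inFlight);
          // Prints 500.0 - already well past a 200-thread container limit.
      }
  }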

Thanks,
Shawn
