On 21/04/2011 20:21, Mark Thomas wrote:
> On 06/04/2011 22:51, Tim Whittington wrote:
>> On Wed, Apr 6, 2011 at 11:16 PM, Mark Thomas <ma...@apache.org> wrote:
>>> On 05/04/2011 10:50, Tim Whittington wrote:
>>>> Is what's actually going on more like:
>>>>
>>>> APR: use maxConnections == pollerSize (smallest will limit, but if
>>>> pollerSize < maxConnections then the socket backlog effectively won't
>>>> be used as the poller will keep killing connections as they come in)
>>>>
>>>> NIO: use maxConnections to limit 'poller size'
>>>>
>>>> HTTP: use maxConnections. For keep alive situations, reduce
>>>> maxConnections to something closer to maxThreads (the default config
>>>> is 10,000 keepalive connections serviced by 200 threads with a 60
>>>> second keepalive timeout, which could lead to some large backlogs of
>>>> connected sockets that take 50 minutes to get serviced)
> 
> This is still an issue. I'm still thinking about how to address it. My
> current thinking is:
> - BIO: Introduce simulated polling using a short timeout (see below)
> - NIO: Leave as is
> - APR: Make maxConnections and pollerSize synonyms
> - All: Make the default for maxConnections 8192 so it is consistent with
> the current APR default.
> 
> The other option is dropping maxConnections entirely from the NIO and
> APR connectors. That would align the code with the docs. The only
> downside is that the NIO connector would no longer have an option to
> limit the connections. I'm not sure that is much of an issue since I
> don't recall any demands for such a limit from the user community.
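To make the 50-minute backlog figure quoted above concrete, here is the back-of-envelope arithmetic, assuming the quoted defaults of 10,000 keep-alive connections, 200 threads and a 60 second keep-alive timeout:

```python
# Worst case for BIO with the quoted defaults: every worker thread is pinned
# to a keep-alive connection for the full timeout before the next batch of
# waiting connections gets a thread.
max_connections = 10_000
max_threads = 200
keepalive_timeout_s = 60

batches = max_connections // max_threads            # 50 batches of connections
worst_case_wait_s = batches * keepalive_timeout_s   # 3000 seconds
print(worst_case_wait_s // 60)                      # → 50 (minutes)
```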

Apologies for what I expect will turn out to be a long e-mail.

I have reached the point where I believe the best way forward is:
- remove maxConnections from NIO and APR
- remove the ability to set maxConnections for BIO and hard code it to
maxThreads
- restore disabling keep-alive when >75% of BIO threads are in use
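For reference, the 75% heuristic in the last point amounts to a check along these lines (a sketch only, not the actual Tomcat code; the names are illustrative):

```python
def keepalive_allowed(busy_threads: int, max_threads: int) -> bool:
    # Once more than 75% of the worker threads are busy, stop honouring
    # keep-alive: sockets are closed after one request, freeing threads
    # for connections waiting in the accept backlog.
    return busy_threads <= (max_threads * 3) // 4
```

With the default maxThreads of 200, the cut-off falls at 150 busy threads.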

My reasoning is as follows:
- Servlet 3.0 async requests mean that the number of connections in use may
be greater than the number of threads in use.

- This offers potential efficiency savings, as fewer threads are required.

- The fact that connections may outnumber threads led to the new
maxConnections attribute.

- maxConnections > maxThreads introduces an issue where a connection
with data ready may sit in the connection queue waiting for a thread while
all the threads are tied up waiting for data on connections that
will eventually time out.

- This issue can be fixed with simulated polling.
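As an illustration of what simulated polling means here (a Python sketch under assumed names, not the actual connector code): each worker performs a short, timed read instead of a blocking one, and requeues the connection if no data arrives within the pollTime.

```python
import queue
import socket
from typing import Optional

POLL_TIME = 0.1  # 100 ms, matching the pollTime used in the testing below

def poll_once(conn: socket.socket,
              conn_queue: "queue.Queue[socket.socket]") -> Optional[bytes]:
    """One simulated-polling step for a blocking-I/O worker thread."""
    conn.settimeout(POLL_TIME)
    try:
        # Data arrived within pollTime: the caller processes the request.
        return conn.recv(8192)
    except socket.timeout:
        # No data yet: hand the connection back to the queue so this thread
        # can go and service a connection that does have data waiting.
        conn_queue.put(conn)
        return None
```

The CPU cost discussed in the next point follows directly from this scheme: every idle keep-alive connection wakes a thread up to ten times a second just to find out that there is still nothing to read.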

- Testing showed that simulated polling was very CPU intensive (I saw a
typical increase from ~40% to ~50% CPU usage with 4 Tomcat threads, 2
'fast' client threads making requests as fast as they could, 10 'slow'
client threads making a request every 5s, and a pollTime of 100ms on an
8-core machine).

- The additional resources required by simulated polling are likely to
be greater than those saved by reduced thread usage.

- It is therefore better to simply increase maxThreads, expecting that not
all of them will be used, and hard-code maxConnections to the same value
as maxThreads. Better still, just use NIO.

Further, the complexity of the BIO code required to support:
- Optional configuration of maxConnections > maxThreads
- simulated polling when maxConnections > maxThreads
- auto-disabling of keep-alive for users that don't want the overhead of
simulated polling when maxConnections == maxThreads
was getting to the point where I had stability concerns.

Given the above, and assuming there are no objections, I intend to
implement the way forward I set out above tomorrow.

Cheers,

Mark



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org
