Further to this, the docs for maxConnections currently state:

"This setting is currently only applicable to the blocking Java
connectors (AJP/HTTP)."

But both the NIO and APR connectors use this setting in their Acceptor
implementations, limiting the number of concurrent connections before
any more are accepted and handed off to the pollers.
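
To illustrate what I mean, here's a minimal sketch of that
accept-gating pattern. This is not Tomcat's actual code - the names
GatedAcceptor/handOffToPoller are made up, and a plain Semaphore
stands in for the endpoint's internal connection-counting latch:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Semaphore;

public class GatedAcceptor implements Runnable {

    private final ServerSocket serverSocket;
    private final Semaphore connectionLimit; // plays the role of maxConnections

    public GatedAcceptor(ServerSocket serverSocket, int maxConnections) {
        this.serverSocket = serverSocket;
        this.connectionLimit = new Semaphore(maxConnections);
    }

    @Override
    public void run() {
        while (!serverSocket.isClosed()) {
            try {
                connectionLimit.acquire(); // wait for a free connection slot
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            try {
                Socket socket = serverSocket.accept();
                handOffToPoller(socket);
            } catch (IOException e) {
                connectionLimit.release(); // accept failed, free the slot
            }
        }
    }

    // Hypothetical hand-off: whoever eventually closes the socket must
    // call connectionLimit.release() so new connections can be accepted.
    private void handOffToPoller(Socket socket) {
        // register the socket with a poller / worker thread here
    }
}

The important bit is that the permit is taken before accept(), so
excess clients sit in the TCP backlog rather than being accepted and
parked.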

It may be the case that, for the APR connector, pollerSize should be
used in preference (keeping maxConnections > pollerSize), but the NIO
connector doesn't have a poller size config option (the connector
comparison summary describes it as 'restricted by mem').
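
i.e. if the 'keep maxConnections > pollerSize' reading is right,
something like (values illustrative only):

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           pollerSize="8192"
           maxConnections="10000" />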

The Connector Comparison table and the discussion of
connection/thread behaviour in the HTTP connector docs are thus
slightly misleading.

Is what's actually going on more like:

APR: use maxConnections == pollerSize (the smaller of the two will be
the effective limit, but if pollerSize < maxConnections then the
socket backlog effectively won't be used, as the poller will keep
killing connections as they come in)

NIO: use maxConnections to limit 'poller size'

HTTP: use maxConnections. For keep-alive situations, reduce
maxConnections to something closer to maxThreads (the default config
is 10,000 keepalive connections serviced by 200 threads with a 60
second keepalive timeout; in the worst case that's 10,000 / 200 = 50
rounds of 60 seconds each, so a backlog of connected sockets could
take 50 minutes to get serviced)
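
Concretely, for the keepalive case I'm imagining something like
(values illustrative, not a recommendation):

<Connector port="8080"
           protocol="HTTP/1.1"
           maxThreads="200"
           maxConnections="400"
           keepAliveTimeout="60000" />

i.e. cap connected sockets at a small multiple of maxThreads instead
of the 10,000 default.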

cheers
tim

On Tue, Apr 5, 2011 at 8:51 PM, Tim Whittington <t...@apache.org> wrote:
> In the AJP standard implementation docs, the following are not
> mentioned, although they're properties of AbstractEndpoint and
> probably should work:
> - bindOnInit
> - maxConnections
> Am I right in assuming these should be possible in the AJP connector
> (my reading of the code indicates they are - just wanted to check if
> something arcane was going on)?
>
> If so I'll update the docs.
>
> cheers
> tim
>
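
(If bindOnInit and maxConnections do turn out to work on the AJP
connector as the code suggests, I'd expect the config to look
something like this - values illustrative only:

<Connector port="8009"
           protocol="AJP/1.3"
           bindOnInit="false"
           maxConnections="300" />
)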
