On 5/4/2011 9:54 AM, Mark Thomas wrote:
On 04/05/2011 16:17, Filip Hanik - Dev Lists wrote:
On 5/3/2011 2:02 PM, Mark Thomas wrote:
In a similar fashion, we can also craft a test run that will yield a
substantial improvement over the old implementation in throughput.
So there is a test case to prove every scenario.
Could you outline what a test case looks like? It would help with the
general understanding of what problem maxConnections is trying to solve.

OK, we have an acceptor thread (AT) that calls ServerSocket.accept() in an
endless loop.
In the previous implementation, the AT would accept the socket, then wait for a
thread to become available to handle the connection.
New incoming connections would then be handled by the backlog in the operating system. The old implementation was extremely unfair in how it handled requests: some requests could get handled right away, while others could wait for long periods of time. As you may know, a client's connection may "die" in the backlog, at which point the client has to attempt a new connection.
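
Stripped down to the essentials, that old accept loop behaves roughly like the following (a simplified, hypothetical sketch, not the actual endpoint code; the 200 is maxThreads):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.Semaphore;

    // Hypothetical sketch of the old behavior, not the actual endpoint code.
    public class OldStyleAcceptor implements Runnable {

        private final ServerSocket serverSocket;
        // one permit per worker thread, i.e. maxThreads
        private final Semaphore workerAvailable = new Semaphore(200);

        public OldStyleAcceptor(ServerSocket serverSocket) {
            this.serverSocket = serverSocket;
        }

        public void run() {
            while (true) {
                try {
                    Socket socket = serverSocket.accept();
                    // block until a worker frees up; while we sit here, new
                    // connections pile up in the OS backlog
                    workerAvailable.acquire();
                    new Thread(() -> {
                        try {
                            // the thread is married to the socket: it serves
                            // every keep-alive request until the socket closes
                            handle(socket);
                        } finally {
                            workerAvailable.release();
                        }
                    }).start();
                } catch (IOException | InterruptedException e) {
                    break;
                }
            }
        }

        private void handle(Socket socket) {
            // read request, write response, loop while keep-alive, then close
        }
    }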

If you really want a simple test case, then use:
maxThreads=200
clients=200
keepalive=on

In the old impl, keep-alive would be turned off and performance would suffer, even though the system has plenty of resources to handle it (the old connector backs off keep-alive as the thread pool approaches saturation, since every kept-alive connection pins a thread). While this test case is very narrow and simple, it's the other extreme of the use case you presented.
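
A driver for that test can be as simple as the sketch below (hypothetical; assumes a plain HTTP/1.1 endpoint on localhost:8080 returning Content-Length responses):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Hypothetical sketch of the 200-client keep-alive test, not a tuned benchmark.
    public class KeepAliveClients {

        public static void main(String[] args) {
            for (int i = 0; i < 200; i++) {
                new Thread(KeepAliveClients::runClient).start();
            }
        }

        private static void runClient() {
            // one persistent connection per client, many requests on it
            try (Socket socket = new Socket("localhost", 8080)) {
                OutputStream out = socket.getOutputStream();
                BufferedReader in = new BufferedReader(new InputStreamReader(
                        socket.getInputStream(), StandardCharsets.US_ASCII));
                for (int i = 0; i < 1000; i++) {
                    out.write("GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
                            .getBytes(StandardCharsets.US_ASCII));
                    out.flush();
                    int contentLength = 0;
                    String line;
                    while ((line = in.readLine()) != null && !line.isEmpty()) {
                        if (line.toLowerCase().startsWith("content-length:")) {
                            contentLength = Integer.parseInt(line.substring(15).trim());
                        }
                    }
                    // consume the body so the next response starts cleanly;
                    // assumes Content-Length responses, not chunked encoding
                    char[] body = new char[contentLength];
                    for (int read = 0; read < contentLength; ) {
                        int n = in.read(body, read, contentLength - read);
                        if (n < 0) return;
                        read += n;
                    }
                }
            } catch (Exception e) {
                // a reset/refused connection is a data point: keep-alive was dropped
            }
        }
    }

If the behavior described above holds, the old connector should start turning keep-alive off under this load, while the new one keeps all 200 connections persistent.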

The new, queue-based implementation came about from the need to disconnect a thread from a socket, driven by the new async requirements. Previously, a thread was married to a socket for as long as the socket was alive.
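
As a sketch of the idea (not the actual endpoint code), the decoupling looks something like this:

    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical sketch: workers pull socket "events" from a shared queue
    // and process one request at a time, instead of owning a socket for life.
    public class SocketEventQueue {

        // FIFO order: connections are serviced in the order they became ready
        private final BlockingQueue<Socket> events = new LinkedBlockingQueue<>();

        public void offer(Socket socket) {
            events.add(socket);
        }

        // each of the maxThreads worker threads runs this loop
        public void workerLoop() throws InterruptedException {
            while (true) {
                Socket socket = events.take();
                boolean keepAlive = processOneRequest(socket);
                if (keepAlive) {
                    // back of the queue, not back to this thread; the real
                    // code would requeue only once more data is available
                    events.add(socket);
                }
            }
        }

        private boolean processOneRequest(Socket socket) {
            // parse one request, write one response; return true if the
            // connection stays open (keep-alive, or async still pending)
            return false;
        }
    }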

Anyway, with the new implementation, just like with NIO, there is no longer a stopper on the acceptor thread (AT): it will happily keep accepting connections until you run out of buffer space or port numbers. This presents a DoS risk, and that risk has existed in NIO for a while. So maxConnections has been put in place to stop accepting connections once the limit is hit and push new connections back into the backlog.

So maxConnections exists to stop the acceptor thread from taking in more than
the system can handle.
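
On the accept side the limit amounts to something like this (a plain Semaphore to illustrate; the actual implementation may differ):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.Semaphore;

    // Hypothetical sketch: maxConnections as a permit count on the acceptor.
    public class LimitedAcceptor implements Runnable {

        private final ServerSocket serverSocket;
        private final Semaphore connectionLimit;   // maxConnections permits

        public LimitedAcceptor(ServerSocket serverSocket, int maxConnections) {
            this.serverSocket = serverSocket;
            this.connectionLimit = new Semaphore(maxConnections);
        }

        public void run() {
            while (true) {
                try {
                    // once maxConnections permits are out, stop accepting;
                    // new connections now queue in the OS backlog instead
                    connectionLimit.acquire();
                } catch (InterruptedException e) {
                    break;
                }
                Socket socket;
                try {
                    socket = serverSocket.accept();
                } catch (IOException e) {
                    connectionLimit.release();     // permit was never used
                    break;
                }
                dispatch(socket);
            }
        }

        private void dispatch(Socket socket) {
            // hand the socket to the processing queue; whoever eventually
            // closes it must call connectionLimit.release()
        }
    }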

Here is what I propose, and you'll see that it's pretty much in line with
what you suggest.
Yep. That works for me. I do have some additional questions around
maxConnections - mainly so I can get the docs right.

c) remove the configuration options for maxConnections from the BIO
connector
I think you still misunderstand why maxConnections is there; at some
point you need to push back on the TCP stack.
Some more detail on exactly the purpose of maxConnections would be
useful. The purposes I can see are:
- limiting connections since the addition of the queue means they are
not limited by maxThreads
Correct. A system with
maxThreads=200 should be able to handle connections=500 with keep-alive on and
perform very well.
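
For example, something along these lines (illustrative values only; check the connector docs for the exact attribute names and defaults):

    <Connector port="8080" protocol="HTTP/1.1"
               maxThreads="200"
               maxConnections="500" />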

- fair (order received) processing of connections?
Correct. Almost no clients use pipelined requests, so the chance that there is already data waiting on a just-finished request is very slim. It is more probable that there is data on a connection whose request finished earlier in the cycle.

I hope that explains it. And with the config options/defaults I suggested, you'll get the exact behavior of the old connector by default, but can still benefit from the new connector logic.
- ?

Mark


