Hi all, I was going through the Tomcat 5.5 code and noticed something that causes a problem for my service: users can hold connections open even after my servlet indicates to Tomcat that the connection should be dropped. Specifically, my servlet replies with SC_REQUEST_ENTITY_TOO_LARGE and expects Tomcat to close the connection afterwards. Unfortunately, as the code below shows, the processor will happily continue consuming as many bytes as the client sends, which is extra painful when the client is sending, say, 1 byte per second.
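For reference, the slow client in my test looks roughly like this (a minimal sketch, not my exact test code; the path `/upload`, the class name, and the port are placeholders, and it assumes Tomcat is listening on localhost:8080):

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SlowClient {

    // Build a POST whose declared Content-Length far exceeds the
    // servlet's limit, so the servlet answers 413 immediately.
    static String buildRequest(int contentLength) {
        return "POST /upload HTTP/1.1\r\n"
             + "Host: localhost\r\n"
             + "Content-Length: " + contentLength + "\r\n"
             + "\r\n";
    }

    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 8080)) {
            OutputStream out = socket.getOutputStream();
            out.write(buildRequest(1_000_000)
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Trickle the body at 1 byte per second. Even after the 413
            // response arrives, the processor thread keeps blocking in
            // SocketInputStream.socketRead0 to swallow these bytes.
            for (int i = 0; i < 1_000_000; i++) {
                out.write('x');
                out.flush();
                Thread.sleep(1000);
            }
        }
    }
}
```

Running 50 of these in parallel against a small thread pool reproduces the tied-up-processor behavior described below.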
Relevant code (sorry if this isn't the appropriate way to reference it):

  http://www.docjar.com/html/api/org/apache/coyote/http11/Http11Processor.java.html#x894
  http://www.docjar.com/html/api/org/apache/coyote/http11/InternalInputBuffer.java.html#x368
  http://www.docjar.com/html/api/org/apache/coyote/http11/filters/ChunkedInputFilter.java.html#x178

I wrote a little test where the servlet immediately responds with SC_REQUEST_ENTITY_TOO_LARGE to a client that is slowly sending bytes over the wire, and the thread that returned that response shows this stack trace 30 seconds later:

"http-8080-Processor8" daemon prio=1 tid=0x0850eb60 nid=0x23bd runnable [0xaed40000..0xaed40780]
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:747)
    at org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:777)
    at org.apache.coyote.http11.filters.IdentityInputFilter.end(IdentityInputFilter.java:160)
    at org.apache.coyote.http11.InternalInputBuffer.endRequest(InternalInputBuffer.java:368)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:881)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:744)
    at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
    at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
    at java.lang.Thread.run(Thread.java:595)

There's a perplexing caveat: I configured my server with a maximum of 1 thread, but there are still 10 HTTP processors. The first eight will sit in the above state forever, consuming bytes.
The last two somehow manage to close the connection, so with my test client's 50 threads, the first 8 tie up the first 8 connections, and the remaining 42 get rejected one at a time by the last 2 server threads. Sorry if this is a well-known issue, if it has been fixed in 6.0, or if it's actually correct behavior. I'm just trying to figure out a sensible way of preventing malicious (or just dumb) users from causing this particular DoS scenario. Thanks! Aditya