Jess,

I haven't thought very carefully about the case of a saturated pool, but nevertheless some hints:

1) There is always one thread waiting in the accept call, and it is a regular pool thread. So an offset of one between the threads processing requests and the pool size is normal.
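
To put numbers on it (my reading of the accounting, using the values from your test below, so treat it as an assumption):

   maxThreads="51" (Tomcat)   ->  1 thread waiting in accept
                                + 50 threads left to process requests
   max=50 (mod_proxy)         ->  at most 50 AJP connections per httpd process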

2) There is no orderly shutdown of a connection in AJP13, neither from the httpd side nor from the Tomcat side. mod_jk will detect closed connections and reopen them, but I have the impression that mod_proxy marks a backend as failed when it gets a connection error.

In my experience Tomcat reliably closes a connection and frees the thread when the web server closes its side.

Most of the time it makes sense to have a connectionTimeout (milliseconds, Tomcat) and a connection_pool_timeout (seconds, mod_jk) or ttl (seconds, mod_proxy) that are kept in sync. But they will never harmonize completely, because mod_jk only checks whether it needs to throw away closed connections during maintenance (every 60 seconds, or whatever is configured via worker.maintain), and I think mod_proxy checks the ttl whenever a connection is put back into the pool.
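
As a sketch only (the timeout values here are made up for illustration, and the worker name "ajp13" is just a placeholder), keeping those settings roughly aligned could look like this:

   Tomcat server.xml (milliseconds):
      <Connector port="8010" protocol="AJP/1.3" connectionTimeout="600000" />

   mod_jk workers.properties (seconds):
      worker.ajp13.connection_pool_timeout=600
      worker.maintain=60

   mod_proxy (seconds):
      BalancerMember ajp://localhost:8010 ttl=600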

3) Does it happen with mod_jk too?

4) Weird guesses:

- With mod_proxy, max is limited to the number of threads per process configured in your MPM (worker?).

This is 25 by default. So if we want an easy-to-understand scenario, set your threads per process to, say, 60 and min=max=50, and remove smax and ttl (see the sketch below). That way 50 connections should be started on startup (per httpd process; check with netstat), and we shouldn't see any resizing during your ab test. Now start your ab test before Tomcat times the connections out (I remember connectionTimeout has some default value even if you don't set it, but it is in the minutes).

If you don't run into trouble then, we know your observation has to do with resizing of connection pools.
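
A sketch of that simplified setup on the httpd side (assuming the worker MPM; the values are only illustrative and your Tomcat connector stays as it is):

   # worker MPM: threads per process
   ThreadsPerChild 60

   # fixed-size pool: no smax, no ttl
   BalancerMember ajp://localhost:8010 min=50 max=50 keepalive=Off timeout=900

After startup, something like "netstat -an | grep 8010" should show whether each httpd child really has its connections established before you fire off ab.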

*Maybe*: ab is too fast and comes back with new requests faster than the connections are returned to the pool after a request ends in httpd. Not very likely, but possible. Of course the pool is synchronized, and the lock is not required to behave fairly, i.e. when it gets contended it is not clear whether the oldest waiting thread gets it first.

Regards,

Rainer

Jess Holle wrote:
Hmmmm....

I just redid my tests with:

   BalancerMember ajp://localhost:8010 min=15 max=*50* smax=30 ttl=900 keepalive=Off timeout=900

and

   <Connector port="8010"
              minSpareThreads="15" maxSpareThreads="30"
              maxThreads="*51*" acceptCount="*0*"
              tomcatAuthentication="false"
              useBodyEncodingForURI="true" URIEncoding="UTF-8"
              enableLookups="false" redirectPort="8443"
              protocol="AJP/1.3" />

and

   ab -n 100 -c *50* http://jessh03l.ptcnet.ptc.com/TestApp/test.jsp?secs=3

I (after about 3 seconds) get

   SEVERE: All threads (51) are currently busy, waiting. Increase
   maxThreads (51) or check the servlet status

and I eventually get exactly 1 lost request. I'm baffled as to why this can occur.

Something still doesn't seem quite right here.

What's even weirder is that I only get this issue with the first ab run after restarting Tomcat. If I do the same test again (any number of times) I don't lose any requests.

I can get the same result by restarting and doing 2 ab runs with "-n 100" in fairly short succession, so this isn't some ab error. By "fairly short", I don't mean very short -- I left a good 5 seconds between runs.

I find that using a max of 49 in Apache seems to work reliably, but I'm struggling to understand: (1) why I have to allow 2 more Tomcat connector threads than the number of connections I can possibly have, and (2) whether 2 is always a safe buffer or whether I'll need 9 on an 8-CPU box or some such.

--
Jess Holle
