Rainer Jung wrote:
> Jess,
> I didn't really carefully think about the case of a saturated pool.
> But nevertheless, some hints:
> 1) There is always one thread waiting for the accept, which is a usual
> pool thread. So an offset of one between threads processing requests
> and the pool size is normal.
Got that, but I've accounted for that via maxThreads of 51 and a
BalancerMember max of 50. I'm left wondering why there's an off-by-one
beyond this (and thus an off-by-2 overall).
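For reference, the configuration under discussion looks roughly like
this (host, port and context are placeholders of mine, not the actual
setup):

    # httpd.conf (mod_proxy_ajp): at most 50 backend connections
    <Proxy balancer://tomcat>
        BalancerMember ajp://localhost:8009 max=50
    </Proxy>
    ProxyPass /app balancer://tomcat/app

    <!-- server.xml: 50 workers plus one thread for the accept -->
    <Connector port="8009" protocol="AJP/1.3" maxThreads="51" />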
> 2) There is no orderly shutdown in AJP13 for a connection, neither
> from the httpd side nor from the Tomcat side. mod_jk will detect
> closed connections and reopen them, but I have the impression that
> mod_proxy fails a backend when it gets a connection error.
> In my experience Tomcat reliably closes a connection and frees the
> thread when the web server does.
> Most of the time it makes sense to have a connectionTimeout
> (milliseconds) and a connection_pool_timeout (seconds, mod_jk) or ttl
> (seconds, mod_proxy) which are in sync. But they will never harmonize
> completely, because mod_jk only checks for the need to throw away
> closed connections during maintenance (every 60 seconds or whatever is
> configured with worker.maintain), and I think mod_proxy checks the ttl
> whenever a connection is put back into the pool.
I don't think any of those should be involved in this short test.
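That said, the sort of in-sync timeout configuration he's describing
would look roughly like this (worker name and the 60-second value are
illustrative only):

    <!-- server.xml: drop idle AJP connections after 60 s (value in ms) -->
    <Connector port="8009" protocol="AJP/1.3" connectionTimeout="60000" />

    # workers.properties (mod_jk): pool timeout in seconds; idle checks
    # only run during maintenance (worker.maintain, default 60 s)
    worker.ajp13w.type=ajp13
    worker.ajp13w.host=localhost
    worker.ajp13w.port=8009
    worker.ajp13w.connection_pool_timeout=60

    # httpd.conf (mod_proxy): ttl in seconds, checked when a connection
    # is returned to the pool
    BalancerMember ajp://localhost:8009 ttl=60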
> 3) Does it happen with mod_jk too?
I believe so. I did some mod_jk testing to verify the overall nature of
the issue remained the same, but I didn't go through all the same
detailed tests. I could redo this particular test if that's helpful.
> 4) Weird guesses:
> - max is limited with mod_proxy to the number of threads per process
> configured in your MPM (worker?).
> This is 25 by default. So if we want an easy-to-understand scenario,
> set your threads per process to, say, 60 and min=max=50, as well as
> removing the smax and the ttl. That way 50 connections should be
> started on startup (per httpd process; check with netstat), and we
> shouldn't see any resizing during your ab test. Now start your ab test
> before Tomcat times out (I remember that connectionTimeout has some
> default value, even if you don't set it, but it is in the minutes).
My Apache MaxClients is 300 and this is on Windows so I only have 1
worker process.
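His suggested test setup would look roughly like this (hypothetical
host/port; on Windows the threads-per-process knob is ThreadsPerChild
under mpm_winnt rather than the worker MPM):

    # httpd.conf: keep the pool pinned at 50, no shrinking, no ttl
    ThreadsPerChild 60

    <Proxy balancer://tomcat>
        # min=max=50, no smax/ttl, so the pool should never resize
        BalancerMember ajp://localhost:8009 min=50 max=50
    </Proxy>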
> If you don't run into trouble then, we know your observation has to do
> with resizing of connection pools.
> *Maybe*: ab is too fast and can come back with new requests faster
> than the connections go back into the pool after the end of a request
> in httpd. Not very reasonable, but possible. Of course the pool is
> synchronized and the lock doesn't need to behave fairly, i.e. when it
> gets contended, it's not clear whether the oldest waiting thread gets
> it first.
I believe I disproved this at some point by running two ab tests with
-n 50 and -c 50 manually back to back, but I'd have to re-test to be
sure. [I wish I'd taken better notes of the various results...]
Apart from this weird edge condition (an off-by-2 isn't that devastating
if it stays "2" in all cases), the thing that gets me overall is that
the documentation really makes it sound like "acceptCount" works like a
fair queue and that there's no harm in exceeding maxThreads except that
requests will be queued. As Bill suggested, I should come up with
suggested patches to the documentation -- I'm just not yet confident
enough in my understanding to propose such patches. At this point all I
can propose is strong warning verbiage around maxThreads and acceptCount
regarding their behavior for the Java AJP connector.
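For concreteness, the two attributes in question live on the AJP
connector (values below are purely illustrative):

    <!-- server.xml:
         maxThreads  = size of the request-processing thread pool
         acceptCount = backlog handed to the connector's server socket,
                       i.e. the OS accept queue -- not a fair, ordered
                       request queue -->
    <Connector port="8009" protocol="AJP/1.3"
               maxThreads="50" acceptCount="10" />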
--
Jess Holle