Hello,

I am trying to configure Tomcat to handle a very large number (5000) of 
simultaneous client connections over HTTP with keepalive. All the clients 
produce an equal, constant load.
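
For context, a connector tuned for this kind of load looks roughly like the sketch below. The attribute values are illustrative assumptions, not my exact settings:

```xml
<!-- Illustrative values only. With the default blocking HTTP/1.1 connector,
     each kept-alive connection pins a worker thread for its whole lifetime,
     so maxThreads has to cover every simultaneous client. -->
<Connector port="8080"
           maxThreads="5000"
           acceptCount="100"
           maxKeepAliveRequests="-1"
           connectionTimeout="20000" />
```

(On 6.0 the NIO connector, protocol="org.apache.coyote.http11.Http11NioProtocol", can avoid dedicating a thread to each idle keepalive connection, but the thread dump below is from the blocking connector.)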

The problem is this: below full CPU saturation everything works fine, and 
clients are served equally with very low response times (<50 ms). But once I 
push past full server saturation, one would expect the server to keep serving 
all clients, just with correspondingly higher response times. That does not 
appear to be the case: when I increase the number of clients, a nearly 
constant number of them continues to be served with low response times, while 
the rest are not served at all; their connections simply stall.

I am using 5.5 now, but I checked the 6.0 sources and the implementation 
appears to be the same.

I took a thread dump of Tomcat and found the following:

About one third of the threads are in blocking socket reads:

"http-10.68.1.19-8080-3077" daemon prio=1 tid=0xa99420a0
nid=0x29c6 runnable [0x45107000..0x45107e30]
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)

which is as expected, but about two thirds of the threads are waiting on a single lock:

"http-10.68.1.19-8080-3078" daemon prio=1 tid=0xa9942e60
nid=0x29c7 waiting for monitor entry [0x45086000..0x45086eb0]
    at 
org.apache.catalina.util.InstanceSupport.fireInstanceEvent(InstanceSupport.java:180)
    - waiting to lock <0xb2c337a8> (a [Lorg.apache.catalina.InstanceListener;)
    at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:187)

So I am wondering whether this could be the problem. Is it necessary for this 
implementation to be synchronized?
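
To illustrate what I mean, here is a small self-contained sketch (my own simplification, not the actual Tomcat source) of the contended pattern versus a lock-free alternative. Firing an event through a single shared monitor serializes every request thread on that one lock; a CopyOnWriteArrayList lets each thread iterate a stable snapshot with no lock held:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

public class ListenerContention {

    // Contended pattern (simplified sketch): one monitor guards the shared
    // listener list, so every thread firing an event queues on it.
    static class SynchronizedSupport {
        private final List<Runnable> listeners = new java.util.ArrayList<>();
        void addListener(Runnable l) {
            synchronized (listeners) { listeners.add(l); }
        }
        void fire() {
            synchronized (listeners) {              // all threads serialize here
                for (Runnable l : listeners) l.run();
            }
        }
    }

    // Alternative: CopyOnWriteArrayList copies on mutation, so iteration
    // needs no lock at all and firing threads never block each other.
    static class CopyOnWriteSupport {
        private final List<Runnable> listeners = new CopyOnWriteArrayList<>();
        void addListener(Runnable l) { listeners.add(l); }
        void fire() {
            for (Runnable l : listeners) l.run();   // lock-free iteration
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger count = new AtomicInteger();
        CopyOnWriteSupport support = new CopyOnWriteSupport();
        support.addListener(count::incrementAndGet);

        // 8 threads each fire 1000 events concurrently, with no shared lock.
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) support.fire();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(count.get()); // prints 8000
    }
}
```

If fireInstanceEvent only needs a consistent snapshot of the listener array while firing, a copy-on-write scheme like this would seem to remove the monitor that two thirds of my threads are queued on.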

Thanks for any comments,

Dominik

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
