org.apache.catalina.util.InstanceSupport.fireInstanceEvent

2007-02-22 Thread Dominik Pospisil
Hello,

I am trying to configure Tomcat to handle a very large number (5000) of 
simultaneous client connections over HTTP with keepalive. All the clients 
produce an equal, constant load.

The problem is that below maximum CPU load everything works fine: clients are 
served equally, with very low response times (<50 ms). Once I push the server 
past full saturation, I would expect it to still serve all the clients, just 
with correspondingly higher response times. But that does not seem to be the 
case. When I increase the number of clients, a nearly constant number of them 
keeps being served with low response times, while the rest are not served at 
all; their connections are stalled.

I am using 5.5 now, but I checked the 6.0 sources and the implementation seems 
to be the same.

I took a Tomcat thread dump and found the following:

About 1/3 of the threads are waiting in:

"http-10.68.1.19-8080-3077" daemon prio=1 tid=0xa99420a0
nid=0x29c6 runnable [0x45107000..0x45107e30]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)

which is perfect, but about 2/3 of the threads are waiting for a single lock in:

"http-10.68.1.19-8080-3078" daemon prio=1 tid=0xa9942e60
nid=0x29c7 waiting for monitor entry [0x45086000..0x45086eb0]
at 
org.apache.catalina.util.InstanceSupport.fireInstanceEvent(InstanceSupport.java:180)
- waiting to lock <0xb2c337a8> (a [Lorg.apache.catalina.InstanceListener;)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:187)

So I am wondering whether this could be the problem. Is it necessary for this 
implementation to be synchronized?
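
To make the question concrete, this is the kind of pattern the stack trace 
points at, as I understand it (my own simplified sketch with stubbed-out names, 
not the actual InstanceSupport source):

public class InstanceSupportSketch {

    /** Stub standing in for org.apache.catalina.InstanceListener. */
    public interface ListenerStub {
        void instanceEvent(String type);
    }

    private ListenerStub[] listeners = new ListenerStub[0];

    public synchronized void addListener(ListenerStub listener) {
        ListenerStub[] grown = new ListenerStub[listeners.length + 1];
        System.arraycopy(listeners, 0, grown, 0, listeners.length);
        grown[listeners.length] = listener;
        listeners = grown;
    }

    public void fireInstanceEvent(String type) {
        ListenerStub[] interested;
        // Every request going through the filter chain ends up here, and every
        // call takes the same monitor, so all worker threads queue up on it.
        synchronized (listeners) {
            interested = listeners.clone();   // sync + clone on every event
        }
        for (ListenerStub listener : interested) {
            listener.instanceEvent(type);
        }
    }
}

If firing an event only needs a consistent snapshot of the array, the lock on 
the hot path looks avoidable to me.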

Thanks for any comments,

Dominik




Re: org.apache.catalina.util.InstanceSupport.fireInstanceEvent

2007-02-28 Thread Dominik Pospisil
> Dominik Pospisil wrote:
> > So I am wondering whether this could be the problem. Is it necessary for
> > this implementation to be synchronized?
>
> Given the implementation, you are not supposed to be using instance
> listeners at this time except for debugging purposes.
>
> Since InstanceSupport is array based, I don't see the point of the
> sync+clone which happens in there.
>
> Rémy

Rémy,

thanks for fixing that issue. In my test I can see a ~30% improvement in the 
number of concurrent sessions correctly served by a single Tomcat instance. But 
it still did not solve the problem completely.
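
Just to check my understanding of the change: I assume the read path now 
publishes the listener array through a volatile field and skips the lock and 
the clone entirely, something along these lines (my own sketch with made-up 
names, not the actual commit):

public class LockFreeInstanceSupportSketch {

    public interface ListenerStub {
        void instanceEvent(String type);
    }

    // Writers build a new array and publish it with a volatile write;
    // readers just load the reference, so firing an event takes no monitor.
    private volatile ListenerStub[] listeners = new ListenerStub[0];

    public synchronized void addListener(ListenerStub listener) {
        ListenerStub[] grown = new ListenerStub[listeners.length + 1];
        System.arraycopy(listeners, 0, grown, 0, listeners.length);
        grown[listeners.length] = listener;
        listeners = grown;   // safe publication via the volatile field
    }

    public void fireInstanceEvent(String type) {
        ListenerStub[] snapshot = listeners;   // single volatile read, no lock, no clone
        for (ListenerStub listener : snapshot) {
            listener.instanceEvent(type);
        }
    }
}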

My idea is that, given enough memory and I/O resources, Tomcat should be able 
to handle all the clients "equally". So if the clients produce an equal, 
constant load, they should all be served, and with the same average response 
times. Do you think that is achievable?

The question is what exactly "equally" should mean. I know that in a real 
scenario there are various clients with different connections and injection 
rates, so a simple FIFO rule would not be sufficient. Moreover, I am new to 
Tomcat internals, and at this point I have no idea how it should work at all.

But what about the general idea of having some scheduler that controls thread 
execution? Is that a good idea, or completely wrong?
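
To make the idea a bit more concrete, this is roughly what I am picturing, for 
now implemented at the webapp level rather than inside Tomcat (made-up names, 
just a sketch using a fair semaphore as the "scheduler"):

import java.io.IOException;
import java.util.concurrent.Semaphore;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class FairAdmissionFilter implements Filter {

    // fair = true: the longest-waiting request gets the next permit, so a few
    // connections cannot starve the rest once the server is saturated.
    private final Semaphore permits = new Semaphore(200, true);

    public void init(FilterConfig config) throws ServletException {
        // the permit count could be read from an init-param instead of being hard-coded
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            permits.acquire();          // requests queue here in FIFO order
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new ServletException("Interrupted while waiting for a permit", e);
        }
        try {
            chain.doFilter(request, response);   // actual request processing
        } finally {
            permits.release();
        }
    }

    public void destroy() {
    }
}

Of course that only reorders requests that already have a worker thread, so it 
probably does not help the stalled connections themselves.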

Thanks for any comments,

Dominik
