Dominik Pospisil wrote:
So I am wondering whether this could be the problem. Is it necessary for
this implementation to be synchronized?
Given the implementation, you are not supposed to be using instance
listeners at this time except for debugging purposes.

Since InstanceSupport is array based, I don't see the point of the
sync+clone which happens in there.

Rémy

Rémy,

thanks for fixing that issue. In my test I can see a ~30% improvement in the number of concurrent sessions correctly served by a single Tomcat instance. But it still did not solve the problem completely.

My idea is that, given enough memory and I/O resources, Tomcat should be able to handle all clients "equally": clients that produce an equal, constant load should all be served, with the same average response times. Do you think that is achievable?

The question is what "equally" should mean, exactly. I know that in a real scenario there are various clients with different connections and injection rates, so a simple FIFO rule would not be sufficient. Moreover, I am new to Tomcat internals, and at this point I have no idea how it should work at all.

But what about the general idea of having a scheduler that would somehow control thread execution? Is that a good idea, or something completely wrong?
There is no scheduler in Tomcat, but the equivalent logic is implemented.
For the blocking connector, connections are simply handled in the order they are accepted. On the regular connector, the notion of a scheduler is moot since you will never have more connections than you have threads, so it is up to the operating system's scheduler to determine how threads are swapped onto the CPU.
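As a toy model (my own simplification, not Tomcat source), the blocking hand-off described above can be sketched with a SynchronousQueue: the acceptor's put() blocks until a worker take()s, so processing starts strictly in accept order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.SynchronousQueue;

// Toy model of a blocking connector's acceptor (my own simplification, NOT
// Tomcat source). "Connections" are just the integers 0..n-1, accepted in
// order; put() on the SynchronousQueue blocks until the single worker is
// free, mirroring "the acceptor blocks until a thread is available".
class BlockingAcceptorSketch {
    static List<Integer> acceptAll(int n) {
        List<Integer> served = new ArrayList<>();
        SynchronousQueue<Integer> handoff = new SynchronousQueue<>();
        Thread worker = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) {
                    served.add(handoff.take()); // process in hand-off order
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();
        try {
            for (int i = 0; i < n; i++) {
                handoff.put(i); // blocks: the acceptor never outruns the worker
            }
            worker.join(); // join gives a happens-before edge, so reading served is safe
        } catch (InterruptedException ignored) { }
        return served;
    }
}
```

With a single worker, the served order is exactly the accept order.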

For the Tomcat APR connector it is the same as above: when a socket is accepted, the acceptor blocks until a thread is available to process the request. For keep-alive connections, when poll() returns events, the connections are handled in that exact order.
The APR poller likewise blocks until a worker thread is available.

The Tomcat 6 NIO connector works a little differently. Upon accepting a connection, the NIO acceptor doesn't block waiting for a worker thread; instead it registers the connection with the poller. The idea behind this is that an accepted socket doesn't necessarily have data ready to be read, so there is no need to block a thread for it. The NIO poller never blocks either: if a socket is ready for read but no worker threads are available, the poller simply processes other events, and the socket gets handed off once a thread becomes available.

So of these three scenarios, the only implementation that doesn't follow a strict first-come-first-served order is the NIO implementation.
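A toy, single-threaded model of that difference (my own illustration, not Tomcat's Poller code): the poller dispatches a ready socket only when a worker happens to be free, and otherwise re-queues the event and keeps going, which is exactly why strict arrival order is not preserved.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

// Toy, single-threaded model of the NIO poller loop (my own illustration,
// NOT Tomcat's Poller class). Sockets 1..3 become readable in order, but the
// lone worker stays busy for two poll passes after each dispatch, so the
// poller re-queues socket 2 instead of blocking, and socket 3 ends up
// being served before socket 2.
class NioPollerSketch {
    static List<String> run() {
        Deque<Integer> ready = new ArrayDeque<>(Arrays.asList(1, 2, 3));
        int busyTicks = 0;                    // poll passes until the worker frees up
        List<String> log = new ArrayList<>();
        while (!ready.isEmpty()) {
            int socket = ready.poll();
            if (busyTicks == 0) {             // non-blocking check for a free worker
                log.add("dispatch " + socket);
                busyTicks = 2;                // worker busy for the next two passes
            } else {
                log.add("requeue " + socket); // the poller never blocks here
                ready.add(socket);            // handed off on a later pass
            }
            busyTicks = Math.max(0, busyTicks - 1);
        }
        return log;
    }
}
```

Running it logs socket 3 being dispatched while socket 2 is still waiting, the out-of-order behaviour described above.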

In a recent test with 12k concurrent connections, the equality looks very good; example results are below.

Let me know if you have any more questions.
Filip


Server Software:        Apache-Coyote/1.1
Server Hostname:        testhost
Server Port:            8080

Document Path:          /load/bd?size=64
Document Length:        65536 bytes

Concurrency Level:      12000
Time taken for tests:   290.520087 seconds
Complete requests:      1200000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    1200000
Total transferred:      78958306058 bytes
HTML transferred:       78771832843 bytes
Requests per second:    4130.52 [#/sec] (mean)
Time per request:       2905.201 [ms] (mean)
Time per request:       0.242 [ms] (mean, across all concurrent requests)
Transfer rate:          265412.72 [Kbytes/sec] received

Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:        0   80 1342.3      0   93017
Processing:   523 2804 955.4   2510    9082
Waiting:      161 2382 957.0   2099    8186
Total:        523 2884 1739.6   2513  100406

Percentage of the requests served within a certain time (ms)
 50%   2513
 66%   2693
 75%   2825
 80%   3659
 90%   4521
 95%   4946
 98%   5325
 99%   6050
100%  100406 (longest request)

Thanks for any comments,

Dominik

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




