Mladen Turk wrote:
Filip Hanik - Dev Lists wrote:
The processSocketWithOptions is a blocking call, hence you won't be
able to accept new connections as long as your worker threads are
all busy.
Not entirely true.
What we need to do is set the socket options, then simply add the
socket to the poller waiting for a read event; the poller will assign
it a worker thread when one is available and the socket has data to
be read.
This was one of the major improvements I just did in the NIO
connector; otherwise you can't accept connections fast enough and
they will get dropped.
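(Roughly, the accept path then looks like the sketch below. This is
illustrative only, not the actual connector code; the Poller type and its
register() method are made-up names, sketched further down-thread.)

    import java.io.IOException;
    import java.net.StandardSocketOptions;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    // Acceptor sketch: accept, set options, hand off to the poller, loop.
    // Nothing here waits on a worker thread, so accepts are never starved.
    class Acceptor implements Runnable {
        private final ServerSocketChannel server;
        private final Poller poller; // hypothetical Poller, sketched further down

        Acceptor(ServerSocketChannel server, Poller poller) {
            this.server = server;
            this.poller = poller;
        }

        public void run() {
            try {
                while (true) {
                    SocketChannel socket = server.accept(); // blocks only for new connections
                    socket.configureBlocking(false);        // required before Selector registration
                    socket.setOption(StandardSocketOptions.TCP_NODELAY, true);
                    poller.register(socket);                // non-blocking hand-off
                }
            } catch (IOException e) {
                // shutdown / error handling elided in this sketch
            }
        }
    }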
Basically, you simulate the OS socket implementation's pending-connections
queue (the backlog) at OSI layer 7 instead of depending on the
transport-layer queue provided by the OS.
Well, you'd be nuts to run your webserver without a kernel filter,
either first-byte accept (TCP_DEFER_ACCEPT) on Linux or the http accept
filter (accf_http) on FreeBSD. My point goes beyond just accepting
connections.
So if we use those filters, we are not simulating the backlog; the
backlog is not needed if the OS, through a kernel filter, already takes
care of accepting the connection from the client.
By relying on the backlog, you're simply running the risk of not
serving those connections. The backlog should only be used if your
acceptor can't accept connections fast enough.
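(To be clear, the backlog I mean is just the hint you pass when binding
the listening socket. A minimal Java sketch, port and backlog numbers
made up:)

    import java.net.InetSocketAddress;
    import java.nio.channels.ServerSocketChannel;

    public class BacklogDemo {
        public static void main(String[] args) throws Exception {
            // The backlog argument is only a hint to the OS: how many completed
            // connections may queue before the acceptor thread calls accept().
            // Once that queue is full, further connection attempts are refused.
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080), 100); // 100 = backlog hint
            // ... acceptor loop would run here ...
        }
    }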
The only beneficiary of that would be a lab test environment where you
have lots of burst connections, then a void, then a connection burst
again (not a real-life situation, though). The point is that no matter
how large your queue is (and how it's done), if the connection rate is
higher than the processing rate, your connections will be rejected at some
point. So, it's all about tuning.
You're missing the point. I believe I mentioned in the article that
the challenge during these extreme concurrency conditions (burst or no
burst) will become fairness. I'm in the process of implementing
connection fairness in the NIO connector. Relying on
synchronized/wait/notify from multiple threads (acceptor and poller)
does not guarantee you anything, and you completely lose control of which
connections get handled vs. which should get handled.
So I'm simplifying it, since it's easier to implement proper fairness on
a single thread than on multiple threads. Hence, when an accept occurs,
set the socket options and register the socket with the poller.
Then let the poller become the "scheduler", if you will, i.e. the
poller gets to decide which connections get handled. And to properly
achieve this, neither the poller nor the acceptor can get stuck in a
synchronized/wait state.
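(A skeleton of what I mean, with hypothetical names and the fairness
policy itself elided. The acceptor never touches the Selector; it only
offers to a lock-free queue, so neither thread ever parks in
synchronized/wait on the other:)

    import java.io.IOException;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Poller sketch: a single thread that owns the Selector outright.
    class Poller implements Runnable {
        private final Selector selector;
        private final ConcurrentLinkedQueue<SocketChannel> pending =
                new ConcurrentLinkedQueue<SocketChannel>();

        Poller() throws IOException {
            selector = Selector.open();
        }

        // Called from the acceptor thread; never blocks. Sockets are
        // assumed to already be in non-blocking mode.
        void register(SocketChannel socket) {
            pending.offer(socket);
            selector.wakeup(); // break out of select() to pick up the new socket
        }

        public void run() {
            while (true) {
                try {
                    // 1. Drain hand-offs and register them for read events.
                    SocketChannel socket;
                    while ((socket = pending.poll()) != null) {
                        socket.register(selector, SelectionKey.OP_READ);
                    }
                    // 2. Wait for readable sockets.
                    selector.select(1000);
                    // 3. Decide, in this one thread, which connection gets a
                    //    worker next -- the single place where fairness lives.
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        key.interestOps(0);      // stop selecting it while it's handled
                        // dispatchToWorker(key); // hypothetical hand-off to the pool
                    }
                } catch (IOException e) {
                    break; // per-socket error handling elided in this sketch
                }
            }
        }
    }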
Remember, my goal is not simply to be able to accept as many connections
as possible; that would be pointless if I can't serve requests on them
anyway.
The ultimate goal is to have 20k connections and still handle them evenly.
Hope that makes sense; the article itself was just to demonstrate how
these implementations handled different situations. Although, I wouldn't
claim that a 500-connection burst is purely a lab-environment scenario.
Post an article on digg.com and you'll be running the risk of getting a
burst just like that :)
Filip
Regards,
Mladen.
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]