>------- Original Message -------
>From    : Arno Garrels [mailto:[EMAIL PROTECTED]]
>
>> The easiest way to exchange data between threads is to post a
>> message with a pointer to the data in the WParam argument. The
>> pointer can then be freed in the custom message handler.

> That's indeed the fastest way, since the thread does not have to
> wait.
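Setting the Win32 specifics aside, the ownership-transfer idea can be sketched with a thread-safe queue in Python — an analog for illustration, not the actual ICS/PostMessage mechanism: the sender allocates the data, posts it without waiting, and the receiver takes ownership when it handles the "message".

```python
import queue
import threading

# Analog of posting a message with a pointer in WParam: the worker
# allocates the data, posts it, and never touches it again; the
# receiving side takes ownership when it handles the "message"
# (in ICS the handler would free the pointer; here the GC does).
msgq = queue.Queue()

def worker():
    data = {"payload": "hello from worker"}  # allocated by the sender
    msgq.put(data)                           # post and return -- no waiting

t = threading.Thread(target=worker)
t.start()
t.join()

msg = msgq.get()  # the "custom message handler" side takes ownership
print(msg["payload"])  # -> hello from worker
```

The key property is the same as with PostMessage: the sending thread never blocks on the receiver, and exactly one side owns the data at any moment.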

However, if the main thread notifies the slave threads to quit, the
last thread to quit may still post messages (before receiving the
WM_QUIT message) to the first one to quit; those posts fail, and the
memory carried in the messages is never freed (until the application
finally exits).  I don't know whether this is a real concern, though.

> 1 - Stressing a server with 100 connection attempts per second is
> most likely not a real-world scenario, except during DoS attacks.

I agree.  However, this is very easily done by a
brain-dead developer using my queue client class in a
simple 'for' loop to send a lot of messages at once,
say, an announcement to all our customers.  I would
like to prevent this as much as possible by improving
connection acceptance speed on the server, or else
I'll have to cripple the client somehow.  Do not
underestimate the tenacity of morons. :)

> 2 - Run your stress tester against IIS or other servers; I found
> that they were not able to accept more clients per second than my
> server.

I'm sure this is true.

I can avoid the whole issue by designing the client application
responsibly: send the next connection request only after the previous
one triggers OnSessionConnected, or connect only a few clients at a
time and pause until they are done.  This not only improves server
performance, it also prevents an inadvertent DoS attack from an
application that needs to send lots of messages at once.
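The few-at-a-time idea can be sketched in Python (an analog, not the ICS client class; the 4-connection limit and the throwaway local server are illustrative assumptions): a semaphore caps how many connection attempts are in flight at once.

```python
import socket
import threading

MAX_IN_FLIGHT = 4                      # illustrative limit, not from ICS
gate = threading.Semaphore(MAX_IN_FLIGHT)

# a throwaway local server so the sketch is self-contained
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(5)
addr = srv.getsockname()

def accept_loop():
    for _ in range(20):
        conn, _ = srv.accept()
        conn.close()

threading.Thread(target=accept_loop, daemon=True).start()

def connect_once():
    with gate:                         # wait until one of the slots frees up
        s = socket.create_connection(addr)
        s.close()

threads = [threading.Thread(target=connect_once) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("20 clients connected, never more than", MAX_IN_FLIGHT, "at once")
```

A burst of 20 connect calls still reaches the server, but never more than the configured number at the same instant, so the listener's backlog is never flooded.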

> 3 - I played with different designs. 

Which would you consider to work best?

> The goal is to accept clients as fast as possible; once they are
> connected, it won't hurt to let them wait some milliseconds.

This is indeed my goal.

Would it make sense to have a pool of listening sockets in a separate
(single) thread that posts a message with the socket handle to the
(single) working thread?  That way connections can be established
quickly, and my server can continue doing its processing in a single
thread, so I don't have to redesign it right now.
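That split can be sketched in Python (an analog only — a queue stands in for posting the socket handle as a Windows message, and the local server details are illustrative): one thread does nothing but accept, and a single worker thread does all the protocol work.

```python
import queue
import socket
import threading

work = queue.Queue()                  # stands in for posted messages
stats = {"served": 0}

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(50)
addr = srv.getsockname()

def acceptor(n):
    for _ in range(n):
        conn, _ = srv.accept()        # accept as quickly as possible
        work.put(conn)                # "post the socket handle" onward
    work.put(None)                    # sentinel: no more clients

def worker():
    while True:
        conn = work.get()
        if conn is None:
            break
        conn.sendall(b"ok")           # all protocol work in one thread
        conn.close()
        stats["served"] += 1

threading.Thread(target=acceptor, args=(5,), daemon=True).start()
w = threading.Thread(target=worker)
w.start()

clients = [socket.create_connection(addr) for _ in range(5)]
replies = [c.recv(2) for c in clients]
for c in clients:
    c.close()
w.join()
print("worker served", stats["served"], "clients")
```

The acceptor never blocks on application logic, so the listen queue drains fast; the worker serializes everything else, which matches the "one working thread, no redesign" goal.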

   -dZ.

>Sent    : 11/29/2007 1:52:38 PM
>To      : [email protected]
>Cc      : 
>Subject : RE: Re: [twsocket] TWSocketServer and backlog
>
>DZ-Jay wrote:
> On Nov 29, 2007, at 06:10, Wilfried Mestdagh wrote:
> 
>> Hello DZ-Jay,
>> 
>> So the conclusion is that increasing the backlog:
>>    - decreases the performance of accepting connections
>>    - decreases the overall performance of the application
> 
> This seems to be the conclusion of both my tests and Huby's.

Strange, I never noticed anything like that.

> Perhaps I should run the TWSocketServer on its own thread, and post
> messages from the clients to the queue manager thread to do the work?
> Although this seems too complex and expensive.  It almost looks like
> each client should run on its own thread... :(

I'm not that sure:

1 - Stressing a server with 100 connection attempts per second is
most likely not a real-world scenario, except during DoS attacks.
2 - Run your stress tester against IIS or other servers; I found
that they were not able to accept more clients per second than my
server.
3 - I played with different designs.
    a) Listening sockets in one thread, client sockets in other
       thread(s).  This introduces a new problem: clients are accepted
       very fast, but the listening thread must synchronize with the
       client thread(s), which may take longer than with the current
       TWSocketServer.  I worked around that by posting just the
       socket handle to the thread, which was fast, but it was also
       rather complicated to handle all the client stuff/pool in the
       threads.
    b) Listening sockets in one thread, one thread per client.
       AFAIR, without a thread pool, accepting clients was slower
       than with TWSocketServer.
    c) I even hacked together a server that used M$ overlapped
       sockets; this was a rather disappointing exercise, since
       performance was the same as with (a).

The goal is to accept clients as fast as possible,
once they are 
connected it won't hurt to let them wait some
milliseconds.

Before you rewrite your application, I suggest you code some test
apps with different designs and compare their performance.

--
Arno Garrels

> 
> dZ.
> 
> --
> DZ-Jay [TeamICS]
>  http://www.overbyte.be/eng/overbyte/teamics.html 
-- 
To unsubscribe or change your settings for TWSocket
mailing list
please goto 
http://lists.elists.org/cgi-bin/mailman/listinfo/twsocket

Visit our website at  http://www.overbyte.be 

