Hi Filip and Rainer,

I found the following info on reducing the TIME_WAIT interval on Windows:

===============

The TIME_WAIT problem is a very common one for Windows NT systems. Unlike most Unix systems, Windows NT does not have a generic setting for modifying the TIME_WAIT interval. To change it, you have to create an entry in the Windows NT Registry (the information below is taken from http://www.microsoft.com):

1. Run Registry Editor (RegEdit.exe).
2. Go to the following key in the registry:
   HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\tcpip\Parameters
3. Choose Add Value from the Edit menu and create the following entry:
   Value Name: TcpTimedWaitDelay
   Data Type:  REG_DWORD
   Value:      30-300 (decimal) - time in seconds
   Default:    0xF0 (240 decimal); not present in the registry by default
4. Quit the Registry Editor.
5. Restart the computer for the registry change to take effect.
Description: This parameter determines the length of time that a connection stays in the TIME_WAIT state while being closed. While a connection is in the TIME_WAIT state, the socket pair cannot be reused. This is also known as the "2MSL" state, since per the RFC the value should be twice the maximum segment lifetime on the network. See RFC 793 for further details.

====
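
For reference, on Windows versions that ship reg.exe the same value can also be set from the command line (untested on my side; 30 seconds is just an example, and a reboot is still needed afterwards):

reg add "HKLM\System\CurrentControlSet\Services\Tcpip\Parameters" /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f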


Regards
Peter



Am 26.10.2006 um 20:58 schrieb Filip Hanik - Dev Lists:

That's some very good info. It looks like my system never goes over 30k, and cleaning them up seems to be working really well. By the way, do you know where I can change the cleanup intervals for the Linux 2.6 kernel?

I figured out what the problem was:
Somewhere I have a lock/wait problem

for example, this runs perfectly:
./ab -n 1 -c 100 http://localhost:$PORT/run.jsp?run=TEST$i

If I change -c 100 (100 sockets) to -c 1, each JSP request takes 1 second.

So what was happening in my test was: run 1000 requests over 400 connections, then invoke 1 request over 1 connection, and repeat. Every time I did the single-connection request, it hit a 1-second delay, and this caused the CPU usage to drop.
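
Roughly, the test pattern is something like this (just a sketch, using the same URL as above):

for i in $(seq 1 100); do
  ./ab -n 1000 -c 400 http://localhost:$PORT/run.jsp?run=TEST$i
  ./ab -n 1 -c 1 http://localhost:$PORT/run.jsp?run=TEST$i
done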

So basically, the NIO connector sucks majorly if you are a single user :), I'll trace this one down.
Filip


Rainer Jung wrote:
Hi Filip,

the fluctuation reminds me of something: depending on the client
behaviour, connections will end up in the TIME_WAIT state. Usually you run
into trouble (throughput stalls) once you have around 30K of them. They will be cleaned up every now and then by the kernel (talking about the Unix/Linux style mechanisms), and then throughput (and CPU usage) pick up
again.

With modern systems handling 10-20k requests per second, one can run into
trouble much faster than the usual cleanup intervals.

Check with "netstat -an" whether you can see a lot of TIME_WAIT connections
(thousands). If not, it's something different :(
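
A quick way to count them (assuming a Unix-style netstat):

netstat -an | grep -c TIME_WAIT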

Regards,

Rainer

Filip Hanik - Dev Lists schrieb:

Remy Maucherat wrote:

[EMAIL PROTECTED] wrote:

Author: fhanik
Date: Wed Oct 25 15:11:10 2006
New Revision: 467787

URL: http://svn.apache.org/viewvc?view=rev&rev=467787
Log:
Documented socket properties
Added in the ability to cache bytebuffers based on number of channels
or number of bytes
Added in nonGC poller events to lower CPU usage during high traffic

I'm starting to get emails again, so sorry for not replying.

I am testing with the default VM settings, which basically means that
excessive GC will have a very visible impact. I am testing to
optimize, not to see which connector would be faster in the real world
(probably neither unless testing scalability), so I think it's
reasonable.

This fixes the paranormal behavior I was seeing on Windows, so the NIO
connector works properly now. Great! However, I still have NIO, which
is slower than java.io, which is slower than APR. It's ok if some
solutions are better than others on certain platforms, of course.


thanks for the feedback. I'm testing with larger files now, 100k+, and
also see APR->JIO->NIO.
NIO has a very funny CPU telemetry graph; it fluctuates way too much, so
I have to find where in the code it would do this. There is still
some work to do.
I'd like to see nearly flat CPU usage when running my test, but
instead the CPU goes from 20-80%, up and down, up and down.

during my test:

(for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400 http://localhost:$PORT/104k.jpg 2>&1 | grep "Requests per"; done)

my memory usage goes up to 40MB, then after a FullGC it goes down to
10MB again, so I want to figure out where that comes from as well. My
guess is that all that data is actually in the java.net.Socket classes,
as I am seeing the same results with the JIO connector, but not with
APR (because APR allocates memory using pools).
Btw, I had to put the byte[] buffer back into
InternalNioOutputBuffer.java; ByteBuffers are way too slow.

With APR, I think the connections might be lingering too long, as
eventually, during my test, it stops accepting connections. Usually
around the 89th iteration of the test.
I'm gonna keep working on this for a bit, as I think I am getting to a
point with the NIO connector where it is a viable alternative.

Filip
