David S. Miller wrote:
> From: Francois Romieu <[EMAIL PROTECTED]>
> Date: Fri, 9 Dec 2005 00:09:47 +0100
>> Rick Jones <[EMAIL PROTECTED]> :
>> [...]
>>> Does it really need to be particularly aggressive about that? How often
>>> are there great streams of small packets sitting in a socket buffer? One
>>> really only cares when the system starts getting memory challenged, right?
>>> Until then, does it really matter if there are 100 64-byte chunks of data
>>> sitting in 1500-byte buffers?
>> It is hard to believe that poor placement of data and extra bloat do not
>> matter.
> Also, perhaps Rick has never watched the packet trace while playing a
> networked game such as quake3. It's a constant full-on stream of
> 64-byte UDP packets ;-)
>
> Without copybreak, it's pretty easy to overrun the UDP socket's
> receive queue, and you'll lose game events when that happens.
>
> It's also not about a memory-challenged system; the individual
> socket buffer limits are what matter. And those are severely
> hampered when a 1500-byte buffer is holding a 64-byte packet.
> The socket gets charged for the 1500-byte buffer, and that charge
> counts towards the per-socket receive buffer limits.
Which then just has me asking whether the per-socket limits are appropriate
when the system isn't under memory pressure :) Otherwise it seems the
per-socket limits are creating artificial, local memory pressure.
rick
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html