On Wed, 14 Dec 2005, David S. Miller wrote:
> From: Matt Mackall <[EMAIL PROTECTED]>
> Date: Wed, 14 Dec 2005 19:39:37 -0800
>
> > I think we need a global receive pool and per-socket send pools.
>
> Mind telling everyone how you plan to make use of the global receive
> pool when the allocation ha
On Thu, 15 Dec 2005 06:42:45 +0100
Andi Kleen <[EMAIL PROTECTED]> wrote:
> On Wed, Dec 14, 2005 at 08:30:23PM -0800, David S. Miller wrote:
> > From: Matt Mackall <[EMAIL PROTECTED]>
> > Date: Wed, 14 Dec 2005 19:39:37 -0800
> >
> > > I think we need a global receive pool and per-socket send pool
On Wed, 14 Dec 2005 21:23:09 -0800 (PST)
"David S. Miller" <[EMAIL PROTECTED]> wrote:
> From: Matt Mackall <[EMAIL PROTECTED]>
> Date: Wed, 14 Dec 2005 21:02:50 -0800
>
> > There needs to be two rules:
> >
> > iff global memory critical flag is set
> > - allocate from the global critical receive
On Wed, Dec 14, 2005 at 09:23:09PM -0800, David S. Miller wrote:
> From: Matt Mackall <[EMAIL PROTECTED]>
> Date: Wed, 14 Dec 2005 21:02:50 -0800
>
> > There needs to be two rules:
> >
> > iff global memory critical flag is set
> > - allocate from the global critical receive pool on receive
> > -
David S. Miller wrote:
From: Matt Mackall <[EMAIL PROTECTED]>
Date: Wed, 14 Dec 2005 21:02:50 -0800
There needs to be two rules:
iff global memory critical flag is set
- allocate from the global critical receive pool on receive
- return packet to global pool if not destined for a socket with
On Wed, Dec 14, 2005 at 08:30:23PM -0800, David S. Miller wrote:
> From: Matt Mackall <[EMAIL PROTECTED]>
> Date: Wed, 14 Dec 2005 19:39:37 -0800
>
> > I think we need a global receive pool and per-socket send pools.
>
> Mind telling everyone how you plan to make use of the global receive
> pool
From: Matt Mackall <[EMAIL PROTECTED]>
Date: Wed, 14 Dec 2005 21:02:50 -0800
> There needs to be two rules:
>
> iff global memory critical flag is set
> - allocate from the global critical receive pool on receive
> - return packet to global pool if not destined for a socket with an
> attached s
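A userspace sketch of the two rules quoted above; every name here (memory_critical, critical_pool_alloc, rx_deliver, the critical flag on the socket) is a hypothetical stand-in for whatever the real patch defines, not actual kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical userspace model of the proposed rules; not kernel code. */

static bool memory_critical;              /* global "memory critical" flag */

struct sock_model { bool critical; };     /* stand-in for struct sock */

static void *critical_pool_alloc(size_t n) { return malloc(n); }
static void critical_pool_free(void *p)    { free(p); }

/* Rule 1: iff the global flag is set, receive buffers come from the
 * global critical receive pool. */
static void *rx_alloc(size_t n)
{
    if (memory_critical)
        return critical_pool_alloc(n);
    return malloc(n);                     /* normal allocator otherwise */
}

/* Rule 2: a buffer drawn under pressure that is not destined for a
 * critical socket goes straight back to the global pool (packet drop). */
static bool rx_deliver(struct sock_model *sk, void *pkt)
{
    if (memory_critical && (sk == NULL || !sk->critical)) {
        critical_pool_free(pkt);          /* return memory, drop packet */
        return false;
    }
    return true;                          /* delivered to the socket */
}
```

In the real proposal the pool would be pre-allocated pages rather than malloc, and the drop would happen as early as the destination socket can be identified in the receive path.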
Added CC: to the [EMAIL PROTECTED] mailing list.
On Wed, 14 Dec 2005, jamal wrote:
On Wed, 2005-14-12 at 16:48 -0800, David S. Miller wrote:
Please have a look at:
http://bugzilla.kernel.org/show_bug.cgi?id=4952
It should look familiar.
It is - the soup nazi got involved on that
From: Matt Mackall <[EMAIL PROTECTED]>
Date: Wed, 14 Dec 2005 19:39:37 -0800
> I think we need a global receive pool and per-socket send pools.
Mind telling everyone how you plan to make use of the global receive
pool when the allocation happens in the device driver and we have no
idea which sock
On Wed, Dec 14, 2005 at 09:55:45AM -0800, Sridhar Samudrala wrote:
> On Wed, 2005-12-14 at 10:22 +0100, Andi Kleen wrote:
> > > I would appreciate any feedback or comments on this approach.
> >
> > Maybe I'm missing something but wouldn't you need a separate critical
> > pool (or at least reservation)
On Thu, 2005-12-15 at 00:07 +0100, Aritz Bastida wrote:
> > >rx_threshold_hit
> > Rx max coalescing frames threshold hit.
>
> Well, I didn't understand what this threshold is for
>
This counter counts the number of times rx packets have reached the "max
rx coalesced frames" setting before a
On Wed, 2005-14-12 at 21:15 -0500, Patrick McManus wrote:
> David S. Miller wrote:
> > From: John Ronciak <[EMAIL PROTECTED]>
> > Date: Wed, 7 Dec 2005 11:48:46 -0800
> >
> >> Copybreak probably shouldn't be used in routing use cases.
> >
> > I think even this is arguable, routers route a lot mor
On Wed, 2005-14-12 at 16:48 -0800, David S. Miller wrote:
> Please have a look at:
>
>http://bugzilla.kernel.org/show_bug.cgi?id=4952
>
> It should look familiar.
It is - the soup nazi got involved on that bug ;->
http://marc.theaimsgroup.com/?l=linux-netdev&m=113070963711648&w=2
> We
David S. Miller wrote:
From: John Ronciak <[EMAIL PROTECTED]>
Date: Wed, 7 Dec 2005 11:48:46 -0800
Copybreak probably shouldn't be used in routing use cases.
I think even this is arguable, routers route a lot more than
small 64-byte frames. Unfortunately, that is what everyone
uses for packe
James Courtier-Dutton wrote:
> When I had the conversation with Matt at KS, the problem we were trying
> to solve was "Memory pressure with network attached swap space".
s/swap space/writable filesystems/
You can hit these problems even if you have no swap. Too much of the
memory becomes filled
Please have a look at:
http://bugzilla.kernel.org/show_bug.cgi?id=4952
It should look familiar.
We were discussing this in depth a few weeks ago, but the
discussion tailed off and I don't know how close we came
to a consensus or what that consensus might be :-)
The crux of the matter, t
From: Stephen Hemminger <[EMAIL PROTECTED]>
Date: Tue, 13 Dec 2005 16:57:00 -0800
> Receiving VLAN packets over a device (without VLAN assist) that is
> doing hardware checksumming (CHECKSUM_HW), causes errors because the
> VLAN code forgets to adjust the hardware checksum.
>
> Signed-off-by: Ste
Changing the speed settings doesn't need to cause the link to go down/up.
It can be handled with the same logic as nway_reset.
Signed-off-by: Stephen Hemminger <[EMAIL PROTECTED]>
--- skge-2.6.orig/drivers/net/skge.c
+++ skge-2.6/drivers/net/skge.c
@@ -88,15 +88,14 @@ MODULE_DEVICE_TABLE(pci, skge_
--
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
There is no need to keep Yukon-2 related definitions around for the skge
driver, which is only for Yukon-1 and Genesis.
Signed-off-by: Stephen Hemminger <[EMAIL PROTECTED]>
--- skge-2.6.orig/drivers/net/skge.h
+++ skge-2.6/drivers/net/skge.h
@@ -475,18 +475,6 @@ enum {
Q_T2= 0x40, /* 32 bit
Changing the pause settings doesn't need to cause the link to go down/up.
It can be handled by the phy_reset code.
Signed-off-by: Stephen Hemminger <[EMAIL PROTECTED]>
--- skge-2.6.orig/drivers/net/skge.c
+++ skge-2.6/drivers/net/skge.c
@@ -504,10 +504,8 @@ static int skge_set_pauseparam(struct ne
Enough changes for one version.
Signed-off-by: Stephen Hemminger <[EMAIL PROTECTED]>
--- skge-2.6.orig/drivers/net/skge.c
+++ skge-2.6/drivers/net/skge.c
@@ -43,7 +43,7 @@
#include "skge.h"
#define DRV_NAME "skge"
-#define DRV_VERSION"1.2"
+#define DRV_VERSION
If changing the ring parameters fails to allocate memory, we need
to return an error and take the device down.
Fixes-bug: http://bugzilla.kernel.org/show_bug.cgi?id=5715
Signed-off-by: Stephen Hemminger <[EMAIL PROTECTED]>
--- skge-2.6.orig/drivers/net/skge.c
+++ skge-2.6/drivers/net/skge.c
@@ -
On Wed, 2005-12-14 at 14:39 -0800, Ben Greear wrote:
> James Courtier-Dutton wrote:
>
> > Have you actually thought about what would happen in a real-world scenario?
> > There is no real-world requirement for this sort of user-land feature.
> > In memory pressure mode, you don't care about user app
Changing the MTU size forces the receiver to reallocate buffers.
If this allocation fails, we need to return an error and take
the device offline. It can then be brought back up or reconfigured
for a smaller MTU.
Signed-off-by: Stephen Hemminger <[EMAIL PROTECTED]>
--- skge-2.6.ori
From: Hoerdt Mickael <[EMAIL PROTECTED]>
Date: Wed, 14 Dec 2005 23:38:56 +0100
> As implemented now, the default memory allocated in net.core.optmem_max
> permits joining up to 320 (S,G) channels per socket (for IPv6, each channel
> costs 32 bytes in net.core.optmem_max); the thing is that net.ip
Index: bic-2.6/include/linux/bitops.h
===
--- bic-2.6.orig/include/linux/bitops.h
+++ bic-2.6/include/linux/bitops.h
@@ -76,6 +76,15 @@ static __inline__ int generic_fls(int x)
*/
#include
+
+static inline int generic_fls64(__u64
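The diff above is cut off before the body of generic_fls64. As a hedged reconstruction of the idea (a 64-bit "find last set" built on the classic 32-bit generic_fls), here is a standalone userspace version; it is illustrative, not the exact patch text:

```c
#include <assert.h>
#include <stdint.h>

/* Classic branchless-ish "find last set" on 32 bits: returns the
 * 1-based position of the highest set bit, or 0 if x == 0. */
static int generic_fls(uint32_t x)
{
    int r = 32;

    if (!x)
        return 0;
    if (!(x & 0xffff0000u)) { x <<= 16; r -= 16; }
    if (!(x & 0xff000000u)) { x <<= 8;  r -= 8; }
    if (!(x & 0xf0000000u)) { x <<= 4;  r -= 4; }
    if (!(x & 0xc0000000u)) { x <<= 2;  r -= 2; }
    if (!(x & 0x80000000u)) { x <<= 1;  r -= 1; }
    return r;
}

/* 64-bit variant: check the high word first, fall back to the low word. */
static int generic_fls64(uint64_t x)
{
    uint32_t h = (uint32_t)(x >> 32);

    if (h)
        return generic_fls(h) + 32;
    return generic_fls((uint32_t)x);
}
```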
Here's the answer I gave before, in patch form (64, instead of 256). :-)
I'd like to see (or do) some measurements of scaling before
bumping that up too much. My $0.02, worth every penny! :-)
+-DLS
Signed-off-by: David L Stevens <[EMAIL PROTECTED]>
--- l
This set of patches:
* precomputes constants used in TCP cubic
* uses Newton/Raphson for cube root
* adds find largest set bit 64 to make initial estimate
--
Stephen Hemminger <[EMAIL PROTECTED]>
OSDL http://developer.osdl.org/~shemminger
Revised version of patch to pre-compute values for TCP cubic.
* d32,d64 replaced with descriptive names
* cube_factor replaces
srtt[scaled by count] / HZ * ((1 << (10+2*BICTCP_HZ)) / bic_scale)
* beta_scale replaces
8*(BICTCP_BETA_SCALE+beta)/3/(BICTCP_BETA_SCALE-beta);
Sig
Index: net-2.6.16/include/asm-x86_64/bitops.h
===
--- net-2.6.16.orig/include/asm-x86_64/bitops.h
+++ net-2.6.16/include/asm-x86_64/bitops.h
@@ -340,6 +340,20 @@ static __inline__ unsigned long __ffs(un
return word;
}
+/*
+
Replace the cube root algorithm with a faster version using Newton-Raphson.
Surprisingly, doing the scaled div64_64 is faster than a true 64-bit
division on 64-bit CPUs.
Signed-off-by: Stephen Hemminger <[EMAIL PROTECTED]>
--- net-2.6.16.orig/net/ipv4/tcp_cubic.c
+++ net-2.6.16/net/ipv4/tcp_cubic.
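The technique the changelog describes (an initial estimate from the highest set bit, then Newton-Raphson iteration) can be illustrated with a small generic integer cube root in userspace C. This is a sketch of the method, not the actual tcp_cubic.c code, which works on scaled fixed-point values:

```c
#include <assert.h>
#include <stdint.h>

/* Integer cube root via Newton-Raphson: returns floor(cbrt(a)).
 * Sketch of the approach described in the patch, not kernel code. */
static uint64_t cbrt64(uint64_t a)
{
    uint64_t x, x1;
    int b = 0;

    if (a < 2)
        return a;                    /* cbrt(0)=0, cbrt(1)=1 */

    /* Initial overestimate from the highest set bit:
     * if a < 2^b then cbrt(a) < 2^(b/3 + 1). */
    for (uint64_t t = a; t; t >>= 1)
        b++;
    x = 1ULL << (b / 3 + 1);

    /* Newton-Raphson iteration: x <- (2x + a / x^2) / 3,
     * which decreases monotonically until it reaches the root. */
    for (;;) {
        x1 = (2 * x + a / (x * x)) / 3;
        if (x1 >= x)
            break;
        x = x1;
    }
    return x;
}
```

The iteration count is small (a handful of steps even for 64-bit inputs), which is why replacing a bisection-style loop with this pays off.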
Oh - and re: policy - my 802.11 qdisc first calls out to the tc classify
function - allowing the sysadmin to do what he wants, then if no class
is selected it has a default implementation that reflects the
appropriate 802.11 and WiFi specs for classification.
Of course another implementation would
Hello again,
Sorry Michael, but I am kind of a newbie in this subject and couldn't
understand everything you said clearly. I'm working on my "final year
project" (I think that's what it's called, I mean the project you do when
you finish your degree :P). The purpose of my project is to capture
and anal
Hi Jeremy,
I implemented this functionality in Devicescape's 802.11 stack.
The approach I took was for the driver to install a device specific
qdisc as the root qdisc on the device. This root qdisc's purpose is to
expose the hardware queues directly, so other qdiscs can be attached as
leaf qdiscs
James Courtier-Dutton wrote:
Have you actually thought about what would happen in a real-world scenario?
There is no real-world requirement for this sort of user-land feature.
In memory pressure mode, you don't care about user applications. In
fact, under memory pressure no user applications are
Hi david & all,
As implemented now, the default memory allocated in net.core.optmem_max
permits joining up to 320 (S,G) channels per socket (for IPv6, each channel
costs 32 bytes in net.core.optmem_max); the thing is that net.ipv6.mld_max_msf
sets a hard limit on it, so assuming that you don
Dave,
I tested these together, but let me know if you want me to
split these into a few pieces, though they'll probably conflict with
each other. :-)
The below "jumbo" patch fixes the following problems in MLDv2.
1) Add necessary "ntohs" to recent "pskb_may_pull" check [breaks
all
Sridhar Samudrala wrote:
On Wed, 2005-12-14 at 20:49 +0000, James Courtier-Dutton wrote:
Jesper Juhl wrote:
On 12/14/05, Sridhar Samudrala <[EMAIL PROTECTED]> wrote:
These set of patches provide a TCP/IP emergency communication mechanism that
could be used to guarantee high priority commun
On Wed, 2005-12-14 at 20:49 +0000, James Courtier-Dutton wrote:
> Jesper Juhl wrote:
> > On 12/14/05, Sridhar Samudrala <[EMAIL PROTECTED]> wrote:
> >
> >>These set of patches provide a TCP/IP emergency communication mechanism that
> >>could be used to guarantee high priority communications over a
This patch removes the unused function xdr_decode_string().
Signed-off-by: Adrian Bunk <[EMAIL PROTECTED]>
Acked-by: Neil Brown <[EMAIL PROTECTED]>
Acked-by: Charles Lever <[EMAIL PROTECTED]>
---
include/linux/sunrpc/xdr.h |1 -
net/sunrpc/xdr.c | 21 -
2 fi
From: Herbert Xu <[EMAIL PROTECTED]>
Date: Wed, 14 Dec 2005 23:16:29 +1100
> [GRE]: Fix hardware checksum modification
>
> The skb_postpull_rcsum introduced a bug to the checksum modification.
> Although the length pulled is offset bytes, the origin of the pulling
> is the GRE header, not the IP
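The class of bug being fixed is easiest to see with ones-complement checksum arithmetic in miniature. The userspace sketch below shows the general "postpull" idea: when bytes are pulled off the front of a packet whose hardware checksum covers them, their contribution must be subtracted from the running sum. Names are illustrative, the pull is assumed to be an even number of bytes (to keep 16-bit alignment), and this is not the kernel's csum implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fold bytes into a 16-bit ones-complement sum (big-endian pairing).
 * Illustrative model only; not the kernel's csum_partial. */
static uint32_t csum_add(uint32_t sum, const uint8_t *p, size_t len)
{
    size_t i;

    for (i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)p[i] << 8 | p[i + 1];
    if (len & 1)
        sum += (uint32_t)p[len - 1] << 8;
    while (sum >> 16)                      /* fold carries back in */
        sum = (sum & 0xffff) + (sum >> 16);
    return sum;
}

/* After pulling `len` bytes (len even) off the front, remove their
 * contribution: ones-complement subtraction is adding the complement. */
static uint32_t csum_postpull(uint32_t sum, const uint8_t *pulled, size_t len)
{
    uint32_t part = csum_add(0, pulled, len);

    sum += 0xffff - part;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return sum;
}
```

The bug described above was not in the arithmetic itself but in which bytes were accounted for: the pull starts at the GRE header, so subtracting from the wrong origin corrupts the sum even though each individual operation is correct.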
Jesper Juhl wrote:
On 12/14/05, Sridhar Samudrala <[EMAIL PROTECTED]> wrote:
These set of patches provide a TCP/IP emergency communication mechanism that
could be used to guarantee high priority communications over a critical socket
to succeed even under very low memory conditions that last for
Jesper Juhl wrote:
To be a little serious, it sounds like something that could be used to
cause trouble and something that will lose its usefulness once enough
people start using it (for valid or invalid reasons), so what's the
point...
It could easily be a user-configurable option in an appli
Carl-Daniel Hailfinger <[EMAIL PROTECTED]> :
[...]
> Performance with nttcp was approximately at 135 MBit/s in
> both directions.
>
> Both cards were connected directly with a CAT5e cable.
> Enabling/disabling NAPI didn't have any measurable effect.
>
> Are these results expected, and if so, is t
On 12/14/05, Sridhar Samudrala <[EMAIL PROTECTED]> wrote:
>
> These set of patches provide a TCP/IP emergency communication mechanism that
> could be used to guarantee high priority communications over a critical socket
> to succeed even under very low memory conditions that last for a couple of
>
On Wed, 2005-12-14 at 19:38 +0100, Aritz Bastida wrote:
> Thank you for your email. But could you tell me what RFC specifically?
> Is it RFC1284? The counters I am looking for are:
>
These are custom counters not from any RFCs.
>dma_writeq_full
DMA write queue full - meaning host is not re
> It has a lot
> more users that compete, true, but likely the set of GFP_CRITICAL users
> would grow over time too and it would develop the same problem.
No, because the critical set is determined by the user (by setting
the socket flag).
The receive side has some things marked as
Has anyone had a chance to review this patch and apply it? I would like
it to make 2.6.15 kernel since it is a bug related to TSO in the driver.
Thanks,
Ayaz
> Here we are assuming that the pre-allocated critical page pool is big enough
> to satisfy the requirements of all the critical sockets.
That seems like a lot of assumptions. Is it really better than the
existing GFP_ATOMIC which works basically the same? It has a lot
more users that compete tr
2005/12/14, Michael Chan <[EMAIL PROTECTED]>:
> On Wed, 2005-12-14 at 17:56 +0100, Aritz Bastida wrote:
>
> > How can I find the specs for the Tulip3 NIC?
> >
> Most of the statistics counters follow the MIB definitions in the RFCs.
> There are a few that are non-standard but should be self-explana
Sridhar Samudrala wrote:
> The only reason I made these macros is that I would expect this to be a
> compile-time configurable option so that there is zero overhead for regular users.
>
> #ifdef CONFIG_CRIT_SOCKET
> #define SK_CRIT_ALLOC(sk, flags) ((sk->sk_allocation & __GFP_CRITICAL) |
> flags)
>
On Wed, 2005-12-14 at 04:12 -0800, Mitchell Blank Jr wrote:
> Alan Cox wrote:
> > But your user space that would add the routes is not so protected so I'm
> > not sure this is actually a solution, more of an extended fudge.
>
> Yes, there's no 100% solution -- no matter how much memory you reserve
On Wed, 2005-12-14 at 11:17 +, Alan Cox wrote:
> On Mer, 2005-12-14 at 01:12 -0800, Sridhar Samudrala wrote:
> > Pass __GFP_CRITICAL flag with all allocation requests that are critical.
> > - All allocations needed to process incoming packets are marked as CRITICAL.
> > This includes the allo
On Wed, 2005-12-14 at 10:22 +0100, Andi Kleen wrote:
> > I would appreciate any feedback or comments on this approach.
>
> Maybe I'm missing something but wouldn't you need a separate critical
> pool (or at least reservation) for each socket to be safe against deadlocks?
>
> Otherwise if a critical s
On Wed, 2005-12-14 at 17:56 +0100, Aritz Bastida wrote:
> How can I find the specs for the Tulip3 NIC?
>
Most of the statistics counters follow the MIB definitions in the RFCs.
There are a few that are non-standard but should be self-explanatory.
Send me an email if you need more information on s
Michael Tokarev wrote:
[..]
> So the question is: is the setup like this one supposed to work at all
> in linux?
>
> I know there are other "less ugly" ways to achieve the same effect, eg
> by using GRE/IPIP tunnels and encapsulating the traffic into IPSEC (this
> way, we'll have only one transpo
jamal writes:
> Essentially the approach would be the same as Robert's old recycle patch
> where he doesn't recycle certain skbs - the only difference being in the
> case of forwarding, the recycle is done asynchronously at EOT whereas
> this is done synchronously upon return from host path.
Hello,
I've been reading the source code for the tg3 module (Broadcom Tigon3
Ethernet card) in the Linux kernel. Specifically, I need to access the
NIC specific statistics, since I have to measure the performance of a
server under heavy network loads. Although the statistics exported
with ethtool
Bernd Eckenfels wrote:
> Al Boldi wrote:
> > The current ip / ifconfig configuration is arcane and inflexible. The
> > reason being, that they are based on design principles inherited from
> > the last century.
>
> Yes I agree, however note that some of the assumptions are backed up and
> required
Herbert Xu wrote:
> Thanks. It turns out to be a bug in the GRE layer. I added that
> bug when I introduced skb_postpull_rcsum.
>
> [GRE]: Fix hardware checksum modification
>
> The skb_postpull_rcsum introduced a bug to the checksum modification.
> Although the length pulled is offset bytes, the
Mitchell Blank Jr wrote:
> Alan Cox wrote:
> > > +#define SK_CRIT_ALLOC(sk, flags) ((sk->sk_allocation & __GFP_CRITICAL) |
> > > flags)
> >
> > Lots of hidden conditional logic on critical paths.
>
> How expensive is it compared to the allocation itself?
Cost is readability here. You should ope
> can anyone point me to some good Linux memory management material? Actually I
> want to know about the conversion of virtual to physical addresses and when
> you need to do it.
The DMA chapter of LDD3 covers this topic in detail:
http://lwn.net/Kernel/LDD3/
jon
Jonathan Corbet
Executive editor, LWN.net
On Tue, Dec 13, 2005 at 06:30:38AM +, Paul Erkkila wrote:
>
> GRE tunnel.
>
> ip tunnel:
> tunnel0: gre/ip remote xx.xx.xx.xx local xx.xx.xx.xx ttl 255 key
> xx.xx.xx.xx
> Checksum in received packet is required.
> Checksum output packets.
Thanks. It turns out to be a bug in the GRE
Hi,
I was using UML 2.6.13.1 to listen to the RTMGRP_IPV6_ADDR netlink group, but
after updating to UML 2.6.14.3, I no longer received any messages. I also
tested "ip -6 monitor addr", but it remained silent as well.
However, "ip -4 monitor addr" works ok.
Checking the linux/rtnetlink.h, it seems li
Alan Cox wrote:
> But your user space that would add the routes is not so protected so I'm
> not sure this is actually a solution, more of an extended fudge.
Yes, there's no 100% solution -- no matter how much memory you reserve and
how many paths you protect if you try hard enough you can come up
David S. Miller wrote:
> From: Stephen Hemminger <[EMAIL PROTECTED]>
> Date: Mon, 12 Dec 2005 12:03:22 -0800
>
>
>>-d32 = d32 / HZ;
>>-
>> /* (wmax-cwnd) * (srtt>>3 / HZ) / c * 2^(3*bictcp_HZ) */
>>-d64 = (d64 * dist * d32) >> (count+3-BICTCP_HZ);
>>-
>>-/* cubic
On Mer, 2005-12-14 at 01:12 -0800, Sridhar Samudrala wrote:
> Pass __GFP_CRITICAL flag with all allocation requests that are critical.
> - All allocations needed to process incoming packets are marked as CRITICAL.
> This includes the allocations
> - made by the driver to receive incoming pac
> I would appreciate any feedback or comments on this approach.
Maybe I'm missing something, but wouldn't you need a separate critical
pool (or at least reservation) for each socket to be safe against deadlocks?
Otherwise if a critical socket needs e.g. 2 pages to finish something
and 2 critical sock
This set of patches provides a TCP/IP emergency communication mechanism that
can be used to guarantee that high-priority communications over a critical
socket succeed even under very low memory conditions that last for a couple of
minutes. It uses the critical page pool facility provided by Matt's
When the 'system_in_emergency' flag is set, drop any incoming packets that belong
to non-critical sockets as soon as we can determine the destination socket. This
is necessary to prevent incoming non-critical packets from consuming memory from
the critical page pool.
--
Introduce a new socket option SO_CRITICAL to mark a socket as critical.
This socket option takes an integer boolean flag that can be set using
setsockopt() and read with getsockopt().
---
include/asm-i386/socket.h|2 ++
in
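From an application's point of view, the option described above would be used like any other SOL_SOCKET boolean option. A hedged sketch: SO_CRITICAL does not exist in mainline kernels, and the numeric value below is a placeholder for illustration, not the number the patch assigns in asm-i386/socket.h:

```c
#include <sys/socket.h>

/* Placeholder value; the real one would come from the patched headers. */
#ifndef SO_CRITICAL
#define SO_CRITICAL 42
#endif

/* Mark a socket as critical, per the API described in the changelog:
 * an integer boolean flag set with setsockopt(). Returns 0 on success,
 * or -1 (e.g. ENOPROTOOPT) on kernels without the patch. */
static int mark_critical(int fd)
{
    int on = 1;

    return setsockopt(fd, SOL_SOCKET, SO_CRITICAL, &on, sizeof(on));
}
```

On an unpatched kernel the call simply fails, which an application could treat as "critical mode unavailable" and continue without it.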