On Tue, 2005-12-06 at 23:33 -0700, Grant Grundler wrote:
> On Tue, Dec 06, 2005 at 06:08:35PM -0500, jamal wrote:

> > All load goes onto CPU#0. I didn't try to tune or balance anything,
> > so the numbers could be better than those noted here
> 
> ok - that's fair. I suspect the hyperthreading case is one where
> binding the IRQs to particular "CPUs" is necessary to reproduce
> the results.
> 


Note: I didn't bind anything. The P4/Xeon starts out with everything
routed to CPU#0 - I just left it like that. I take it that this is what
you were asking about earlier?
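
If binding is wanted for a follow-up run, the usual way is to write a
CPU mask to /proc/irq/<n>/smp_affinity (e.g. a mask of 1 for CPU#0
only). Below is a minimal sketch in C that does the same thing; the IRQ
number 48 and the mask are placeholders for illustration, the real
values come from /proc/interrupts:

/* Sketch: pin an IRQ to one CPU by writing a hex mask to
 * /proc/irq/<irq>/smp_affinity. The IRQ number and mask defaults are
 * placeholders; read /proc/interrupts for the real e1000 IRQs. */
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *irq  = argc > 1 ? argv[1] : "48"; /* placeholder IRQ */
	const char *mask = argc > 2 ? argv[2] : "1";  /* CPU#0 only */
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%s\n", mask);
	return fclose(f) ? 1 : 0;
}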

> > All tests send exactly 10M packets as a burst at wire rate (1.488 Mpps)
> > on each interface.
> > Four runs are made; the last 3 of 4 are picked.
> > 
> > Results:
> > --------
> > 
> > kernel 2.6.11.7: 446 Kpps
> > kernel 2.6.14: 452 Kpps
> > kernel 2.6.14 with e1000-6.2.15: 470 Kpps
> > kernel 2.6.14 with e1000-6.2.15 but rx copybreak commented out: 460 Kpps
> > 
> > conclusion:
> > -----------
> > 
> > One could draw the conclusion that copybreak is good or prefetch is bad.
> 
> Like Robert, I conclude that both helped in this case.
> 

Eh? Are we looking at the same results? Robert's conclusion was:
copybreak _bad_, _some_ prefetch good. Mine so far is inconclusive,
although one could almost say copybreak is good - which is the opposite
of what Robert concluded.
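
For anyone not following the copybreak part: the idea is that for small
receives the driver copies the data into a freshly allocated small skb
and recycles the original rx buffer on the ring, instead of handing the
big buffer up the stack. Here is a minimal user-space sketch of that
decision with stub types - not e1000 code, and the 256-byte threshold
is just an example:

/* Sketch of the rx copybreak idea (stub types, not driver code):
 * below a threshold, copy the payload into a small fresh buffer so the
 * original receive buffer can be reposted to the ring; above it, hand
 * the original buffer up unchanged. */
#include <stdlib.h>
#include <string.h>

#define COPYBREAK 256	/* example threshold, not the driver's default */

struct buf {
	unsigned char *data;
	size_t len;
};

/* Returns the buffer to pass up the stack; *recycled is set when the
 * original rx buffer can be reused immediately. */
static struct buf *rx_copybreak(struct buf *rx, int *recycled)
{
	*recycled = 0;
	if (rx->len < COPYBREAK) {
		struct buf *small = malloc(sizeof(*small));
		if (!small)
			return rx;	/* allocation failed: fall back */
		small->data = malloc(rx->len);
		if (!small->data) {
			free(small);
			return rx;
		}
		memcpy(small->data, rx->data, rx->len);
		small->len = rx->len;
		*recycled = 1;		/* big buffer stays on the rx ring */
		return small;
	}
	return rx;			/* large packet: no copy */
}

int main(void)
{
	unsigned char payload[64] = { 0 };
	struct buf rx = { payload, sizeof(payload) };
	int recycled;
	struct buf *up = rx_copybreak(&rx, &recycled);

	/* 64 < COPYBREAK, so 'up' is a fresh copy and rx was recycled */
	return up == &rx;
}

Whether that extra memcpy is a win is presumably a cache question,
which is what the 470 vs 460 Kpps delta above is probing.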

cheers,
jamal
