Evgeniy Polyakov wrote:
> On Sat, Sep 30, 2006 at 12:16:44AM +0200, Brice Goglin ([EMAIL PROTECTED]) wrote:
>> Jeff Garzik wrote:
>>> Brice Goglin wrote:
>>>> The complete driver code in our CVS actually also supports high-order
>>>> allocations instead of single physical pages, since it significantly
>>>> increases performance. Order=2 allows us to receive standard frames
>>>> at line rate even on low-end hardware such as an AMD Athlon(tm) 64 X2
>>>> Dual Core Processor 3800+ (2.0GHz). Some customers might not care much
>>>> about memory fragmentation if the performance is better.
>>>>
>>>> But, since high-order allocations are generally considered a bad idea,
>>>> we do not include the relevant code in the following patch for inclusion
>>>> in Linux. Here, we simply pass order=0 to all page allocation routines.
>>>> If necessary, I could drop the remaining references to high-order
>>>> allocations (especially replace alloc_pages() with alloc_page()), but
>>>> I'd rather keep it as is.
>>>>
>>>> If high-order allocations are ever considered OK under some
>>>> circumstances, we could send an additional patch (a module parameter
>>>> would be used to switch from single physical pages to high-order pages).
>>
>> Any comments about what I was saying about high-order allocations above?
>
> It is quite strange that you see such a noticeable speed degradation after
> switching from higher-order to 0-order allocations. Could you specify where
> the observed bottleneck in the network stack is?
The bottleneck is not in the network stack; it is simply the number of page
allocations required. Since we store multiple fragments in the same page,
with MTU=1500 we need one order-0 allocation every 2 fragments, while we need
one order-2 allocation every 8 fragments. IIRC, we observed about 20% higher
throughput on the receive side when switching from order=0 to order=2
(7.5 Gbit/s -> 9.3 Gbit/s with roughly the same CPU usage).

Brice
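
[Editor's note: for illustration, a minimal sketch of the idea discussed
above, i.e. carving MTU-sized receive fragments out of one high-order page so
that fewer page allocations are needed per frame. This is not the actual
myri10ge code; the names (myri_rx_page, myri_rx_frag_get, MYRI_FRAG_SIZE) and
the use of GFP_ATOMIC / __GFP_COMP are assumptions made for the example.]

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/types.h>

#define MYRI_FRAG_SIZE 2048		/* room for a 1500-byte frame plus padding */

struct myri_rx_page {
	struct page *page;		/* backing page (possibly high-order) */
	unsigned int offset;		/* offset of the next free fragment */
	unsigned int order;		/* 0 => 2 frags/page, 2 => 8 frags/page */
};

/* Allocate the backing page; order would come from a module parameter. */
static int myri_rx_page_alloc(struct myri_rx_page *rxp, unsigned int order)
{
	rxp->page = alloc_pages(GFP_ATOMIC | __GFP_COMP, order);
	if (!rxp->page)
		return -ENOMEM;
	rxp->order = order;
	rxp->offset = 0;
	return 0;
}

/*
 * Hand out one fragment as (page, offset).  Returns false when the page is
 * exhausted and a fresh allocation is needed: every 2 fragments with order=0,
 * every 8 fragments with order=2 for MTU=1500.
 */
static bool myri_rx_frag_get(struct myri_rx_page *rxp,
			     struct page **page, unsigned int *offset)
{
	if (rxp->offset + MYRI_FRAG_SIZE > (PAGE_SIZE << rxp->order))
		return false;
	get_page(rxp->page);		/* each outstanding fragment holds a reference */
	*page = rxp->page;
	*offset = rxp->offset;
	rxp->offset += MYRI_FRAG_SIZE;
	return true;
}

With order=0 only two such fragments fit in a 4 KB page, while an order-2
(16 KB) page holds eight, which is where the factor-of-four reduction in
allocation calls comes from.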