Jesse Brandeburg wrote:

included netdev...

On 11/8/06, Jeff V. Merkey <[EMAIL PROTECTED]> wrote:


Is there a good reason the skb refill routine in e1000_alloc_rx_buffers
needs to go and touch and remap skb memory
on already loaded descriptors?  This seems extremely wasteful of
processor cycles when refilling the ring buffer.

I note that the architecture has changed and is now recycling buffers
from the rx_irq routine, and when the routine is called to refill the
ring buffers, a lot of wasteful and needless calls to map_skb occur.


We have to unmap the descriptor (or at least do
pci_dma_sync_single_for_cpu / pci_dma_sync_single_for_device) because
the DMA API says we can't be guaranteed the cacheable memory is
consistent until we do one of the aforementioned PCI DMA ops.

In the case I am referring to, the memory is already mapped by a
previous call, which means it may be getting mapped twice.

Jeff


We have to do *something* before we access it.  The simplest path is to
unmap it and then recycle/map it.

If you can show that it is faster to use pci_dma_sync_single_for_cpu
and friends I'd be glad to take a patch.
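For illustration, the sync-based alternative might look roughly like the
sketch below.  This is not the actual e1000 code; the field names
(adapter->pdev, buffer_info->dma, adapter->rx_buffer_len) follow the
e1000 driver's style but are assumptions here.  The idea is to keep the
existing DMA mapping across refills and only bracket CPU access with the
sync calls, instead of a full pci_unmap_single()/pci_map_single() pair:

```c
/* Sketch only: recycle an rx buffer without tearing down its mapping. */
static void example_recycle_rx_buffer(struct e1000_adapter *adapter,
                                      struct e1000_buffer *buffer_info)
{
        /* Hand the buffer to the CPU: after this call the DMA API
         * guarantees the cacheable memory is consistent, so the driver
         * may safely read or write the skb data. */
        pci_dma_sync_single_for_cpu(adapter->pdev, buffer_info->dma,
                                    adapter->rx_buffer_len,
                                    PCI_DMA_FROMDEVICE);

        /* ... inspect or reset the buffer contents here ... */

        /* Hand ownership back to the device before reposting the
         * descriptor; buffer_info->dma is reused as-is, saving the
         * unmap/remap cycle on every ring refill. */
        pci_dma_sync_single_for_device(adapter->pdev, buffer_info->dma,
                                       adapter->rx_buffer_len,
                                       PCI_DMA_FROMDEVICE);
}
```

Whether this actually wins depends on the platform: on coherent
architectures the sync ops are cheap, but on others they still flush or
invalidate the cache lines, so benchmarking would be needed.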

Hope this helps,
 Jesse
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
