Hello folks,

I've observed a very interesting performance characteristic. Sometimes net_device drivers need to "do something to a packet" and then "send it through a tunnel". This looks like:
    expensive_transformation(skb1);
    udp_tunnel_xmit_skb(skb1);
    expensive_transformation(skb2);
    udp_tunnel_xmit_skb(skb2);
    expensive_transformation(skb3);
    udp_tunnel_xmit_skb(skb3);
    expensive_transformation(skb4);
    udp_tunnel_xmit_skb(skb4);
    expensive_transformation(skb5);
    udp_tunnel_xmit_skb(skb5);

It turns out, however, that we gain a significant performance increase (300 Mbps on my laptop) by doing all the xmits in a row, like this:

    expensive_transformation(skb1);
    expensive_transformation(skb2);
    expensive_transformation(skb3);
    expensive_transformation(skb4);
    expensive_transformation(skb5);
    udp_tunnel_xmit_skb(skb1);
    udp_tunnel_xmit_skb(skb2);
    udp_tunnel_xmit_skb(skb3);
    udp_tunnel_xmit_skb(skb4);
    udp_tunnel_xmit_skb(skb5);

Now practically speaking, it's not that hard to implement the latter, more performant variant: devices can simply opt in to receiving GSO superpackets and then submit the GSO-split packets all together in batches. Implementation is not the issue.

But this does leave me wondering why the performance is better this way. One theory is that it has something to do with NAPI polling for tx buffers at intervals: the fuller the buffer is when it gets examined, the better, since that means fewer poll passes are needed later. But this is just a theory, and I really have no idea.

I was wondering whether anybody reading this message thinks, "oh, duh, it's of course because of XYZ. You should always do ABC, and you can improve things further if you do 123 too." I'd be quite interested anyhow.

Thanks,
Jason
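P.S. For concreteness, here is roughly the shape of the batched variant I have in mind. This is only a sketch: it assumes the device advertises software GSO (e.g. NETIF_F_GSO_SOFTWARE) so that ndo_start_xmit sees one large superpacket, and expensive_transformation() and tunnel_xmit_one() are hypothetical stand-ins, with the latter standing in for the real udp_tunnel_xmit_skb() call plus the route, addresses, and ports it needs.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static netdev_tx_t xmit_batched(struct sk_buff *skb, struct net_device *dev)
    {
            struct sk_buff *segs, *seg, *next;

            /* Assumes skb is a GSO superpacket; split it into wire-sized segments. */
            segs = skb_gso_segment(skb, 0);
            if (IS_ERR_OR_NULL(segs)) {
                    kfree_skb(skb);
                    return NETDEV_TX_OK;
            }
            consume_skb(skb);

            /* Pass 1: do all of the expensive work up front. */
            for (seg = segs; seg; seg = seg->next)
                    expensive_transformation(seg);

            /* Pass 2: push every segment into the tunnel back to back. */
            for (seg = segs; seg; seg = next) {
                    next = seg->next;
                    seg->next = NULL;
                    tunnel_xmit_one(seg); /* hypothetical wrapper around udp_tunnel_xmit_skb() */
            }

            return NETDEV_TX_OK;
    }

The point of the sketch is just the split into two passes, so that the xmits happen consecutively rather than interleaved with the expensive work; per-segment error handling and the tunnel plumbing are omitted.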