On Fri, Jun 26, 2015 at 12:31 PM, Ramu Ramamurthy <srama...@linux.vnet.ibm.com> wrote:
> On 2015-06-26 11:04, Tom Herbert wrote:
>>>
>>> I am testing the simplest configuration: a single TCP flow generated
>>> by iperf from a VM connected to a Linux bridge with a vxlan tunnel
>>> interface. The 10G NIC (82599 ES) has multiple receive queues, but in
>>> this simple test that is likely immaterial (the tuple it hashes on is
>>> fixed). The real difference in performance appears to be whether or
>>> not vxlan GRO is performed in software.
>>>
>>
>> Please run "ethtool -k vxlan0" (or whatever the vxlan interface is).
>> Ensure GRO is "on"; if not, enable it with "ethtool -K vxlan0 gro on".
>> Run iperf and tcpdump on the vxlan interface to verify GRO is being
>> done. If we are seeing performance degradation when GRO is done at the
>> tunnel versus at the device, that would be a different problem than no
>> GRO being done at all.
>
> Here are more details on the test.
>
> GRO is "on" on both the device and the tunnel. tcpdump on the vxlan
> interface shows un-aggregated packets:
>
> [root@ramu1 tracing]# tcpdump -i vxlan0
> <snip>
> ptions [nop,nop,TS val 1972850548 ecr 193703], length 1398
> 14:14:38.911955 IP 1.1.1.21.44134 > 1.1.1.11.commplex-link: Flags [.], seq
> 224921449:224922847, ack 1, win 221, options [nop,nop,TS val 1972850548 ecr
Looks like GRO was never implemented for vxlan tunnels. The driver simply
calls netif_rx() instead of using the GRO cells infrastructure; geneve does
the same thing. For the other tunnels used in foo-over-udp (GRE, IPIP, SIT),
ip_tunnel_rcv() is called, which in turn calls gro_cells_receive().
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
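
To illustrate the point above, here is a rough (non-compilable) sketch of
how a vxlan-style receive path could hand decapsulated packets to the GRO
cells infrastructure instead of netif_rx(), mirroring what ip_tunnel_rcv()
already does for GRE/IPIP/SIT. The gro_cells_init/gro_cells_receive/
gro_cells_destroy calls are the real kernel API from <net/gro_cells.h>;
the surrounding struct and function names are purely illustrative, not
actual vxlan driver code:

```c
/* Sketch only -- assumed private-data layout, not the real vxlan struct. */
#include <net/gro_cells.h>

struct vxlan_priv_sketch {
	struct gro_cells gro_cells;
	struct net_device *dev;
};

/* At device setup: allocate the per-cpu GRO cells. */
static int vxlan_sketch_init(struct vxlan_priv_sketch *priv)
{
	return gro_cells_init(&priv->gro_cells, priv->dev);
}

/* In the decap receive path: replace netif_rx(skb) with this, so the
 * inner packets get a chance to be aggregated by GRO. */
static void vxlan_sketch_deliver(struct vxlan_priv_sketch *priv,
				 struct sk_buff *skb)
{
	gro_cells_receive(&priv->gro_cells, skb);
}

/* At teardown: free the cells. */
static void vxlan_sketch_uninit(struct vxlan_priv_sketch *priv)
{
	gro_cells_destroy(&priv->gro_cells);
}
```

With this pattern, aggregation happens per-cpu at the tunnel device, the
same way the ip_tunnel_rcv() path gets GRO for foo-over-udp tunnels.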