------- Comment From sikri...@in.ibm.com 2017-11-20 04:13 EDT-------
Hi Daniel,
When largesend offload is enabled in AIX, the MTU still stays at 1500 (when jumbo frames are disabled) or 9000 (when jumbo frames are enabled). So, when a workload pushes data larger than the MTU, the AIX LPAR sends the bigger payload with an MSS value based on the MTU, and that MSS value is what the adapter later uses for segmentation. By default AIX does not set the MTU to 64k when largesend is enabled. In the scenario described in the bnx2x driver issue, I suspect the end user manually changed the MTU to a bigger value (~64k or so); otherwise we should not be seeing this issue.

Now, coming back to your question about what happens if the user configures a bigger MTU value (say, 64k): the AIX LPAR will send the bigger payload to the VIOS with an MSS value of ~64k, the physical NICs in the VIOS will drop the packet, and the AIX LPAR will retransmit. After a certain number of retransmissions, the AIX LPAR disables largesend offload for that specific connection and the data flow then goes through fine. So, in the event of user misconfiguration, I agree there will be a performance impact. This issue may also happen in a non-virtualized environment, when the end user sets a higher MTU than the one supported by the physical adapter; there the driver/adapter may drop the packet, leading to retransmissions.

Regards,
Siva K

--
https://bugs.launchpad.net/bugs/1692538

Title:
  Ubuntu 16.04.02: ibmveth: Support to enable LSO/CSO for Trunk VEA

Status in The Ubuntu-power-systems project:
  In Progress
Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  In Progress
Status in linux source package in Zesty:
  Fix Released
Status in linux source package in Artful:
  Fix Released

Bug description:
  == SRU Justification ==
  Commit 66aa0678ef is a request to fix four issues in the ibmveth driver:

  - Issue 1: ibmveth doesn't support the largesend and checksum offload features when configured as "Trunk".
  - Issue 2: SYN packet drops are seen at the destination VM. When the packet originates, it has the CHECKSUM_PARTIAL flag set; as it gets delivered to the IO server's inbound Trunk ibmveth, on validating the "checksum good" bits in the ibmveth receive routine, the SKB's ip_summed field is set to CHECKSUM_UNNECESSARY.
  - Issue 3: The first packet of a TCP connection is dropped if there is no OVS flow cached in the datapath.
  - Issue 4: The ibmveth driver has no support for SKBs with a frag_list.

  The details of the fixes for these issues are described in the commit's git log.

  == Comment: #0 - BRYANT G. LY <b...@us.ibm.com> - 2017-05-22 08:40:16 ==
  ---Problem Description---
  - Issue 1: ibmveth doesn't support the largesend and checksum offload features when configured as "Trunk". The driver has explicit checks to prevent enabling these offloads.
  - Issue 2: SYN packet drops are seen at the destination VM. When the packet originates, it has the CHECKSUM_PARTIAL flag set; as it gets delivered to the IO server's inbound Trunk ibmveth, on validating the "checksum good" bits in the ibmveth receive routine, the SKB's ip_summed field is set to CHECKSUM_UNNECESSARY. This packet is then bridged by OVS (or Linux Bridge) and delivered to the outbound Trunk ibmveth. At this point the outbound ibmveth transmit routine will not set the "no checksum" and "checksum good" bits in the transmit buffer descriptor, since it does so only when the ip_summed field is CHECKSUM_PARTIAL. When this packet reaches the destination VM, the TCP layer receives it with a checksum value of 0 and with no checksum-related flags in the ip_summed field, so the packet is dropped and TCP connections never go through.
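  For context on the ip_summed handling described in Issue 2, the following is a minimal, illustrative sketch (in kernel C style) of how a virtual NIC transmit path typically decides whether to set hardware checksum bits in a transmit descriptor. The tx_desc structure and the DESC_FLAG_* names are hypothetical stand-ins for the real ibmveth buffer descriptor fields; CHECKSUM_PARTIAL and CHECKSUM_UNNECESSARY are the actual kernel constants the bug refers to.

  #include <linux/types.h>
  #include <linux/skbuff.h>

  /* Hypothetical descriptor flags -- not the real ibmveth layout. */
  #define DESC_FLAG_NO_CSUM   (1U << 0)  /* ask firmware to skip checksumming */
  #define DESC_FLAG_CSUM_GOOD (1U << 1)  /* declare the checksum already valid */

  struct tx_desc {                       /* hypothetical descriptor */
          u32 flags;
          /* address/length fields elided */
  };

  static void fill_tx_desc_csum(struct tx_desc *desc, const struct sk_buff *skb)
  {
          if (skb->ip_summed == CHECKSUM_PARTIAL) {
                  /* The stack asked for checksum offload, so the driver
                   * marks the descriptor accordingly. */
                  desc->flags |= DESC_FLAG_NO_CSUM | DESC_FLAG_CSUM_GOOD;
          }
          /* A bridged packet arriving with CHECKSUM_UNNECESSARY (the Trunk
           * case in Issue 2) skips this branch, the bits stay clear, and the
           * frame leaves with a zero TCP checksum -- the drop the bug
           * describes. */
  }

  This fragment only illustrates why a CHECKSUM_UNNECESSARY packet falls through; the actual fix is the one in commit 66aa0678ef and the upstream patch linked below.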
  - Issue 3: The first packet of a TCP connection is dropped if there is no OVS flow cached in the datapath. While trying to identify the flow, OVS computes the checksum. The computed checksum is invalid at the receiving end because the ibmveth transmit routine zeroes out the pseudo-header checksum value in the packet, so the packet is dropped.
  - Issue 4: The ibmveth driver has no support for SKBs with a frag_list. When the physical NIC has GRO enabled and OVS bridges these packets, the OVS vport send code ends up calling dev_queue_xmit, which in turn calls validate_xmit_skb. In validate_xmit_skb, larger packets are segmented into MSS-sized segments if the SKB has a frag_list and the driver they are delivered to does not advertise the NETIF_F_FRAGLIST feature (a sketch of how a driver advertises that feature is included at the end of this message).

  Contact Information = Bryant G. Ly/b...@us.ibm.com

  ---uname output---
  4.8.0-51.54

  Machine Type = p8

  ---Debugger---
  A debugger is not configured

  ---Steps to Reproduce---
  Increases performance greatly

  The patch has been accepted upstream:
  https://patchwork.ozlabs.org/patch/764533/
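  As a companion to Issue 4 above, here is a minimal sketch, assuming a generic virtual Ethernet driver, of where NETIF_F_FRAGLIST is typically advertised. The function name and surrounding setup are hypothetical, not the actual ibmveth probe code; the NETIF_F_* flags are real kernel feature bits. Advertising NETIF_F_FRAGLIST tells the core that the driver can handle an SKB whose data is chained through skb_shinfo(skb)->frag_list, so GRO'd packets bridged in from a physical NIC are not software-segmented before they reach the driver's transmit routine.

  #include <linux/netdevice.h>

  /* Hypothetical setup fragment -- illustrative only. */
  static void example_set_offload_features(struct net_device *netdev)
  {
          netdev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM |
                                NETIF_F_IPV6_CSUM | NETIF_F_TSO |
                                NETIF_F_FRAGLIST;
          netdev->features |= netdev->hw_features;
  }

  This only shows where such a flag is declared; the real changes to ibmveth are in the upstream patch linked above.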