On 12/7/20 2:06 PM, Boris Pismenny wrote:
> From: Ben Ben-ishay
>
> NVMEoTCP direct data placement constructs an SKB from each CQE, while
> pointing at NVME buffers.
> This enables the offload, as the NVMe-TCP layer will skip the copy when
> src == dst.
>
> Signed-off-by: Boris Pismenny
> Signed-off-by: Ben Ben-Ishay
> Signed-off-by: Or Gerlitz
>
> [...]
>
> +struct sk_buff*
> +mlx5e_nvmeotcp_handle_rx_skb(struct net_device *netdev, struct sk_buff *skb,
> +			     struct mlx5_cqe64 *cqe, u32 cqe_bcnt,
> +			     bool linear)
> +{
> +	int ccoff, cclen, hlen, ccid, remain
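
Just to make sure I'm reading the design right: the CQE handler builds the
skb so that its fragments point at the NVMe destination buffers the ULP
already owns, and the NVMe-TCP copy path then sees src == dst and skips the
memcpy. A rough userspace model of that check (the struct and function names
below are mine for illustration, not from the patch):

#include <stdio.h>
#include <string.h>

/* Stand-in for one skb fragment handed up by the driver. */
struct frag {
	const char *data;	/* where the payload currently lives */
	size_t len;
};

/*
 * Stand-in for the ULP-side copy helper: if the fragment already points
 * at the destination buffer (because the NIC placed the payload there),
 * the memcpy is skipped.
 */
static void copy_frag_to_dst(char *dst, const struct frag *f)
{
	if (f->data == dst) {
		printf("DDP hit: %zu bytes already in place, no copy\n", f->len);
		return;
	}
	memcpy(dst, f->data, f->len);
	printf("no DDP: copied %zu bytes\n", f->len);
}

int main(void)
{
	char nvme_buf[16] = {0};	/* destination owned by the ULP */
	char wire_buf[16] = "payload";	/* non-offloaded skb data */

	/* Offloaded case: HW already wrote into nvme_buf, frag points at it. */
	memcpy(nvme_buf, "payload", 8);
	struct frag offloaded = { .data = nvme_buf, .len = 8 };
	copy_frag_to_dst(nvme_buf, &offloaded);

	/* Plain case: payload still sits in the TCP skb, copy is needed. */
	struct frag plain = { .data = wire_buf, .len = 8 };
	copy_frag_to_dst(nvme_buf, &plain);

	return 0;
}

In the offloaded case the check fires and no bytes move, which is where the
data-copy saving comes from; the plain case falls back to the normal memcpy.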