On 2018/09/11 20:07, Toshiaki Makita wrote:
> On 2018/09/11 19:27, Eric Dumazet wrote:
> ...
>> Fix would probably be:
>>
>> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
>> index 8d679c8b7f25c753d77cfb8821d9d2528c9c9048..96bd94480942b469403abf017f9f9d5be1e23ef5 100644
>> --- a/drivers/net/veth.c
>> +++ b/drivers/net/veth.c
>> @@ -602,9 +602,10 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget, unsigned int *xdp_xmit)
>>  			skb = veth_xdp_rcv_skb(rq, ptr, xdp_xmit);
>>  		}
>>  
>> -		if (skb)
>> +		if (skb) {
>> +			skb_orphan(skb);
>>  			napi_gro_receive(&rq->xdp_napi, skb);
>> -
>> +		}
>>  		done++;
>>  	}
>
> Considering commit 9c4c3252 ("skbuff: preserve sock reference when
> scrubbing the skb."), I'm not sure if we should unconditionally orphan
> the skb here.
> I was thinking I should call netif_receive_skb() for such packets
> instead of napi_gro_receive().
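
For reference, the alternative I was thinking of would look roughly
like this (untested sketch, not a real patch): bypass GRO only for
skbs that still carry a socket reference, instead of orphaning every
skb:

	/* Untested sketch: preserve the sock reference (see commit
	 * 9c4c3252) for skbs that still have one, and feed only
	 * sockless skbs to GRO.
	 */
	if (skb) {
		if (skb->sk)
			netif_receive_skb(skb);
		else
			napi_gro_receive(&rq->xdp_napi, skb);
	}
	done++;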

I tested TCP throughput within localhost with XDP enabled (with the
skb_orphan() fix applied):
GRO off: 4.7 Gbps
GRO on : 6.7 Gbps
Since the difference is not negligible, I'm making a patch that orphans
the skb as Eric suggested (but in veth_xdp_rcv_skb() instead).
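
The change I have in mind looks roughly like this (untested sketch;
the exact placement inside veth_xdp_rcv_skb() is still to be decided):

static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
					struct sk_buff *skb,
					unsigned int *xdp_xmit)
{
	...
	/* GRO assumes skbs are not owned by a socket; drop the sock
	 * reference before the skb is handed to the XDP program and,
	 * later, napi_gro_receive().
	 */
	skb_orphan(skb);
	...
}
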
Thanks!
--
Toshiaki Makita