On Wed, Jun 24, 2020 at 03:06:10PM -0600, Jason A. Donenfeld wrote:
> Hi Alexander,
> 
> This patch introduced a behavior change around GRO_DROP:
> 
> napi_skb_finish used to sometimes return GRO_DROP:
> 
> > -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb)
> > +static gro_result_t napi_skb_finish(struct napi_struct *napi,
> > +                               struct sk_buff *skb,
> > +                               gro_result_t ret)
> >  {
> >     switch (ret) {
> >     case GRO_NORMAL:
> > -           if (netif_receive_skb_internal(skb))
> > -                   ret = GRO_DROP;
> > +           gro_normal_one(napi, skb);
> >
> 
> But under your change, gro_normal_one and the various calls it makes
> never propagate a drop status, so GRO_DROP is never returned to the
> caller, even if something drops the skb.
> 
> Was this intentional? Or should I start looking into how to restore it?
> 
> Thanks,
> Jason
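
For reference, gro_normal_one() in net/core/dev.c is roughly the
following (paraphrased from memory, not verbatim source); it only
queues the skb for batched delivery and returns void, so there is no
status for napi_skb_finish() to hand back:

/* Queue one GRO_NORMAL skb for batched delivery; flush the batch once
 * it reaches gro_normal_batch. Nothing here can report a drop.
 */
static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
{
	list_add_tail(&skb->list, &napi->rx_list);
	napi->rx_count++;
	if (napi->rx_count >= gro_normal_batch)
		gro_normal_list(napi);	/* ends in void netif_receive_skb_list_internal() */
}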

For some context, I'm consequently mulling over the following change to
my code, since the GRO_DROP check there is now dead code:

diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
index 91438144e4f7..9b2ab6fc91cd 100644
--- a/drivers/net/wireguard/receive.c
+++ b/drivers/net/wireguard/receive.c
@@ -414,14 +414,8 @@ static void wg_packet_consume_data_done(struct wg_peer *peer,
        if (unlikely(routed_peer != peer))
                goto dishonest_packet_peer;

-       if (unlikely(napi_gro_receive(&peer->napi, skb) == GRO_DROP)) {
-               ++dev->stats.rx_dropped;
-               net_dbg_ratelimited("%s: Failed to give packet to userspace from peer %llu (%pISpfsc)\n",
-                                   dev->name, peer->internal_id,
-                                   &peer->endpoint.addr);
-       } else {
-               update_rx_stats(peer, message_data_len(len_before_trim));
-       }
+       napi_gro_receive(&peer->napi, skb);
+       update_rx_stats(peer, message_data_len(len_before_trim));
        return;

 dishonest_packet_peer:
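
(In other words: with GRO_DROP unreachable, update_rx_stats() runs
unconditionally, and a packet dropped later on is accounted, if at all,
by the core stack rather than by wireguard's rx_dropped counter.)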
