Hi,

On Sun, 2019-03-24 at 17:56 +0100, Felix Fietkau wrote:
> Since we're freeing multiple skbs, we might as well use bulk free to save a
> few cycles. Use the same conditions for bulk free as in napi_consume_skb.
> 
> Signed-off-by: Felix Fietkau <n...@nbd.name>
> ---
> v2: call kmem_cache_free_bulk once the skb array is full instead of
>     falling back to kfree_skb
>  net/core/skbuff.c | 40 ++++++++++++++++++++++++++++++++++++----
>  1 file changed, 36 insertions(+), 4 deletions(-)
> 
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 2415d9cb9b89..1eeaa264d2a4 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -666,12 +666,44 @@ EXPORT_SYMBOL(kfree_skb);
>  
>  void kfree_skb_list(struct sk_buff *segs)
>  {
> -     while (segs) {
> -             struct sk_buff *next = segs->next;
> +     struct sk_buff *next = segs;
> +     void *skbs[16];
> +     int n_skbs = 0;
>  
> -             kfree_skb(segs);
> -             segs = next;
> +     while ((segs = next) != NULL) {
> +             next = segs->next;
> +
> +             if (!skb_unref(segs))
> +                     continue;
> +
> +             if (fclone != SKB_FCLONE_UNAVAILABLE) {
> +                     kfree_skb(segs);
> +                     continue;
> +             }

I think you should swap the order of skb_unref() and the above check,
or skbs with 'fclone != SKB_FCLONE_UNAVAILABLE' will go through
skb_unref() twice (kfree_skb() calls skb_unref(), too).

Other than that LGTM,

Thanks,

Paolo
