On Fri, 25 Sep 2020 12:01:32 +0200
Lorenzo Bianconi <lore...@kernel.org> wrote:

> Try to recycle the xdp tx buffer into the in-irq page_pool cache if
> mvneta_txq_bufs_free is executed in the NAPI context.

NACK - I don't think this is safe.  That is also why I gave the
function the postfix rx_napi.  The page_pool->alloc.cache is associated
with the driver's RX-queue.  The xdp_frames that get freed could be
coming from a remote driver that uses page_pool.  This remote driver's
RX-queue processing can run concurrently on a different CPU than this
driver's TXQ-cleanup.

If you want to speed this up, I suggest instead that you add an
xdp_return_frame_bulk API.


> Signed-off-by: Lorenzo Bianconi <lore...@kernel.org>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 14df3aec285d..646fbf4ed638 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -1831,7 +1831,7 @@ static struct mvneta_tx_queue *mvneta_tx_done_policy(struct mvneta_port *pp,
>  /* Free tx queue skbuffs */
>  static void mvneta_txq_bufs_free(struct mvneta_port *pp,
>                                struct mvneta_tx_queue *txq, int num,
> -                              struct netdev_queue *nq)
> +                              struct netdev_queue *nq, bool napi)
>  {
>       unsigned int bytes_compl = 0, pkts_compl = 0;
>       int i;
> @@ -1854,7 +1854,10 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
>                       dev_kfree_skb_any(buf->skb);
>               } else if (buf->type == MVNETA_TYPE_XDP_TX ||
>                          buf->type == MVNETA_TYPE_XDP_NDO) {
> -                     xdp_return_frame(buf->xdpf);
> +                     if (napi)
> +                             xdp_return_frame_rx_napi(buf->xdpf);
> +                     else
> +                             xdp_return_frame(buf->xdpf);
>               }
>       }
>  
> @@ -1872,7 +1875,7 @@ static void mvneta_txq_done(struct mvneta_port *pp,
>       if (!tx_done)
>               return;
>  
> -     mvneta_txq_bufs_free(pp, txq, tx_done, nq);
> +     mvneta_txq_bufs_free(pp, txq, tx_done, nq, true);
>  
>       txq->count -= tx_done;
>  
> @@ -2859,7 +2862,7 @@ static void mvneta_txq_done_force(struct mvneta_port *pp,
>       struct netdev_queue *nq = netdev_get_tx_queue(pp->dev, txq->id);
>       int tx_done = txq->count;
>  
> -     mvneta_txq_bufs_free(pp, txq, tx_done, nq);
> +     mvneta_txq_bufs_free(pp, txq, tx_done, nq, false);
>  
>       /* reset txq */
>       txq->count = 0;



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer