On Tue, Jul 09, 2019 at 12:31:54PM -0400, Andy Gospodarek wrote:
> On Tue, Jul 09, 2019 at 06:20:57PM +0300, Ilias Apalodimas wrote:
> > Hi,
> > 
> > > > Add page_pool_destroy() in bnxt_free_rx_rings() during normal RX ring
> > > > cleanup, as Ilias has informed us that the following commit has been
> > > > merged:
> > > > 
> > > > 1da4bbeffe41 ("net: core: page_pool: add user refcnt and reintroduce page_pool_destroy")
> > > > 
> > > > The special error handling code to call page_pool_free() can now be
> > > > removed.  bnxt_free_rx_rings() will always be called during normal
> > > > shutdown or any error paths.
> > > > 
> > > > Fixes: 322b87ca55f2 ("bnxt_en: add page_pool support")
> > > > Cc: Ilias Apalodimas <ilias.apalodi...@linaro.org>
> > > > Cc: Andy Gospodarek <go...@broadcom.com>
> > > > Signed-off-by: Michael Chan <michael.c...@broadcom.com>
> > > > ---
> > > >  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 8 ++------
> > > >  1 file changed, 2 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > > > index e9d3bd8..2b5b0ab 100644
> > > > --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > > > +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > > > @@ -2500,6 +2500,7 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
> > > >                 if (xdp_rxq_info_is_reg(&rxr->xdp_rxq))
> > > >                         xdp_rxq_info_unreg(&rxr->xdp_rxq);
> > > >  
> > > > +               page_pool_destroy(rxr->page_pool);
> > > >                 rxr->page_pool = NULL;
> > > >  
> > > >                 kfree(rxr->rx_tpa);
> > > > @@ -2560,19 +2561,14 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
> > > >                         return rc;
> > > >  
> > > >                 rc = xdp_rxq_info_reg(&rxr->xdp_rxq, bp->dev, i);
> > > > -               if (rc < 0) {
> > > > -                       page_pool_free(rxr->page_pool);
> > > > -                       rxr->page_pool = NULL;
> > > > +               if (rc < 0)
> > > >                         return rc;
> > > > -               }
> > > >  
> > > >                 rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
> > > >                                                 MEM_TYPE_PAGE_POOL,
> > > >                                                 rxr->page_pool);
> > > >                 if (rc) {
> > > >                         xdp_rxq_info_unreg(&rxr->xdp_rxq);
> > > > -                       page_pool_free(rxr->page_pool);
> > > > -                       rxr->page_pool = NULL;
> > > 
> > > Rather than deleting these lines it would also be acceptable to do:
> > > 
> > >                 if (rc) {
> > >                         xdp_rxq_info_unreg(&rxr->xdp_rxq);
> > > -                       page_pool_free(rxr->page_pool);
> > > +                       page_pool_destroy(rxr->page_pool);
> > >                         rxr->page_pool = NULL;
> > >                         return rc;
> > >                 }
> > > 
> > > but any time there is a failure in bnxt_alloc_rx_rings(), the driver will
> > > immediately follow it up with a call to bnxt_free_rx_rings(), so
> > > page_pool_destroy() will be called.
> > > 
> > > Thanks for pushing this out so quickly!
> > > 
> > 
> > I also can't find page_pool_release_page() or page_pool_put_page() called
> > when destroying the pool. Can you try to insmod -> do some traffic -> rmmod?
> > If there are stale buffers that haven't been unmapped properly, you'll get a
> > WARN_ON for them.
> 
> I did that test a few times with a few different bpf progs but I do not
> see any WARN messages.  Of course this does not mean that the code we
> have is 100% correct.
> 

I'll try to have a closer look as well

> Presumably you are talking about one of these messages, right?
> 
> 215         /* The distance should not be able to become negative */
> 216         WARN(inflight < 0, "Negative(%d) inflight packet-pages", inflight);
> 
> or
> 
> 356         /* Drivers should fix this, but only problematic when DMA is used */
> 357         WARN(1, "Still in-flight pages:%d hold:%u released:%u",
> 358              distance, hold_cnt, release_cnt);
> 

Yeah, particularly the second one. There's a counter we increase every time a
fresh page is allocated, and it needs to be decreased before freeing the whole
pool. page_pool_release_page() will do that, for example.

> 
> > This part was added later on in the API when Jesper fixed in-flight packet
> > handling

Thanks
/Ilias
