Hi Jesper,

> On Tue, 25 Jun 2019 18:06:18 +0300
> Ilias Apalodimas <ilias.apalodi...@linaro.org> wrote:
> 
> > @@ -1059,7 +1059,23 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv)
> >  static int netsec_setup_rx_dring(struct netsec_priv *priv)
> >  {
> >     struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
> > -   int i;
> > +   struct page_pool_params pp_params = { 0 };
> > +   int i, err;
> > +
> > +   pp_params.order = 0;
> > +   /* internal DMA mapping in page_pool */
> > +   pp_params.flags = PP_FLAG_DMA_MAP;
> > +   pp_params.pool_size = DESC_NUM;
> > +   pp_params.nid = cpu_to_node(0);
> > +   pp_params.dev = priv->dev;
> > +   pp_params.dma_dir = DMA_FROM_DEVICE;
> 
> I was going to complain about this DMA_FROM_DEVICE, until I noticed
> that in next patch you have:
> 
>  pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
True. Since the first patch only adds page_pool support, I wanted to make it clear
that DMA_BIDIRECTIONAL is only needed for the XDP use cases (especially XDP_TX).
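For reference, a minimal sketch of that selection (the netsec_rx_dma_dir() helper
name is just for illustration, the actual patch sets pp_params.dma_dir inline):

	#include <linux/bpf.h>		/* struct bpf_prog */
	#include <linux/dma-direction.h>	/* DMA_FROM_DEVICE, DMA_BIDIRECTIONAL */

	/* Illustrative helper, not part of the driver: pick the mapping
	 * direction for the RX page_pool.  Without an XDP program the
	 * device only writes into the buffers, so DMA_FROM_DEVICE is
	 * enough.  With XDP attached the same pages can be sent back out
	 * via XDP_TX, so they have to be mapped DMA_BIDIRECTIONAL.
	 */
	static enum dma_data_direction netsec_rx_dma_dir(struct bpf_prog *xdp_prog)
	{
		return xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
	}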

> 
> Making a note here to help other reviewers.
Thanks

> 
> > +   dring->page_pool = page_pool_create(&pp_params);
> > +   if (IS_ERR(dring->page_pool)) {
> > +           err = PTR_ERR(dring->page_pool);
> > +           dring->page_pool = NULL;
> > +           goto err_out;
> > +   }
> >  

Cheers
/Ilias
