Hi Ivan, Tariq,

> >>>+ [...]
> >>
> >> What would you recommend to do for the following situation:
> >>
> >> Same receive queue is shared between 2 network devices. The receive
> >> ring is filled by pages from page_pool, but you don't know the actual
> >> port (ndev) filling this ring, because a device is recognized only
> >> after a packet is received.
> >>
> >> The API is such that the xdp rxq is bound to a network device and each
> >> frame has a reference on it, so the rxq ndev must be static. That means
> >> each netdev has its own rxq instance even if there is no need for it.
> >> Thus, after your changes, a page must be returned to the pool it was
> >> taken from, or released from the old pool and recycled in the new one
> >> somehow.
> >>
> >> And that is an inconvenience at least. It's hard to move pages between
> >> pools w/o a performance penalty. No way to use a common pool either,
> >> as unreg_rxq now drops the pool and 2 rxqs can't reference the same
> >> pool.
> >>
> >
> > Within the single netdev, separate page_pool instances are anyway
> > created for different RX rings, working under different NAPIs.
>
> The circumstances are such that the same RX ring is shared between 2
> netdevs... and the netdev can be known only after the descriptor/packet
> is received. Thus, while filling the RX ring, there is no actual device,
> but when a packet is received it has to be recycled to the appropriate
> net device pool. Before this change it made no difference from which
> pool the page was allocated to fill the RX ring, as there was no owner.
> After this change there is an owner - the netdev page pool.
>
> For cpsw the dma unmap is common for both netdevs and it makes no
> difference who frees the page, but it does make a difference which pool
> it's freed to.

Since 2 netdevs are sharing one queue, you'll need locking, right?
(Assuming that the rx-irq per device can end up on a different core.)

We discussed that ideally page pools should be allocated per hardware
queue. If you indeed need locking (and pay the performance penalty
anyway), I wonder if there's anything preventing you from keeping the
same principle, i.e. allocate a pool per queue and handle the recycling
to the proper ndev internally. That way only the first device will be
responsible for allocating/recycling/maintaining the pool state.
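Something along the lines of the rough sketch below, i.e. the queue (not
the ndev) owns the pool, the first port to open the queue creates it, and
both ports refill and recycle through it. The cpsw_shared_rxq / cpsw_rxq_*
names are made up for illustration and the page_pool helper signatures
differ a bit between kernel versions, so treat this as a sketch and not
actual cpsw code:

	#include <linux/dma-mapping.h>
	#include <linux/err.h>
	#include <net/page_pool.h>

	/* One RX queue shared by the two ports; the pool belongs to the queue. */
	struct cpsw_shared_rxq {
		struct page_pool *pool;		/* created once, used by both ndevs */
		struct device *dma_dev;
		unsigned int ring_size;
	};

	/* Called when a port opens the queue; only the first caller creates
	 * the pool (assume this runs under rtnl, so no locking is shown).
	 */
	static int cpsw_rxq_pool_get(struct cpsw_shared_rxq *rxq)
	{
		struct page_pool_params pp = {
			.order		= 0,
			.flags		= PP_FLAG_DMA_MAP,	/* pool does the DMA mapping */
			.pool_size	= rxq->ring_size,
			.nid		= NUMA_NO_NODE,
			.dev		= rxq->dma_dev,
			.dma_dir	= DMA_BIDIRECTIONAL,
		};

		if (rxq->pool)
			return 0;		/* second port just reuses it */

		rxq->pool = page_pool_create(&pp);
		if (IS_ERR(rxq->pool)) {
			int err = PTR_ERR(rxq->pool);

			rxq->pool = NULL;
			return err;
		}
		return 0;
	}

	/* Refill path: no receiving ndev is known yet, and it does not matter. */
	static struct page *cpsw_rxq_refill(struct cpsw_shared_rxq *rxq)
	{
		return page_pool_dev_alloc_pages(rxq->pool);
	}

	/* Completion path: the port (ndev1/ndev2) is known here, but the page
	 * still goes back to the queue's pool, so nothing has to migrate
	 * between per-ndev pools.
	 */
	static void cpsw_rxq_recycle(struct cpsw_shared_rxq *rxq, struct page *page)
	{
		page_pool_recycle_direct(rxq->pool, page);
	}

Note the direct recycle only stays lock-free if both ports' completions run
in the one NAPI context that owns the pool; if the rx-irqs can land on
different cores you are back to the locking mentioned above.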
>
> So that, while filling the RX ring the page is taken from the page pool
> of ndev1, but the packet is received for ndev2; it has to be later
> returned/recycled to the page pool of ndev1, but when the xdp buffer is
> handed over to the xdp prog the xdp_rxq_info has a reference on ndev2 ...
>
> And there is no way to predict the final ndev before the packet is
> received, so no way to choose the appropriate page pool, now that it
> becomes the page owner.
>
> So, while filling the RX ring, the page/dma recycling is needed, but
> there should be some way to identify the page owner only after the
> packet is received.
>
> Roughly speaking, something like:
>
> pool->pages_state_hold_cnt++;
>
> outside of the page allocation API, after the packet is received,
> and freeing the counter at allocation time (w/o owning the page).

If handling it internally is not an option, maybe we can sort something
out for special devices.

Thanks
/Ilias
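For reference, the two-step accounting Ivan describes above could look
roughly like the pair of helpers below. Both helpers are hypothetical
(they are not part of the page_pool API) and they poke the pool's internal
hold counter directly, so this is only meant to make the idea concrete:

	#include <net/page_pool.h>

	/* Refill: draw a page from the queue's fill pool but immediately undo
	 * the hold accounting, so the fill pool does not become the owner.
	 * (Hypothetical helper, not part of the page_pool API.)
	 */
	static struct page *page_pool_alloc_unowned(struct page_pool *fill_pool)
	{
		struct page *page = page_pool_dev_alloc_pages(fill_pool);

		if (page)
			fill_pool->pages_state_hold_cnt--;	/* not owned yet */
		return page;
	}

	/* Rx completion: the receiving port is known now, so attribute the
	 * page to that port's pool by taking the hold there.  The page's pool
	 * binding used by the return path would also have to be switched at
	 * this point, which is the part the current API does not provide.
	 */
	static void page_pool_claim_page(struct page_pool *owner_pool,
					 struct page *page)
	{
		owner_pool->pages_state_hold_cnt++;
	}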