On Fri,  6 Nov 2020 19:19:07 +0100
Lorenzo Bianconi <lore...@kernel.org> wrote:

> +void xdp_return_frame_bulk(struct xdp_frame *xdpf,
> +                        struct xdp_frame_bulk *bq)
> +{
> +     struct xdp_mem_info *mem = &xdpf->mem;
> +     struct xdp_mem_allocator *xa;
> +
> +     if (mem->type != MEM_TYPE_PAGE_POOL) {
> +             __xdp_return(xdpf->data, &xdpf->mem, false);
> +             return;
> +     }
> +
> +     rcu_read_lock();

This rcu_read_lock() shows up in my performance benchmarks, and is
unnecessary on the fast path, as in most drivers the DMA-TX
completion path already runs under rcu_read_lock.
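
Something like the following (completely untested sketch; the
RCU_LOCKDEP_WARN check is my suggestion, not part of the patch) would
keep the fast path lockless and document the requirement on callers:

	if (mem->type != MEM_TYPE_PAGE_POOL) {
		__xdp_return(xdpf->data, &xdpf->mem, false);
		return;
	}

-	rcu_read_lock();
+	/* Caller must invoke this under rcu_read_lock(); most driver
+	 * DMA-TX completion paths already do.
+	 */
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
+			 "xdp_return_frame_bulk() requires rcu_read_lock()");
	...
-	rcu_read_unlock();

Drivers that do not already hold it could then take
rcu_read_lock()/rcu_read_unlock() once around the whole TX completion
loop, instead of per frame.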

> +     xa = bq->xa;
> +     if (unlikely(!xa)) {
> +             xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> +             bq->count = 0;
> +             bq->xa = xa;
> +     }
> +
> +     if (bq->count == XDP_BULK_QUEUE_SIZE)
> +             xdp_flush_frame_bulk(bq);
> +
> +     if (mem->id != xa->mem.id) {
> +             xdp_flush_frame_bulk(bq);
> +             bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> +     }
> +
> +     bq->q[bq->count++] = xdpf->data;
> +
> +     rcu_read_unlock();
> +}
> +EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer