Tom Herbert <t...@herbertland.com> wrote on Tue [2017-Feb-21 15:27:54 -0800]:
> On Tue, Feb 21, 2017 at 1:09 PM, Felix Manlunas
> <felix.manlu...@cavium.com> wrote:
> > From: VSR Burru <veerasenareddy.bu...@cavium.com>
> >
> > Improve UDP TX performance by:
> > * reducing the ring size from 2K to 512
>
> It looks like liquidio supports BQL. Is that not effective here?
Response from our colleague, VSR:

That's right, BQL is not effective here. We reduced the ring size because of heavy, intermittent overhead from dma_map_single. With iommu=on, a dma_map_single call in the PF Tx data path was taking a long time (~700 usec) roughly once every ~250 packets. We debugged the intel_iommu code and found that the PF driver uses too many static IO virtual address mapping entries (for gather-list entries and info buffers): about 100K entries for two PFs, each using 8 rings. Finding an empty entry (in the rbtree of the device domain's IOVA mappings) during the Tx path then becomes a recurring bottleneck: the search loop runs for over 40K iterations, which is too costly and was the major overhead. When that loop exits quickly, the overhead is low.
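To make the scaling concrete, here is a minimal sketch (not the actual liquidio driver code; the struct and function names and the ring-size constant are illustrative assumptions) of why the ring size bounds the number of live IOVA mappings: each in-flight Tx slot holds one dma_map_single mapping, so a 2K ring can pin roughly four times as many IOVA entries per ring as a 512-entry ring, and every new mapping has to find a free slot in the domain's IOVA tree.

/*
 * Illustrative sketch only -- not the liquidio Tx path.  Each Tx slot
 * keeps one DMA mapping alive until completion, so the ring size caps
 * how many IOVA entries the IOMMU domain must track for this queue.
 */
#include <linux/dma-mapping.h>

#define TX_RING_SIZE 512	/* was 2048; fewer slots => fewer live IOVA mappings */

struct tx_slot {			/* hypothetical per-descriptor state */
	void *gather_list;		/* per-packet gather list buffer */
	dma_addr_t gl_dma;
	size_t gl_len;
};

static int map_tx_slot(struct device *dev, struct tx_slot *slot)
{
	/*
	 * One IOVA allocation per packet.  With iommu=on this is where
	 * the search through the device domain's IOVA rbtree happens;
	 * the more outstanding mappings, the longer that search can take.
	 */
	slot->gl_dma = dma_map_single(dev, slot->gather_list,
				      slot->gl_len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, slot->gl_dma))
		return -ENOMEM;
	return 0;
}

static void unmap_tx_slot(struct device *dev, struct tx_slot *slot)
{
	/* Releasing mappings promptly keeps the domain's IOVA tree small. */
	dma_unmap_single(dev, slot->gl_dma, slot->gl_len, DMA_TO_DEVICE);
}

With two PFs and 8 rings each, the number of such slots (plus info buffers) multiplies quickly, which is how we arrived at the ~100K static mapping entries mentioned above.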