On Tue, Apr 12, 2022 at 02:33:05PM +0100, Robin Murphy wrote:
>On 12/04/2022 12:38 pm, Chao Gao wrote:
>> When we looked into FIO performance with swiotlb enabled in a VM, we found
>> swiotlb_bounce() is always called one more time than expected for each DMA
>> read request.
>>
>> It turns out that the bounce buffer is copied to the original DMA buffer
>> twice after the completion of a DMA request (one copy is done in
>> dma_direct_sync_single_for_cpu(), the other in swiotlb_tbl_unmap_single()).
>> But the content of the bounce buffer doesn't change between the two
>> rounds of copy, so one of them is redundant.
>>
>> Pass the DMA_ATTR_SKIP_CPU_SYNC flag to swiotlb_tbl_unmap_single() to
>> skip the memory copy in it.
>
>It's still a little suboptimal and non-obvious to call into SWIOTLB twice
>though - even better might be for SWIOTLB to call arch_sync_dma_for_cpu() at
>the appropriate place internally,
Hi Robin,

dma_direct_sync_single_for_cpu() also calls arch_sync_dma_for_cpu_all()
and arch_dma_mark_clean() in some cases. If SWIOTLB did the sync
internally, would these two functions need to be called by SWIOTLB as
well?

Personally, I think it would be better if swiotlb could just focus on
bounce buffer allocation/free; folding more DMA coherence logic into
swiotlb would complicate it.

How about an open-coded version of dma_direct_sync_single_for_cpu() in
dma_direct_unmap_page(), with swiotlb_sync_single_for_cpu() replaced by
swiotlb_tbl_unmap_single()? That keeps the coherence maintenance in
dma-direct and still does the bounce copy only once, at unmap time.
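Something like the below, against kernel/dma/direct.h - just a rough,
untested sketch to show what I mean, not a real patch:

static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	phys_addr_t phys = dma_to_phys(dev, addr);

	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
		/* open-coded dma_direct_sync_single_for_cpu() ... */
		if (!dev_is_dma_coherent(dev)) {
			arch_sync_dma_for_cpu(phys, size, dir);
			arch_sync_dma_for_cpu_all();
		}

		/*
		 * ... with swiotlb_sync_single_for_cpu() replaced by
		 * swiotlb_tbl_unmap_single(), so the bounce copy and the
		 * slot release happen in a single call.
		 */
		if (unlikely(is_swiotlb_buffer(dev, phys)))
			swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);

		if (dir == DMA_FROM_DEVICE)
			arch_dma_mark_clean(phys, size);
	} else if (unlikely(is_swiotlb_buffer(dev, phys))) {
		/* caller asked to skip the sync; just release the slot */
		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
	}
}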
