On 02/10/17 17:31, Christoph Hellwig wrote:
> On Mon, Oct 02, 2017 at 03:56:14PM +0100, Robin Murphy wrote:
>> IIRC, dmabounce has to stay because it handles non-contiguous DMA masks,
>> thanks to a StrongARM hardware erratum where the DMA controller could
>> only access every other megabyte of memory (or something like that).
>
> Isn't that exactly the problem that xen-swiotlb solves?
As far as I can see, not quite - swiotlb-xen appears to handle buffers which
span non-contiguous host pages, but it still assumes that DMA masks are of the
full DMA_BIT_MASK() form, such that checks like "addr <= mask" work. I suppose
we *could* try making the SWIOTLB users themselves robust against incomplete
masks (e.g. [2]) where they need to be, but I'm not sure whether that would
actually work in practice.

FWIW, I could have sworn I had an old mail from Russell about this, but that
seems to have vanished - after a bit more digging, it's actually about avoiding
a particular address line on the SA-1111, as documented in [1] and set up in
sa1111_configure_smc() (arch/arm/common/sa1111.c).

Robin.

[1] https://wwwcip.informatik.uni-erlangen.de/~simigern/jornada-7xx/docs/sa1111-errata.pdf

[2]
diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
index 0df756b24863..8db96d714517 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -66,10 +66,13 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dev_addr)
 
 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 {
+	dma_addr_t mask;
+
 	if (!dev->dma_mask)
 		return false;
 
-	return addr + size - 1 <= *dev->dma_mask;
+	mask = *dev->dma_mask;
+	return !((addr + size - 1) & ~mask) && (size - 1 <= (mask ^ (mask + 1)));
 }
 
 static inline void dma_mark_clean(void *addr, size_t size)
