On a Xen PV guest, DMA (bus) addresses and physical addresses are not
1:1, and the generic dma_get_required_mask() does not return the
correct mask (since it is based on max_pfn).
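
For reference, the generic implementation derives the mask from
max_pfn, roughly along these lines (a simplified sketch of the
drivers/base/platform.c code, not part of this patch):

  /* Simplified sketch of the generic dma_get_required_mask():
   * the mask is computed from max_pfn.  On a Xen PV guest,
   * max_pfn counts pseudo-physical frames, not machine frames,
   * so the resulting mask can be too small.  (max_pfn and
   * fls64() come from the usual kernel headers.)
   */
  u64 generic_required_mask_sketch(void)
  {
          u64 top = ((u64)max_pfn - 1) << PAGE_SHIFT;

          return DMA_BIT_MASK(fls64(top));
  }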

Some device drivers (such as mptsas and mpt2sas) use
dma_get_required_mask() to decide whether a 32-bit DMA mask is
sufficient, since that allows them to use smaller 32-bit DMA addresses
in their hardware structures.  With the too-small mask returned by the
generic implementation, such drivers set a 32-bit DMA mask, which
forces every DMA address wider than 32 bits to bounce through the
SWIOTLB, impacting performance significantly.
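
The mask check in those drivers looks roughly like this (a hedged
sketch, not the exact driver code; the function name is made up):

  /* mptsas/mpt2sas-style mask selection, simplified: if the
   * required mask fits in 32 bits, the driver programs 32-bit
   * SGE descriptors.  With the wrong required mask on Xen PV,
   * the 32-bit branch is taken and DMA above 4 GiB bounces
   * through the SWIOTLB.
   */
  static int config_dma_addressing_sketch(struct pci_dev *pdev)
  {
          if (sizeof(dma_addr_t) > 4 &&
              dma_get_required_mask(&pdev->dev) > DMA_BIT_MASK(32) &&
              !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)))
                  return 64;      /* use 64-bit SGEs */

          if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))
                  return -ENODEV;

          return 32;              /* use 32-bit SGEs */
  }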

We could base the DMA mask on the maximum MFN but:

a) The hypercall op that returns the maximum MFN
(XENMEM_maximum_ram_page) truncates the result to an int in 32-bit
guests.

b) Future uses of the IOMMU in Xen may map frames at bus addresses
above the end of RAM.

So, just assume a 64-bit DMA mask is always required.
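
For comparison, the rejected MFN-based alternative would have looked
something like this (a hypothetical sketch illustrating points a) and
b) above, not proposed code):

  /* Hypothetical MFN-based mask (not used): the
   * HYPERVISOR_memory_op() wrapper returns an int, so on 32-bit
   * guests a large maximum MFN is truncated, and a future Xen
   * IOMMU could map frames at bus addresses beyond the end of
   * RAM anyway.
   */
  static u64 xen_mfn_based_mask_sketch(struct device *dev)
  {
          unsigned long max_mfn;

          max_mfn = HYPERVISOR_memory_op(XENMEM_maximum_ram_page,
                                         NULL);

          return DMA_BIT_MASK(fls64((u64)max_mfn << PAGE_SHIFT));
  }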

Signed-off-by: David Vrabel <[email protected]>
---
 arch/x86/xen/pci-swiotlb-xen.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 0e98e5d..35774f8 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -18,6 +18,11 @@
 
 int xen_swiotlb __read_mostly;
 
+static u64 xen_swiotlb_get_required_mask(struct device *dev)
+{
+       return DMA_BIT_MASK(64);
+}
+
 static struct dma_map_ops xen_swiotlb_dma_ops = {
        .mapping_error = xen_swiotlb_dma_mapping_error,
        .alloc = xen_swiotlb_alloc_coherent,
@@ -31,6 +36,7 @@ static struct dma_map_ops xen_swiotlb_dma_ops = {
        .map_page = xen_swiotlb_map_page,
        .unmap_page = xen_swiotlb_unmap_page,
        .dma_supported = xen_swiotlb_dma_supported,
+       .get_required_mask = xen_swiotlb_get_required_mask,
 };
 
 /*
-- 
1.7.10.4
