Arnd Bergmann wrote:
> This is basically ok, but then I think you should pass GFP_DMA
> or GFP_DMA32 to all allocations that the driver does after
> the 64-bit mask fails, otherwise you get a significant overhead
> in the bounce buffers.

Well, for starters, GFP_DMA32 allocations land in ZONE_NORMAL on ARM, because CONFIG_ZONE_DMA32 is not defined there:

#ifdef CONFIG_ZONE_DMA32
#define OPT_ZONE_DMA32 ZONE_DMA32
#else
#define OPT_ZONE_DMA32 ZONE_NORMAL
#endif

(I wonder if this should say instead:

#ifdef CONFIG_ZONE_DMA32
#define OPT_ZONE_DMA32 ZONE_DMA32
#else
#define OPT_ZONE_DMA32 ZONE_DMA  <----
#endif
)
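
Either way, the practical effect is that GFP_DMA32 doesn't restrict the allocation at all on such a configuration. As a quick illustration (just a sketch, not code from this driver):

struct page *page;

/* With CONFIG_ZONE_DMA32 unset, gfp_zone(GFP_DMA32) resolves to
 * ZONE_NORMAL, so this page is not guaranteed to be below 4GB. */
page = alloc_pages(GFP_DMA32, 0);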

However, I'm not sure where I should be using GFP_DMA anyway. Whenever the driver allocates memory for DMA, it uses dma_zalloc_coherent():

ring_header->v_addr = dma_zalloc_coherent(dev, ring_header->size,
                                          &ring_header->dma_addr,
                                          GFP_KERNEL);

and I don't think I need to pass GFP_DMA to dma_zalloc_coherent(); the DMA API is supposed to return memory that satisfies the device's coherent DMA mask regardless of the GFP flags. Every other allocation in the driver is a kmalloc() variant, but none of that memory is ever used for DMA, so it can come from anywhere.

I found about 70 drivers that fall back to a 32-bit DMA mask when setting the 64-bit mask fails. None of them does what you suggest; they all just set the mask to 64 or 32 bits and leave it at that.
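
For reference, the pattern they use looks roughly like this (a sketch with made-up names, not lifted from any particular driver):

ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
if (ret) {
        /* fall back to 32-bit; streaming DMA may then go through bounce buffers */
        ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
        if (ret) {
                dev_err(&pdev->dev, "no usable DMA configuration\n");
                return ret;
        }
}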

Some drivers set NETIF_F_HIGHDMA if 64-bit DMA is enabled:

        if (pci_using_dac)
                netdev->features |= NETIF_F_HIGHDMA;

I could do this, but I think it's a no-op on arm64, because NETIF_F_HIGHDMA only matters when CONFIG_HIGHMEM is set, and arm64 has no highmem.
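
The check behind NETIF_F_HIGHDMA is illegal_highdma() in net/core/dev.c, and as far as I can tell it reduces to a stub when CONFIG_HIGHMEM is not set (paraphrased from memory, so treat the details as approximate):

#ifdef CONFIG_HIGHMEM
static int illegal_highdma(struct net_device *dev, struct sk_buff *skb)
{
        /* reject skbs with highmem fragments unless NETIF_F_HIGHDMA is set */
        ...
}
#else
static int illegal_highdma(struct net_device *dev, struct sk_buff *skb)
{
        return 0;
}
#endif

so on arm64 the flag would never change anything.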

--
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm
Technologies, Inc.  Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
