> but others can't, so continuing despite it is a recipe for disaster.
> Perhaps your chip has some spurious Machine check exceptions?

There are two cases to consider for this exception:
1. Except for core 0, which is running the Linux OS, all other cores are
   running packet processing code in
Switch to the generic noncoherent direct mapping implementation.
This removes the previous sync_single_for_device implementation, which
looks bogus given that no syncing is happening in the similar but more
important map_single case.
Signed-off-by: Christoph Hellwig
---
arch/sparc/Kconfig
Switch to the generic noncoherent direct mapping implementation.
Signed-off-by: Christoph Hellwig
---
arch/nios2/Kconfig | 3 +
arch/nios2/include/asm/Kbuild | 1 +
arch/nios2/include/asm/dma-mapping.h | 20
arch/nios2/mm/dma-mapping.c | 139 +++---
And use it in the maple bus code to avoid a dma API dependency.
Signed-off-by: Christoph Hellwig
---
arch/sh/include/asm/cacheflush.h | 7 +++
arch/sh/mm/consistent.c | 6 +-
drivers/sh/maple/maple.c | 7 ---
3 files changed, 12 insertions(+), 8 deletions(-)
diff --
This is a slight change in behavior as we avoid the detour through the
virtual mapping for the coherent allocator, but if this CPU really is
coherent that should be the right thing to do.
Signed-off-by: Christoph Hellwig
---
arch/sh/Kconfig | 1 +
arch/sh/include/asm/dma-mappin
Remove the indirection through the dma_ops variable, and just return
nommu_dma_ops directly from get_arch_dma_ops.
Signed-off-by: Christoph Hellwig
---
arch/sh/include/asm/dma-mapping.h | 5 ++---
arch/sh/kernel/dma-nommu.c | 8 +---
arch/sh/mm/consistent.c | 3 ---
arch/
Switch to the generic noncoherent direct mapping implementation.
Signed-off-by: Christoph Hellwig
---
arch/openrisc/Kconfig | 2 +
arch/openrisc/include/asm/Kbuild | 1 +
arch/openrisc/include/asm/dma-mapping.h | 35 -
arch/openrisc/kernel/dma.c
Signed-off-by: Christoph Hellwig
---
arch/openrisc/kernel/dma.c | 23 ---
1 file changed, 23 deletions(-)
diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
index 47601274abf7..7cadff93d179 100644
--- a/arch/openrisc/kernel/dma.c
+++ b/arch/openrisc/kernel/d
The cache maintenance in the sync_single_for_device operation should be
equivalent to the map_page operation to facilitate reusing buffers. Fix the
openrisc implementation by moving the cache maintenance performed in map_page
into the sync_single method, and calling that from map_page.
Signed-off-b
openrisc does all the required cache maintenance at dma map time, and none
at unmap time. It thus has to implement sync_single_for_device to match
the map case for buffer reuse, but there is no point in doing another
invalidation in the sync_single_for_cpu case, which in terms of cache
maintenance is
Switch to the generic noncoherent direct mapping implementation.
Signed-off-by: Christoph Hellwig
---
arch/nds32/Kconfig | 3 +
arch/nds32/include/asm/Kbuild | 1 +
arch/nds32/include/asm/dma-mapping.h | 14
arch/nds32/kernel/dma.c | 113 +++---
This matches the implementation of the more commonly used unmap_single
routines and the sync_sg_for_cpu method, which should provide equivalent
cache maintenance.
Signed-off-by: Christoph Hellwig
---
arch/nds32/kernel/dma.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/nds32/ker
The only difference is that pcxl supports dma coherent allocations, while
pcx only supports non-consistent allocations and otherwise fails.
But dma_alloc* is not in the fast path, and merging these two allows an
easy migration path to the generic dma-noncoherent implementation, so
do it.
Signed-o
Switch to the generic noncoherent direct mapping implementation.
Fix sync_single_for_cpu to skip the cache flush unless the transfer
is to the device, to match the more tested unmap_single path, which should
have the same cache coherency implications.
Signed-off-by: Christoph Hellwig
---
arch/
Currently the S/G list based DMA ops use flush_kernel_vmap_range, which
contains a few UP optimizations, while the rest of the DMA operations
use flush_kernel_dcache_range. The single vs. sg operations are supposed
to have the same effect, so they should use the same routines. Use
the more conservat
Switch to the generic noncoherent direct mapping implementation.
Signed-off-by: Christoph Hellwig
---
arch/xtensa/Kconfig | 3 +
arch/xtensa/include/asm/Kbuild | 1 +
arch/xtensa/include/asm/dma-mapping.h | 26 --
arch/xtensa/kernel/pci-dma.c | 130 +++-
Switch to the generic noncoherent direct mapping implementation.
Signed-off-by: Christoph Hellwig
---
arch/sh/Kconfig | 3 +-
arch/sh/include/asm/Kbuild | 1 +
arch/sh/include/asm/dma-mapping.h | 26 ---
arch/sh/kernel/Makefile | 2 +-
arch/sh/kernel
Half of the file just contains platform device memory setup code which
is required for all builds, and half contains helpers for dma coherent
allocation, which is only needed if CONFIG_DMA_NONCOHERENT is enabled.
Signed-off-by: Christoph Hellwig
---
arch/sh/kernel/Makefile | 2 +-
arch/sh
Make sure all other DMA methods call nds32_dma_sync_single_for_{device,cpu}
to perform cache maintenance, and remove the consistent_sync helper that
implemented both with entirely separate code based off an argument.
Signed-off-by: Christoph Hellwig
---
arch/nds32/kernel/dma.c | 140 +
nds32_dma_map_sg is the only one of the various DMA operations that tries
to deal with highmem (the single page variants and SG sync routines are
missing, SG unmap is entirely unimplemented), and it does so without
taking into account S/G list items that are bigger than a page, which
are legal and can
Both unused.
Signed-off-by: Christoph Hellwig
---
arch/microblaze/include/asm/pgtable.h | 3 --
arch/microblaze/mm/consistent.c | 45 ---
2 files changed, 48 deletions(-)
diff --git a/arch/microblaze/include/asm/pgtable.h
b/arch/microblaze/include/asm/pgtable.h
i
This method needs to provide the equivalent of sync_single_for_device
for each S/G list element, but was missing.
Signed-off-by: Christoph Hellwig
---
arch/hexagon/kernel/dma.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma
Switch to the generic noncoherent direct mapping implementation.
Signed-off-by: Christoph Hellwig
---
arch/hexagon/Kconfig | 2 +
arch/hexagon/include/asm/Kbuild | 1 +
arch/hexagon/include/asm/dma-mapping.h | 40 ---
arch/hexagon/kernel/dma.c | 148
Switch to the generic noncoherent direct mapping implementation.
Signed-off-by: Christoph Hellwig
---
arch/m68k/Kconfig | 2 +
arch/m68k/include/asm/Kbuild | 1 +
arch/m68k/include/asm/dma-mapping.h | 12 -
arch/m68k/kernel/dma.c | 68 -
Switch to the generic noncoherent direct mapping implementation.
This removes the direction-based optimizations in
sync_{single,sg}_for_{cpu,device} which were marked untested and
do not match the usually very well tested {un,}map_{single,sg}
implementations.
Signed-off-by: Christoph Hellwig
Hi all,
this series continues consolidating the dma-mapping code, with a focus
on architectures that do not (always) provide cache coherence for DMA.
Three architectures (arm, mips and powerpc) are still left to be
converted later due to complexity of their dma ops selection.
The dma-noncoherent
hexagon does all the required cache maintenance at dma map time, and none
at unmap time. It thus has to implement sync_single_for_device to match
the map case for buffer reuse, but there is no point in doing another
invalidation in the sync_single_for_cpu case, which in terms of cache
maintenance is
On Fri, May 11, 2018 at 10:11:15AM +0100, Russell King - ARM Linux wrote:
> > +void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
> > + size_t size, enum dma_data_direction dir)
>
> Please no. There is a lot of history of these (__dma_page_cpu_to_dev etc)
> functions b