On Wed, Jan 16, 2019 at 10:24:36AM -0700, Jason Gunthorpe wrote:
> The fact is there is 0 industry interest in using RDMA on platforms
> that can't do HW DMA cache coherency - the kernel syscalls required to
> do the cache flushing on the IO path would just destroy performance to
> the point of making it pointless.
On Tue, Jan 15, 2019 at 02:25:01PM -0700, Jason Gunthorpe wrote:
> RDMA needs something similar as well, in this case drivers take a
> struct page * from get_user_pages() and need to have the DMA map fail
> if the platform can't DMA map in a way that does not require any
> additional DMA API calls.
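
A rough sketch of the path being described, with get_user_pages() feeding
dma_map_page(). The dev_is_dma_coherent() check (from the kernel's
dma-noncoherent internals at the time) is used purely for illustration;
whether a driver may ask that question at all, and how the map should fail
instead, is exactly what is under discussion here. The flags-based
get_user_pages_fast() signature is the current one:

#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/dma-noncoherent.h>

static int rdma_map_user_page(struct device *dev, unsigned long uaddr,
                              struct page **page_out, dma_addr_t *dma_out)
{
        struct page *page;
        dma_addr_t dma;
        int ret;

        /* pin the user page; the device will DMA to it for a long time */
        ret = get_user_pages_fast(uaddr, 1, FOLL_WRITE, &page);
        if (ret != 1)
                return ret < 0 ? ret : -EFAULT;

        /*
         * The hardware will access this page with no further kernel
         * involvement, so refuse the mapping if it would only be
         * coherent with per-I/O dma_sync_*() calls (illustrative
         * check only).
         */
        if (!dev_is_dma_coherent(dev)) {
                put_page(page);
                return -EOPNOTSUPP;
        }

        dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, dma)) {
                put_page(page);
                return -EIO;
        }

        *page_out = page;
        *dma_out = dma;
        return 0;
}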
On Wed, Jan 16, 2019 at 07:28:13AM +0000, Koenig, Christian wrote:
> To summarize once more: We have an array of struct pages and want to
> coherently map that to a device.
And the answer to that is very simple: you can't. What is so hard
to understand about that? If you want to map arbitrary memory coherently,
the hardware has to keep the caches coherent with DMA, and on a large share
of the platforms Linux supports it simply doesn't.
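
For arbitrary pages on such a platform the DMA API only offers the
streaming model, in which every single transfer has to be bracketed by
explicit ownership transfers. A minimal illustrative sketch of that
contract:

#include <linux/dma-mapping.h>

static void receive_one_buffer(struct device *dev, struct page *page,
                               size_t len)
{
        dma_addr_t addr;

        addr = dma_map_page(dev, page, 0, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, addr))
                return;

        /* ... device DMAs into the buffer ... */

        /*
         * Before the CPU may look at the data the stale cache lines
         * have to be dealt with -- this is the per-I/O call that a
         * "map it coherently once" model has no place for.
         */
        dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);

        /* ... CPU consumes the data ... */

        /* hand ownership back to the device for the next transfer */
        dma_sync_single_for_device(dev, addr, len, DMA_FROM_DEVICE);

        dma_unmap_page(dev, addr, len, DMA_FROM_DEVICE);
}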
On Tue, Jan 15, 2019 at 07:13:11PM +0000, Koenig, Christian wrote:
> Thomas is correct that the interface you propose here doesn't work at
> all for GPUs.
>
> The kernel driver is not informed of flush/sync, but rather just sets up
> coherent mappings between system memory and devices.
>
> In other words, the device and the CPU can access that memory at any time
> without the kernel driver being involved, so there is no point at which it
> could issue flush or sync calls.
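
A sketch of the long-lived mapping model being described: the driver maps
a page once and from then on both the GPU and userspace access it
directly, so there is never a point at which dma_sync_*() calls could be
inserted. Illustrative only; the function name is made up:

#include <linux/dma-mapping.h>

static dma_addr_t gpu_map_buffer_page(struct device *dev, struct page *page)
{
        dma_addr_t addr;

        addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, addr))
                return DMA_MAPPING_ERROR;

        /*
         * From here on the GPU may read or write the page at any time,
         * and userspace may touch it through its own CPU mapping at any
         * time.  The kernel driver never sees those accesses, so it has
         * nowhere to put dma_sync_single_for_cpu()/_for_device().
         */
        return addr;
}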
On Tue, Jan 15, 2019 at 06:03:39PM +0000, Thomas Hellstrom wrote:
> In the graphics case, it's probably because it doesn't fit the graphics
> use-cases:
>
> 1) Memory typically needs to be mappable by another device. (the "dma-
> buf" interface)
And there is nothing preventing dma-buf sharing of memory that was
allocated through the DMA API in the first place.
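
For reference, the importer side of dma-buf does not care how the exporter
obtained the memory; it asks the exporter to map it for the importing
device. A minimal sketch of an importer, with error handling trimmed to
the essentials:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

static struct sg_table *import_dmabuf(struct device *dev, int fd,
                                      struct dma_buf_attachment **att_out)
{
        struct dma_buf *buf;
        struct dma_buf_attachment *att;
        struct sg_table *sgt;

        buf = dma_buf_get(fd);
        if (IS_ERR(buf))
                return ERR_CAST(buf);

        att = dma_buf_attach(buf, dev);
        if (IS_ERR(att)) {
                dma_buf_put(buf);
                return ERR_CAST(att);
        }

        /* the exporter maps its backing memory for this device */
        sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt)) {
                dma_buf_detach(buf, att);
                dma_buf_put(buf);
                return ERR_CAST(sgt);
        }

        *att_out = att;
        return sgt;
}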
On Tue, Jan 15, 2019 at 03:24:55PM +0100, Christian König wrote:
> Yeah, indeed. Bounce buffers are an absolute no-go for GPUs.
>
> If the DMA API finds that a piece of memory is not directly accessible by
> the GPU we need to return an error and not try to use bounce buffers behind
> the scenes.
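
To spell out the failure mode: when the page lies outside the device's DMA
mask, dma_map_page() can "succeed" by silently copying through a swiotlb
bounce buffer, and nothing in the return value reveals that. A sketch of
why that is fatal for a long-lived GPU mapping (illustrative only):

#include <linux/dma-mapping.h>

static int gpu_try_direct_map(struct device *dev, struct page *page,
                              dma_addr_t *out)
{
        dma_addr_t addr;

        addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, addr))
                return -EIO;

        /*
         * If this mapping was bounced, the GPU and the CPU now see two
         * different copies of the page that only converge when the
         * driver calls dma_sync_*() -- which a long-lived GPU mapping
         * never does.  There is no portable way to ask the DMA API
         * whether it bounced, which is why the request above is for the
         * map to fail instead.
         */
        *out = addr;
        return 0;
}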