On Fri, 10 Oct 2025 15:11:54 +0100
Steven Price <[email protected]> wrote:
> On 10/10/2025 11:11, Boris Brezillon wrote:
> > Hook-up drm_gem_dmabuf_{begin,end}_cpu_access() to drm_gem_sync() so
> > that drivers relying on the default prime_dmabuf_ops can still have
> > a way to prepare for CPU accesses from outside the UMD.
> >
> > v2:
> > - New commit
> >
> > Signed-off-by: Boris Brezillon <[email protected]>
> > ---
> > drivers/gpu/drm/drm_prime.c | 36 ++++++++++++++++++++++++++++++++++++
> > include/drm/drm_prime.h | 5 +++++
> > 2 files changed, 41 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> > index 43a10b4af43a..03c09f9ab129 100644
> > --- a/drivers/gpu/drm/drm_prime.c
> > +++ b/drivers/gpu/drm/drm_prime.c
> > @@ -823,6 +823,40 @@ int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf,
> > struct vm_area_struct *vma)
> > }
> > EXPORT_SYMBOL(drm_gem_dmabuf_mmap);
> >
> > +int drm_gem_dmabuf_begin_cpu_access(struct dma_buf *dma_buf,
> > + enum dma_data_direction direction)
> > +{
> > + struct drm_gem_object *obj = dma_buf->priv;
> > + enum drm_gem_object_access_flags access = DRM_GEM_OBJECT_CPU_ACCESS;
> > +
> > + if (direction == DMA_FROM_DEVICE)
> > + access |= DRM_GEM_OBJECT_READ_ACCESS;
> > + else if (direction == DMA_BIDIRECTIONAL)
> > + access |= DRM_GEM_OBJECT_RW_ACCESS;
> > + else
> > + return -EINVAL;
> > +
> > + return drm_gem_sync(obj, 0, obj->size, access);
> > +}
> > +EXPORT_SYMBOL(drm_gem_dmabuf_begin_cpu_access);
> > +
> > +int drm_gem_dmabuf_end_cpu_access(struct dma_buf *dma_buf,
> > + enum dma_data_direction direction)
> > +{
> > + struct drm_gem_object *obj = dma_buf->priv;
> > + enum drm_gem_object_access_flags access = DRM_GEM_OBJECT_DEV_ACCESS;
> > +
> > + if (direction == DMA_TO_DEVICE)
> > + access |= DRM_GEM_OBJECT_READ_ACCESS;
> > + else if (direction == DMA_BIDIRECTIONAL)
> > + access |= DRM_GEM_OBJECT_RW_ACCESS;
> > + else
> > + return -EINVAL;
> > +
> > + return drm_gem_sync(obj, 0, obj->size, access);
> > +}
> > +EXPORT_SYMBOL(drm_gem_dmabuf_end_cpu_access);
>
> I feel I must be missing something, but why does
> drm_gem_dmabuf_begin_cpu_access() reject DMA_TO_DEVICE and
> drm_gem_dmabuf_end_cpu_access() reject DMA_FROM_DEVICE?
I'm not really sure what it would mean to prepare for a device access
while synchronizing with what the device might have changed in memory.
That sounds like device -> device synchronization, which is not what
this API is for.
Similarly, preparing for a CPU access with dir=TO_DEVICE (that is,
forcing previous CPU changes to be visible to the device) doesn't make
sense either.
>
> My understanding is that these begin/end calls should be bracketing the
> operation and the same direction should be specified for each.
If [1] is correct and begin/end_cpu_access() follows the dma_sync_*()
semantics, then no, that's not how it's supposed to work. The way I see
it, the direction just expresses the cache operations you want to take
place around your CPU access.
If you read data from the CPU, you want dir=FROM_DEVICE in your
begin_cpu_access() call, so that the CPU caches are invalidated. If you
write from the CPU, you want dir=TO_DEVICE in your end_cpu_access()
call, so that the CPU caches are cleaned. If you know you will be
reading again soon, you might want to pass dir=BIDIRECTIONAL in your
end_cpu_access(), though I'm not too sure what the benefit of that
would be, to be honest.
[1] https://elixir.bootlin.com/linux/v6.17.1/source/drivers/gpu/drm/tegra/gem.c#L684