> -----Original Message-----
> From: Jason Gunthorpe <[email protected]>
> Sent: Friday, November 06, 2020 4:49 AM
> To: Xiong, Jianxin <[email protected]>
> Cc: [email protected]; [email protected]; Doug Ledford
> <[email protected]>; Leon Romanovsky
> <[email protected]>; Sumit Semwal <[email protected]>; Christian Koenig
> <[email protected]>; Vetter, Daniel
> <[email protected]>
> Subject: Re: [PATCH v8 4/5] RDMA/mlx5: Support dma-buf based userspace memory
> region
>
> On Fri, Nov 06, 2020 at 01:11:38AM +0000, Xiong, Jianxin wrote:
> > > On Thu, Nov 05, 2020 at 02:48:08PM -0800, Jianxin Xiong wrote:
> > > > @@ -966,7 +969,10 @@ static struct mlx5_ib_mr *alloc_mr_from_cache(struct ib_pd *pd,
> > > >  	struct mlx5_ib_mr *mr;
> > > >  	unsigned int page_size;
> > > >
> > > > -	page_size = mlx5_umem_find_best_pgsz(umem, mkc, log_page_size, 0, iova);
> > > > +	if (umem->is_dmabuf)
> > > > +		page_size = ib_umem_find_best_pgsz(umem, PAGE_SIZE, iova);
> > >
> > > You said the sgl is not set here, why doesn't this crash? It is certainly
> > > wrong to call this function without a SGL.
> >
> > The sgl is NULL, and nmap is 0. The 'for_each_sg' loop is just skipped and
> > won't crash.
>
> Just wire this to 4k, it is clearer than calling some no-op pgsz
Ok
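Will change it to something along these lines in the next version (just a sketch,
the exact placement in alloc_mr_from_cache() may still move):

	if (umem->is_dmabuf)
		page_size = PAGE_SIZE;	/* SGL not populated yet, just use 4k */
	else
		page_size = mlx5_umem_find_best_pgsz(umem, mkc, log_page_size,
						     0, iova);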
>
>
> > > > +	if (!mr->cache_ent) {
> > > > +		mlx5_core_destroy_mkey(mr->dev->mdev, &mr->mmkey);
> > > > +		WARN_ON(mr->descs);
> > > > +	}
> > > > +}
> > >
> > > I would expect this to call ib_umem_dmabuf_unmap_pages() ?
> > >
> > > Who calls it on the dereg path?
> > >
> > > This looks quite strange to me, it calls ib_umem_dmabuf_unmap_pages()
> > > only from the invalidate callback?
> >
> > It is also called from ib_umem_dmabuf_release().
>
> Hmm, that is not how the other APIs work, the unmap should be paired with the
> map in the caller, and the sequence for destroy should be
>
> invalidate
> unmap
> destroy_mkey
> release_umem
>
> I have another series coming that makes the other three destroy flows much
> closer to that ideal.
>
Can fix that.
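Will rework the dereg path to follow that order, roughly like below (sketch only;
the helper names and signatures are taken from patch 1 and may still shift):

	/* invalidate the mkey so the HW stops using the dma-buf pages */
	mlx5_mr_cache_invalidate(mr);

	/* unmap is paired with the map done at page-fault time */
	dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
	ib_umem_dmabuf_unmap_pages(umem_dmabuf);
	dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);

	mlx5_core_destroy_mkey(mr->dev->mdev, &mr->mmkey);
	ib_umem_release(&umem_dmabuf->umem);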
> > > I feel uneasy how this seems to assume everything works sanely, we
> > > can have parallel page faults so pagefault_dmabuf_mr() can be called
> > > multiple times after an invalidation, and it doesn't protect itself
> > > against calling ib_umem_dmabuf_map_pages() twice.
> > >
> > > Perhaps the umem code should keep track of the current map state and
> > > exit if there is already a sgl. NULL or not NULL sgl would do and seems
> > > quite reasonable.
> >
> > ib_umem_dmabuf_map_pages() already checks the sgl and will do nothing if it is
> > already set.
>
> How? What I see in patch 1 is an unconditional call to
> dma_buf_map_attachment()?
My bad. I misread the lines. It used to be there (in v3) but somehow got lost.
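Will add the check back in the next version, something like this (sketch; the
field names follow the ib_umem_dmabuf struct in patch 1, and the partial-page
trimming is left out):

int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
{
	struct sg_table *sgt;

	dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);

	/* already mapped: a second page fault raced in after an invalidation */
	if (umem_dmabuf->sgt)
		return 0;

	sgt = dma_buf_map_attachment(umem_dmabuf->attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);

	umem_dmabuf->umem.sg_head = *sgt;
	umem_dmabuf->umem.nmap = sgt->nents;
	umem_dmabuf->sgt = sgt;
	return 0;
}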
>
> > > > +	if (is_dmabuf_mr(mr))
> > > > +		return pagefault_dmabuf_mr(mr, umem_dmabuf, user_va,
> > > > +					   bcnt, bytes_mapped, flags);
> > >
> > > But this doesn't care about user_va or bcnt, it just triggers the whole
> > > thing to be remapped, so why calculate it?
> >
> > The range check is still needed in order to catch application errors
> > of using an incorrect address or count in the verbs command. Passing
> > the values further in allows pagefault_dmabuf_mr() to generate a
> > return value and set bytes_mapped in a way consistent with the rest
> > of the page fault handler chain.
>
> The HW validates the range. The range check in the ODP case is to protect
> against a HW bug that would cause the kernel to malfunction.
> For dmabuf you don't need to do it
Ok. So the handler can simply return 0 (as the number of pages mapped) and
leave bytes_mapped untouched?
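I.e., something like this (very rough sketch; how the mkey translation gets
repopulated and which XLT flags are correct is glossed over, and
to_ib_umem_dmabuf() is the helper from patch 1):

static int pagefault_dmabuf_mr(struct mlx5_ib_mr *mr, unsigned int flags,
			       u32 *bytes_mapped)
{
	struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(mr->umem);
	int err;

	dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
	err = ib_umem_dmabuf_map_pages(umem_dmabuf);
	if (!err)
		err = mlx5_ib_update_xlt(mr, 0, mr->umem->nmap, PAGE_SHIFT,
					 MLX5_IB_UPD_XLT_ENABLE);
	dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);

	/* no user_va/bcnt range math; bytes_mapped is left untouched */
	return err;
}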
>
> Jason