Hi Matthew,
On 15/10/2025 19:27, Matthew Wilcox wrote:
> On Wed, Oct 15, 2025 at 05:30:07PM +0200, Loïc Molinari wrote:
> This looks fine, no need to resend to fix this, but if you'd written
> the previous patch slightly differently, you'd've reduced the amount of
> code you moved around in this patch, which would have made it easier to
> review.
> > +	/* Map a range of pages around the faulty address. */
> > +	do {
> > +		pfn = page_to_pfn(pages[start_pgoff]);
> > +		ret = vmf_insert_pfn(vma, addr, pfn);
> > +		addr += PAGE_SIZE;
> > +	} while (++start_pgoff <= end_pgoff && ret == VM_FAULT_NOPAGE);
> It looks to me like we have an opportunity to do better here by
> adding a vmf_insert_pfns() interface. I don't think we should delay
> your patch series to add it, but let's not forget to do that; it can
> have very good performance effects on ARM to use contptes.
Agreed. I initially wanted to provide such an interface based on
set_ptes() to benefit from arm64 contptes, but thought it would be
better as a distinct patch series.
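
Something along these lines is what I had in mind (untested sketch, not
an existing API; it assumes the PFNs are physically contiguous, that the
range doesn't cross a PTE table, and it elides the pte_none() and MMU
cache update handling that vmf_insert_pfn() does):

	static vm_fault_t vmf_insert_pfns(struct vm_area_struct *vma,
					  unsigned long addr, unsigned long pfn,
					  unsigned int nr)
	{
		struct mm_struct *mm = vma->vm_mm;
		spinlock_t *ptl;
		pte_t *pte;

		pte = get_locked_pte(mm, addr, &ptl);
		if (!pte)
			return VM_FAULT_OOM;

		/* set_ptes() is what lets arm64 fold the run into contptes. */
		set_ptes(mm, addr, pte,
			 pte_mkspecial(pfn_pte(pfn, vma->vm_page_prot)), nr);

		pte_unmap_unlock(pte, ptl);
		return VM_FAULT_NOPAGE;
	}

The map_pages loop above could then call it once per physically
contiguous run instead of once per page.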
> > @@ -617,8 +645,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> > [...]
> > -	ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
> > +	if (drm_gem_shmem_map_pmd(vmf, vmf->address, pages[page_offset])) {
> > +		ret = VM_FAULT_NOPAGE;
> > +		goto out;
> > +	}
> Does this actually work?
Yes, it does. Huge pages are successfully mapped from both the map_pages
and fault handlers. Is anything wrong with it?
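
For context, the helper boils down to something like this (simplified
sketch, not the exact code from the series; offset handling within the
folio is elided, and it assumes the vmf_insert_pfn_pmd() variant taking
a raw PFN):

	static bool drm_gem_shmem_map_pmd(struct vm_fault *vmf, unsigned long addr,
					  struct page *page)
	{
		/* Only huge pages naturally aligned to PMD_SIZE qualify. */
		if (!IS_ALIGNED(addr, PMD_SIZE) ||
		    folio_order(page_folio(page)) < PMD_ORDER)
			return false;

		return vmf_insert_pfn_pmd(vmf, page_to_pfn(page),
					  vmf->flags & FAULT_FLAG_WRITE) ==
		       VM_FAULT_NOPAGE;
	}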
There seems to be another issue, though. There are failures [1], all
looking like this one [2]. I think it's because map_pages is called with
the RCU read lock held, while the DRM GEM map_pages handler must lock
the GEM object with dma_resv_lock() before accessing its pages. The
locking documentation says: "If it's not possible to reach a page
without blocking, filesystem should skip it.". Dropping the RCU read
lock in the handler seems wrong, and doing without a map_pages
implementation altogether would be unfortunate. What would you recommend
here?
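
For instance, would a trylock-and-skip scheme along these lines be
acceptable (untested sketch, handler name illustrative)?

	static vm_fault_t drm_gem_shmem_map_pages(struct vm_fault *vmf,
						  pgoff_t start_pgoff,
						  pgoff_t end_pgoff)
	{
		struct drm_gem_object *obj = vmf->vma->vm_private_data;

		/* Called under the RCU read lock: we must not block here. */
		if (!dma_resv_trylock(obj->resv))
			return 0; /* Skip; the fault handler takes over. */

		/* ... map PFNs for [start_pgoff, end_pgoff] as above ... */

		dma_resv_unlock(obj->resv);
		return VM_FAULT_NOPAGE;
	}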
Loïc
[1] https://patchwork.freedesktop.org/series/156001/
[2]
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_156001v1/bat-dg1-7/igt@[email protected]