Hi Loïc,

On 12/2/25 07:17, Loïc Molinari wrote:
Add a paragraph to the GEM Objects Creation section about the
drm_gem_huge_mnt_create() helper and to the GEM objects mapping
section explaining how transparent huge pages are handled by GEM.

v4:
- fix wording after huge_pages handler removal

v6:
- fix wording after map_pages handler removal

v11:
- mention drm_gem_huge_mnt_create() helper
- add Boris and Maíra R-bs

Signed-off-by: Loïc Molinari <[email protected]>
Reviewed-by: Bagas Sanjaya <[email protected]>
Reviewed-by: Boris Brezillon <[email protected]>
Reviewed-by: Maíra Canal <[email protected]>

Just confirming my R-b:

Reviewed-by: Maíra Canal <[email protected]>

Best Regards,
- Maíra

---
  Documentation/gpu/drm-mm.rst | 29 +++++++++++++++++++++++------
  1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index d55751cad67c..f22433470c76 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -155,7 +155,12 @@ drm_gem_object_init() will create an shmfs file of the
  requested size and store it into the struct :c:type:`struct
  drm_gem_object <drm_gem_object>` filp field. The memory is
  used as either main storage for the object when the graphics hardware
-uses system memory directly or as a backing store otherwise.
+uses system memory directly or as a backing store otherwise. Drivers
+can call drm_gem_huge_mnt_create() to create, mount and use a huge
+shmem mountpoint instead of the default one ('shm_mnt'). For builds
+with CONFIG_TRANSPARENT_HUGEPAGE enabled, further calls to
+drm_gem_object_init() will let shmem allocate huge pages when
+possible.
  Drivers are responsible for the actual physical pages allocation by
  calling shmem_read_mapping_page_gfp() for each page.
@@ -290,15 +295,27 @@ The open and close operations must update the GEM object reference
  count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
  functions directly as open and close handlers.
-The fault operation handler is responsible for mapping individual pages
-to userspace when a page fault occurs. Depending on the memory
-allocation scheme, drivers can allocate pages at fault time, or can
-decide to allocate memory for the GEM object at the time the object is
-created.
+The fault operation handler is responsible for mapping pages to
+userspace when a page fault occurs. Depending on the memory allocation
+scheme, drivers can allocate pages at fault time, or can decide to
+allocate memory for the GEM object at the time the object is created.
  Drivers that want to map the GEM object upfront instead of handling page
  faults can implement their own mmap file operation handler.
+In order to reduce page table overhead, if the internal shmem mountpoint
+"shm_mnt" is configured to use transparent huge pages (for builds with
+CONFIG_TRANSPARENT_HUGEPAGE enabled) and if the shmem backing store
+managed to allocate a huge page for the faulting address, the fault handler
+will first attempt to insert that huge page into the VMA before falling
+back to individual page insertion. mmap() user address alignment for GEM
+objects is handled by providing a custom get_unmapped_area file
+operation which forwards to the shmem backing store. For most drivers,
+which don't create a huge mountpoint by default or through a module
+parameter, transparent huge pages can be enabled by setting either the
+"transparent_hugepage_shmem" kernel parameter or the
+"/sys/kernel/mm/transparent_hugepage/shmem_enabled" sysfs knob.
+
  For platforms without MMU the GEM core provides a helper method
  drm_gem_dma_get_unmapped_area(). The mmap() routines will call this to get a
  proposed address for the mapping.
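For readers less familiar with the flow the new paragraphs describe, here is a
minimal driver-side sketch. It is not taken from the patch: the exact signature
and option string of drm_gem_huge_mnt_create() are assumptions here (a helper
taking the drm_device and a shmem "huge=" policy, returning 0 or a negative
errno), and foo_* names are hypothetical. drm_gem_object_init() is the real
helper described in the text above.

```c
/* Hypothetical driver init sketch; not buildable outside a kernel tree. */
#include <drm/drm_device.h>
#include <drm/drm_gem.h>

static int foo_drm_init(struct drm_device *dev)
{
	int ret;

	/*
	 * Assumed signature: create and mount a private huge shmem
	 * mountpoint for this device instead of the default shm_mnt.
	 * "within_size" is an assumed shmem huge= policy string.
	 */
	ret = drm_gem_huge_mnt_create(dev, "within_size");
	if (ret)
		return ret;

	return 0;
}

static int foo_gem_object_create(struct drm_device *dev,
				 struct drm_gem_object *obj, size_t size)
{
	/*
	 * With CONFIG_TRANSPARENT_HUGEPAGE enabled and the huge
	 * mountpoint in place, shmem may now back this object with
	 * transparent huge pages when the size and alignment allow it.
	 */
	return drm_gem_object_init(dev, obj, size);
}
```

For drivers that stick with the default shm_mnt, the sysfs knob quoted in the
patch text, e.g. writing "within_size" to
/sys/kernel/mm/transparent_hugepage/shmem_enabled, enables huge pages
system-wide instead.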
