The new VM_BIND interface only supported 4K pages. This left performance
on the table, since GPUs don't have sophisticated TLB and page walker
hardware and therefore benefit significantly from larger page sizes.
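In practice, the page size a mapping can use is limited by alignment: the
GPU virtual address, the backing BO offset and the mapping size all have to
be aligned to the chosen page size. The snippet below is a minimal sketch of
that kind of selection logic, assuming 2M/64K/4K as the candidate sizes; the
function name and exact policy are illustrative only and are not the actual
nouveau_uvmm code:

#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative sketch only, not the actual nouveau_uvmm implementation:
 * return the largest page shift (2MiB, then 64KiB) for which the GPU VA,
 * the BO offset and the mapping size are all naturally aligned, falling
 * back to 4KiB pages otherwise.
 */
static uint8_t pick_page_shift(uint64_t addr, uint64_t bo_offset, uint64_t range)
{
	static const uint8_t shifts[] = { 21, 16 };	/* 2MiB, 64KiB */
	uint64_t bits = addr | bo_offset | range;

	for (size_t i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++)
		if (!(bits & ((1ULL << shifts[i]) - 1)))
			return shifts[i];

	return 12;	/* 4KiB */
}

Compression adds a further constraint on top of this, since the hardware
only supports it for the 64K and 2M cases.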
Additionally, the HW can only do compression on large (64K) and huge (2M)
pages, which is a major performance booster (>50% in some cases).

This patchset sets out to add support for larger page sizes and also enable
compression and set the compression tags when userspace binds with the
corresponding PTE kinds and alignment.

It also increments the nouveau version number, which allows userspace to use
compression only when the kernel actually supports both features and avoid
breaking the system if a newer Mesa version is paired with an older kernel
version.

For the associated userspace MR, please see !36450:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36450

- v6: Use drm_WARN_ONCE instead of dev_warn_once.
- v5: Add reviewed-by tags, use dev_warn_once() instead of WARN_ON().
- v4: Fix missing parenthesis in second patch in the series.
- v3: Add reviewed-by tags, revert page selection logic to v1 behavior.
- v2: Implement review comments, change page selection logic.
- v1: Initial implementation.

Signed-off-by: Mary Guillemard <[email protected]>
---
Ben Skeggs (2):
      drm/nouveau/mmu/gp100: Remove unused/broken support for compression
      drm/nouveau/mmu/tu102: Add support for compressed kinds

Mary Guillemard (2):
      drm/nouveau/uvmm: Prepare for larger pages
      drm/nouveau/uvmm: Allow larger pages

Mohamed Ahmed (1):
      drm/nouveau/drm: Bump the driver version to 1.4.1 to report new features

 drivers/gpu/drm/nouveau/nouveau_drv.h              |   4 +-
 drivers/gpu/drm/nouveau/nouveau_uvmm.c             | 102 +++++++++++++++++----
 drivers/gpu/drm/nouveau/nouveau_uvmm.h              |   1 +
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c |  69 ++++++++------
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp10b.c |   4 +-
 5 files changed, 131 insertions(+), 49 deletions(-)
---
base-commit: a2b0c33e9423cd06133304e2f81c713849059b10
change-id: 20251110-nouveau-compv6-c723a93bc33b

Best regards,
--
Mary Guillemard <[email protected]>
