NVreg_PreserveVideoMemoryAllocations behaves well with the 570 driver and kernel 6.14, as shipped in plucky.
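For reference, enabling this combination is typically done via a modprobe options file; the sketch below uses an example file name, and /var/tmp is just one possible choice of path (NVreg_TemporaryFilePath defaults to /tmp):

```
# /etc/modprobe.d/nvidia-power.conf (example file name)
options nvidia NVreg_PreserveVideoMemoryAllocations=1 NVreg_TemporaryFilePath=/var/tmp
```

The filesystem behind NVreg_TemporaryFilePath is what the scenarios below are about.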
Because it works by dumping the allocated GPU memory to a filesystem, here are the edge-case scenarios to consider:

1) If NVreg_TemporaryFilePath points to disk-backed storage and the free space on the filesystem is not enough to hold the entirety of the GPU memory in use, upon resume the nvidia-drm driver fails with:

  nvidia-modeset: ERROR: GPU:0: Failed to bind display engine notify surface descriptor: 0x1a (Ran out of a critical resource, other than memory [NV_ERR_INSUFFICIENT_RESOURCES])
  nvidia-modeset: ERROR: GPU:0: Failed to allocate display engine core DMA push buffer
  [drm:__nv_drm_connector_detect_internal [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00002900] Failed to detect display state

(the nvidia-modeset pair is logged twice, and the nvidia-drm line is repeated many times)

2) If NVreg_TemporaryFilePath points to tmpfs and the free space on the filesystem is not enough, upon resume we are presented with the same issue as (1). This is solvable by creating an ad-hoc tmpfs that is guaranteed to be as big as the sum of all NVIDIA GPUs' memory, but then we might hit issue (3).

3) If NVreg_TemporaryFilePath points to tmpfs and the free system memory + swap space is not enough, the system hangs while trying to enter sleep, and the only option is to force a reboot.

For desktop users this is arguably not much worse than the status quo. However, for laptop users (where the primary GPU is the integrated one), enabling NVreg_PreserveVideoMemoryAllocations risks worsening the experience if they hit the edge-case scenarios analyzed above.

-- 
You received this bug notification because you are a member of Ubuntu-X, which is subscribed to nvidia-graphics-drivers-560 in Ubuntu.
https://bugs.launchpad.net/bugs/1876632

Title:
  [nvidia] Corrupted/missing textures when switching users, switching VTs or resuming from suspend
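The ad-hoc tmpfs mitigation mentioned in (2) can be sketched as follows. This is a minimal illustration, not a vetted implementation: it sums the per-GPU memory reported by nvidia-smi and prints the mount command that would create a tmpfs of that size (the mount point /var/tmp/nvidia is a hypothetical example, and the script only echoes the mount rather than running it, since that requires root):

```shell
#!/bin/sh
# Sketch: size an ad-hoc tmpfs to the sum of all NVIDIA GPUs' memory,
# so NVreg_TemporaryFilePath never runs out of space on suspend.

# Sum per-GPU memory sizes (MiB), one value per line on stdin.
sum_mib() {
    awk '{ total += $1 } END { print total + 0 }'
}

if command -v nvidia-smi >/dev/null 2>&1; then
    # nvidia-smi prints one memory.total value (in MiB) per GPU.
    total_mib=$(nvidia-smi --query-gpu=memory.total \
                           --format=csv,noheader,nounits | sum_mib)
    echo "Total GPU memory: ${total_mib} MiB"
    # To be run as root; NVreg_TemporaryFilePath should then point here:
    echo "mount -t tmpfs -o size=${total_mib}m tmpfs /var/tmp/nvidia"
fi
```

Note that this only addresses scenario (2); as the comment above points out, a tmpfs sized this way can still trigger scenario (3) when free RAM + swap cannot back it.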