This RFC is more of a proof of concept than a fully working solution, as there
are a few unresolved issues we are hoping to get advice on from people on the
mailing list.
Until now, extracting a card, either physically (e.g. an eGPU on a Thunderbolt
connection) or by emulation through sysfs
(/sys/bus/pci/devices/device_id/remove), would cause random crashes in user
apps. These crashes were mostly due to an app that had mapped a device-backed
BO into its address space still trying to access the BO while the backing
device was gone.
To address this first problem, Christian suggested fixing the handling of
mapped memory in the clients when the device goes away: forcibly unmap all
buffers the user processes hold by clearing the respective VMAs mapping the
device BOs. Then, when a VMA tries to fill in the page tables again, we check
in the fault handler whether the device has been removed and, if so, return an
error. This generates a SIGBUS to the application, which can then cleanly
terminate.
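Roughly, the two halves look like the sketch below. This is illustrative, not
the exact code in the patches: drm_dev_is_unplugged(), unmap_mapping_range()
and ttm_bo_vm_fault() are the real helpers, while unmap_all_device_mappings()
and bo_to_drm_dev() are stand-in names for the driver's own plumbing.

/* Removal side: zap every user VMA that maps BOs of this device,
 * so the next CPU access goes through the fault handler. */
static void unmap_all_device_mappings(struct ttm_bo_device *bdev)
{
        unmap_mapping_range(bdev->dev_mapping, 0, 0, 1);
}

/* Fault side: refuse to refill the page tables once the device is
 * gone; the resulting SIGBUS lets user space terminate cleanly. */
static vm_fault_t bo_vm_fault(struct vm_fault *vmf)
{
        struct ttm_buffer_object *bo = vmf->vma->vm_private_data;

        if (drm_dev_is_unplugged(bo_to_drm_dev(bo)))
                return VM_FAULT_SIGBUS;

        return ttm_bo_vm_fault(vmf);    /* normal TTM fault path */
}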
This was indeed done, but it in turn created a problem of kernel OOPSes: while
the app was terminating because of the SIGBUS, it would trigger a
use-after-free in the driver by accessing device structures that had already
been released by the PCI remove sequence.
We handled this by introducing a 'flush' sequence during device removal, where
we wait for the drm file reference count to drop to 0, meaning all user
clients directly using this device have terminated.
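A minimal sketch of that flush sequence, assuming a hypothetical per-device
counter (open_count, dropped in the driver's postclose hook) and wait queue
(fini_wait); drm_dev_unplug() is the real DRM helper, the rest of the names
are illustrative:

static void my_pci_remove(struct pci_dev *pdev)
{
        struct drm_device *ddev = pci_get_drvdata(pdev);
        struct my_device *mdev = ddev_to_my_device(ddev);

        /* Mark the device as gone: further ioctls fail with
         * -ENODEV and fault handlers see drm_dev_is_unplugged(). */
        drm_dev_unplug(ddev);

        /* The 'flush': block until every client that opened the
         * drm file has closed it, so no user-triggered path can
         * reach device structures after we free them below. */
        wait_event(mdev->fini_wait,
                   atomic_read(&mdev->open_count) == 0);

        my_device_fini(mdev);   /* actual teardown */
}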
With this, I was able to cleanly emulate device unplug with X and glxgears
running, and later emulate plugging the device back in and restarting X and
glxgears.
But this covers only part of the picture. As I see it, the full set of use
cases, and the questions each one raises, is as follows.
1) Application accesses a BO by opening the drm file
    1.1) BO is mapped into the application's address space (BO is CPU
visible) - this one we have a solution for: invalidate the BO's CPU mapping,
causing a SIGBUS (see the fault handler sketch above), and wait for the drm
file refcount to drop to 0 before releasing the device.
    1.2) BO is not mapped into the application's address space (BO is CPU
invisible) - no solution yet, because how do we force the application to
terminate in this case?
2) Application accesses a BO by importing a DMA-BUF
    2.1) BO is mapped into the application's address space (BO is CPU
visible) - the solution is the same as 1.1, but instead of waiting for the drm
file release we wait for the imported dma-buf's file release (see the first
sketch after this list).
    2.2) BO is not mapped into the application's address space (BO is CPU
invisible) - our solution is to invalidate the GPUVM page tables and destroy
the backing storage for all exported BOs, which in turn causes VM faults in
the importing device; then, when the importing driver tries to re-attach the
imported BO to update its mappings, we return -ENODEV from the import hook,
which hopefully causes the user app to terminate (see the second sketch after
this list).
3) Application opens a drm file or imports a dma-buf and holds a reference,
but never accesses any BO, or does but never again after the device was
unplugged - how would we force this application to terminate before
proceeding with the device removal code? Otherwise the wait in pci remove
just hangs forever.
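For case 2.1, a sketch of how the exporter could learn when the last importer
drops the dma-buf's file, assuming a hypothetical per-device counter of live
exports (exported_bo_count/export_wait); drm_gem_dmabuf_release() is DRM's
stock PRIME release helper:

static void my_dma_buf_release(struct dma_buf *dma_buf)
{
        struct drm_gem_object *obj = dma_buf->priv;
        struct my_device *mdev = ddev_to_my_device(obj->dev);

        /* Default DRM PRIME release: drops the GEM object ref. */
        drm_gem_dmabuf_release(dma_buf);

        /* Last importer closed the dma-buf file: wake the remove
         * path waiting for all exports of this device to go away. */
        if (atomic_dec_and_test(&mdev->exported_bo_count))
                wake_up(&mdev->export_wait);
}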
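For case 2.2, a sketch of the exporter-side dma_buf_ops.map_dma_buf hook
rejecting re-attachment after unplug; drm_dev_is_unplugged() and
drm_gem_map_dma_buf() are the real helpers, my_dma_buf_map() is an
illustrative name:

static struct sg_table *
my_dma_buf_map(struct dma_buf_attachment *attach,
               enum dma_data_direction dir)
{
        struct drm_gem_object *obj = attach->dmabuf->priv;

        /* The exporting device is gone and its backing storage was
         * destroyed: refuse to rebuild the sg table so the importer
         * gets -ENODEV and can bail out. */
        if (drm_dev_is_unplugged(obj->dev))
                return ERR_PTR(-ENODEV);

        return drm_gem_map_dma_buf(attach, dir);        /* normal path */
}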
The attached patches address 1.1, 2.1 and 2.2. For now only 1.1 is fully
tested and I am still testing the others, but I would be happy for any advice
on all the described use cases, and maybe an alternative and better (more
generic) approach, like perhaps obtaining the PIDs of the relevant processes
through some reverse mapping from the device file and the exported dma-buf
files and sending them SIGKILL - would this make more sense, or is there any
other method?
Patches 1-3 address 1.1
Patch 4 addresses 2.1
Patches 5-6 address 2.2
Reference: https://gitlab.freedesktop.org/drm/amd/-/issues/1081
Andrey Grodzovsky (6):
drm/ttm: Add unmapping of the entire device address space
drm/amdgpu: Force unmap all user VMAs on device removal.
drm/amdgpu: Wait for all user clients
drm/amdgpu: Wait for all clients importing out dma-bufs.
drm/ttm: Add destroy flag in TTM BO eviction interface
drm/amdgpu: Use TTM MMs destroy interface
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++-
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++-
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++--
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 +
drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +-
drivers/gpu/drm/qxl/qxl_object.c | 4 +-
drivers/gpu/drm/radeon/radeon_object.c | 2 +-
drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++--------
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +--
include/drm/ttm/ttm_bo_api.h | 2 +-
include/drm/ttm/ttm_bo_driver.h | 2 +
16 files changed, 139 insertions(+), 34 deletions(-)
--
2.7.4