Use system_unbound_wq instead of system_highpri_wq to avoid queueing the
reset jobs to the same CPU during an XGMI hive reset, because there is a
strict timeline for when the reset commands must reach all the GPUs in
the hive.
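For reference, a minimal sketch of the pattern this patch relies on (hypothetical, simplified driver code; the toy_dev struct and function names are made up and are not part of this patch). Work items placed on system_highpri_wq land in the per-CPU pool of the queueing CPU, whereas system_unbound_wq lets the scheduler run the handlers on any CPU, so one reset work item per device can execute in parallel:

    /*
     * Illustrative only -- not amdgpu code. One work item per device is
     * queued on the unbound workqueue so the handlers are not pinned to
     * the queueing CPU and the resets can run concurrently.
     */
    #include <linux/kernel.h>
    #include <linux/printk.h>
    #include <linux/workqueue.h>

    struct toy_dev {
            struct work_struct reset_work;
    };

    static void toy_reset_handler(struct work_struct *work)
    {
            struct toy_dev *dev = container_of(work, struct toy_dev, reset_work);

            /* the per-device reset would be issued here */
            pr_info("resetting device %p\n", dev);
    }

    static void toy_queue_resets(struct toy_dev *devs, int n)
    {
            int i;

            for (i = 0; i < n; i++) {
                    INIT_WORK(&devs[i].reset_work, toy_reset_handler);
                    /* unbound wq: workers may run on any CPU */
                    queue_work(system_unbound_wq, &devs[i].reset_work);
            }
    }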

Signed-off-by: Andrey Grodzovsky <[email protected]>
Reviewed-by: Le Ma <[email protected]>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 2ae944c..03b85b1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3837,7 +3837,7 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
                list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
                        /* For XGMI run all resets in parallel to speed up the process */
                        if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
-                               if (!queue_work(system_highpri_wq, &tmp_adev->xgmi_reset_work))
+                               if (!queue_work(system_unbound_wq, &tmp_adev->xgmi_reset_work))
                                        r = -EALREADY;
                        } else
                                r = amdgpu_asic_reset(tmp_adev);
-- 
2.7.4
