Hmm, we should remove the PCIe portion of this change. We recently added some extra checks on amd-kfd-staging that should make it closer to upstreamable. For now, just handle the XGMI case, but return -EINVAL in the else-branch (for the other remote-VRAM cases).
Regards,
  Felix

From: Russell, Kent
Sent: Thursday, November 15, 2018 1:04 PM
To: Kuehling, Felix <[email protected]>; [email protected]
Cc: Liu, Shaoyun <[email protected]>
Subject: Re: [PATCH] drm/amdgpu : Use XGMI mapping when devices on the same hive v2

It was merged to 4.19 on Sept 21. It got missed on the 4.20 rebase.

Kent

KENT RUSSELL
Sr. Software Engineer | Linux Compute Kernel
1 Commerce Valley Drive East
Markham, ON L3T 7X6
O +(1) 289-695-2122 | Ext 72122

________________________________
From: Kuehling, Felix
Sent: Thursday, November 15, 2018 12:57:44 PM
To: Russell, Kent; [email protected]
Cc: Russell, Kent; Liu, Shaoyun
Subject: RE: [PATCH] drm/amdgpu : Use XGMI mapping when devices on the same hive v2

This change is not suitable for amd-staging-drm-next. PCIe P2P was not enabled on amd-staging-drm-next because it's not reliable yet. This change enables it even in situations that are not safe (including small-BAR systems).

Why are you porting this change to amd-staging-drm-next? Does anyone depend on XGMI support on this branch?
Regards,
  Felix

-----Original Message-----
From: amd-gfx <[email protected]> On Behalf Of Russell, Kent
Sent: Thursday, November 15, 2018 11:54 AM
To: [email protected]
Cc: Russell, Kent <[email protected]>; Liu, Shaoyun <[email protected]>
Subject: [PATCH] drm/amdgpu : Use XGMI mapping when devices on the same hive v2

From: Shaoyun Liu <[email protected]>

VM mapping will only fall back to P2P if XGMI mapping is not available.

V2: Rebase onto 4.20

Change-Id: I7a854ab3d5c9958bd45d4fe439ea7e370a092e7a
Signed-off-by: Shaoyun Liu <[email protected]>
Reviewed-by: Felix Kuehling <[email protected]>
Reviewed-by: Huang Rui <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Kent Russell <[email protected]>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index dad0e23..576d168 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2011,6 +2011,8 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
 	struct drm_mm_node *nodes;
 	struct dma_fence *exclusive, **last_update;
 	uint64_t flags;
+	uint64_t vram_base_offset = adev->vm_manager.vram_base_offset;
+	struct amdgpu_device *bo_adev;
 	int r;

 	if (clear || !bo) {
@@ -2029,9 +2031,19 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
 		exclusive = reservation_object_get_excl(bo->tbo.resv);
 	}

-	if (bo)
+	if (bo) {
 		flags = amdgpu_ttm_tt_pte_flags(adev, bo->tbo.ttm, mem);
-	else
+		bo_adev = amdgpu_ttm_adev(bo->tbo.bdev);
+		if (mem && mem->mem_type == TTM_PL_VRAM && adev != bo_adev) {
+			if (adev->gmc.xgmi.hive_id &&
+			    adev->gmc.xgmi.hive_id == bo_adev->gmc.xgmi.hive_id) {
+				vram_base_offset = bo_adev->vm_manager.vram_base_offset;
+			} else {
+				flags |= AMDGPU_PTE_SYSTEM;
+				vram_base_offset = bo_adev->gmc.aper_base;
+			}
+		}
+	} else
 		flags = 0x0;

 	if (clear || (bo && bo->tbo.resv == vm->root.base.bo->tbo.resv))
--
2.7.4
_______________________________________________
amd-gfx mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
