On 2020-07-23 at 5:00 a.m., Christian König wrote:
> We can't pipeline that during eviction because the memory needs
> to be available immediately.
>
> Signed-off-by: Christian König <[email protected]>

Looks good to me.

Reviewed-by: Felix Kuehling <[email protected]>


Alex, in this case the synchronous ttm_bo_wait would trigger the
eviction fence rather than a delayed delete.
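
For context, the mechanism is roughly the following (a simplified
sketch, not the actual amdkfd code, all names made up): the eviction
fence only signals after its enable_signaling callback has kicked off
eviction of the user queues, so a synchronous wait on the BO's fences
starts that work immediately instead of leaving it to a worker.

#include <linux/dma-fence.h>
#include <linux/workqueue.h>

/* Simplified stand-in for the KFD eviction fence. */
struct example_eviction_fence {
	struct dma_fence base;
	spinlock_t lock;
	struct work_struct evict_work;	/* quiesces the user queues */
};

static const char *example_fence_get_driver_name(struct dma_fence *f)
{
	return "example";
}

static const char *example_fence_get_timeline_name(struct dma_fence *f)
{
	return "example-eviction";
}

static bool example_fence_enable_signaling(struct dma_fence *f)
{
	struct example_eviction_fence *efence =
		container_of(f, struct example_eviction_fence, base);

	/*
	 * First waiter: start evicting the user queues now.  The fence is
	 * signaled from evict_work once the queues are off the hardware,
	 * which is what a synchronous ttm_bo_wait() ends up waiting for.
	 */
	schedule_work(&efence->evict_work);
	return true;
}

static const struct dma_fence_ops example_eviction_fence_ops = {
	.get_driver_name = example_fence_get_driver_name,
	.get_timeline_name = example_fence_get_timeline_name,
	.enable_signaling = example_fence_enable_signaling,
};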

Scheduling an eviction worker, as we currently do, would only add
unnecessary latency here. The best place to start the HMM migration to
system memory synchronously, and thereby minimize the wait time, may be
in amdgpu_evict_flags. That way all the SDMA copies to system memory
pages would already be in the pipe by the time we get to the ttm_bo_wait.
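
Roughly what I have in mind, as a sketch only; example_bo_is_hmm() and
example_hmm_start_migration() are made-up placeholders for whatever
helpers we would actually use:

#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_placement.h>

/* Hypothetical driver helpers, named just for this sketch. */
bool example_bo_is_hmm(struct ttm_buffer_object *bo);
void example_hmm_start_migration(struct ttm_buffer_object *bo);

static void example_evict_flags(struct ttm_buffer_object *bo,
				struct ttm_placement *placement)
{
	if (example_bo_is_hmm(bo)) {
		/*
		 * Queue the SDMA copies to system memory right away; they
		 * complete under the BO's reservation object fences.
		 */
		example_hmm_start_migration(bo);

		/*
		 * Return an empty placement so ttm_bo_evict() takes the new
		 * path from the patch: wait for the copies, then drop the
		 * TT, with no extra worker round trip in between.
		 */
		placement->num_placement = 0;
		placement->num_busy_placement = 0;
		return;
	}

	/* ... normal placement selection for other BOs ... */
}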

Regards,
  Felix


> ---
>  drivers/gpu/drm/ttm/ttm_bo.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index bc2230ecb7e3..122040056a07 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -651,8 +651,16 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo,
>       placement.num_busy_placement = 0;
>       bdev->driver->evict_flags(bo, &placement);
>  
> -     if (!placement.num_placement && !placement.num_busy_placement)
> -             return ttm_bo_pipeline_gutting(bo);
> +     if (!placement.num_placement && !placement.num_busy_placement) {
> +             ttm_bo_wait(bo, false, false);
> +
> +             ttm_tt_destroy(bo->ttm);
> +
> +             memset(&bo->mem, 0, sizeof(bo->mem));
> +             bo->mem.mem_type = TTM_PL_SYSTEM;
> +             bo->ttm = NULL;
> +             return 0;
> +     }
>  
>       evict_mem = bo->mem;
>       evict_mem.mm_node = NULL;
