We should always be explicit and reserve a fence slot before adding a
new fence.
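
For reference, a minimal sketch of the intended pattern (assuming the
object lock is already held and rq is a valid request):

	err = dma_resv_reserve_fences(obj->base.resv, 1);
	if (!err)
		dma_resv_add_fence(obj->base.resv, &rq->fence,
				   DMA_RESV_USAGE_KERNEL);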

Signed-off-by: Matthew Auld <[email protected]>
Cc: Thomas Hellström <[email protected]>
Cc: Lionel Landwerlin <[email protected]>
Cc: Tvrtko Ursulin <[email protected]>
Cc: Jon Bloomfield <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Jordan Justen <[email protected]>
Cc: Kenneth Graunke <[email protected]>
Cc: Akeem G Abodunrin <[email protected]>
Reviewed-by: Thomas Hellström <[email protected]>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 388c85b0f764..da28acb78a88 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -1224,8 +1224,10 @@ static int __igt_mmap_migrate(struct intel_memory_region **placements,
                                          expand32(POISON_INUSE), &rq);
        i915_gem_object_unpin_pages(obj);
        if (rq) {
-               dma_resv_add_fence(obj->base.resv, &rq->fence,
-                                  DMA_RESV_USAGE_KERNEL);
+               err = dma_resv_reserve_fences(obj->base.resv, 1);
+               if (!err)
+                       dma_resv_add_fence(obj->base.resv, &rq->fence,
+                                          DMA_RESV_USAGE_KERNEL);
                i915_request_put(rq);
        }
        i915_gem_object_unlock(obj);
-- 
2.36.1
