Be pessimistic and presume that we actually allocate every page we
exercise via the mock_gtt (e.g. for gvt), in which case we have to keep
our working set under the available physical memory to prevent an oom.

Signed-off-by: Chris Wilson <[email protected]>
Cc: Matthew Auld <[email protected]>
---
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 600a3bcbd3d6..8e2e269db97e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -1244,6 +1244,7 @@ static int exercise_mock(struct drm_i915_private *i915,
                                     u64 hole_start, u64 hole_end,
                                     unsigned long end_time))
 {
+       const u64 limit = totalram_pages << PAGE_SHIFT;
        struct i915_gem_context *ctx;
        struct i915_hw_ppgtt *ppgtt;
        IGT_TIMEOUT(end_time);
@@ -1256,7 +1257,7 @@ static int exercise_mock(struct drm_i915_private *i915,
        ppgtt = ctx->ppgtt;
        GEM_BUG_ON(!ppgtt);
 
-       err = func(i915, &ppgtt->vm, 0, ppgtt->vm.total, end_time);
+       err = func(i915, &ppgtt->vm, 0, min(ppgtt->vm.total, limit), end_time);
 
        mock_context_close(ctx);
        return err;
-- 
2.18.0

_______________________________________________
Intel-gfx mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
