On 3/12/2024 3:55 PM, Jonathan Cavitt wrote:
We cannot write requests to objects without struct pages, so escape
early if the requests are bound to objects that lack them.

Signed-off-by: Jonathan Cavitt <[email protected]>
---

v2: s/vma-obj/vma->obj

  drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 7 +++++++
  1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index d3a771afb083e..adb4f9e78cb49 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -3313,6 +3313,13 @@ eb_requests_create(struct i915_execbuffer *eb, struct dma_fence *in_fence,
        unsigned int i;
        for_each_batch_create_order(eb, i) {
+               /* Do not write requests to objects without struct pages. */
+               if (eb->batches[i]->vma &&
+                   !i915_gem_object_has_struct_page(eb->batches[i]->vma->obj)) {

As far as I understand, the motivation of this patch is to avoid doing execbuf on a dmabuf-imported BO which is in an error state or something similar. i915_gem_object_has_struct_page() checks "obj->mem_flags & I915_BO_FLAG_STRUCT_PAGE", which is very i915-specific.
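
For reference, a minimal sketch of what that check boils down to, assuming i915's GEM object definitions; the function name below is only illustrative and the real helper may carry extra assertions:

    /* Rough sketch of the check under discussion: true only when the
     * object's backing memory has struct page entries.  Per the concern
     * above, this can also be false for valid BOs, not just for the
     * broken dmabuf-import case the patch is targeting. */
    static inline bool obj_has_struct_page_sketch(const struct drm_i915_gem_object *obj)
    {
            return obj->mem_flags & I915_BO_FLAG_STRUCT_PAGE;
    }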

So I think this will not work and will cause a regression in existing programs that are trying to do the same with a valid BO. Unfortunately, I don't have any idea how to better detect that at the moment.


Regards,

Nirmoy

+                       out_fence = ERR_PTR(-EINVAL);
+                       return out_fence;
+               }
+
                /* Allocate a request for this batch buffer nice and early. */
                eb->requests[i] = i915_request_create(eb_find_context(eb, i));
                if (IS_ERR(eb->requests[i])) {
