On 6/29/25 4:03 PM, Rob Clark wrote:

@@ -1121,6 +1124,20 @@ vm_bind_prealloc_count(struct msm_vm_bind_job *job)
/* Flush the remaining range: */
        prealloc_count(job, first, last);
+
+       /*
+        * Now that we know the needed amount to pre-alloc, throttle on pending
+        * VM_BIND jobs if we already have too much pre-alloc memory in flight
+        */
+       ret = wait_event_interruptible(
+                       to_msm_vm(job->vm)->sched.job_scheduled,

Ick! Please don't peek internal fields of the GPU scheduler (or any other API).
If you solve this within your driver for now, please use your own waitqueue
instead.
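
For reference, a driver-local variant could look roughly like the sketch below. This is only an illustration, not a drop-in patch: the `prealloc_throttle` waitqueue, the `vm_bind_throttle()`/`vm_bind_prealloc_put()` helpers, and the placement of the fields in `msm_gpu_submitqueue` are all hypothetical names, and the wake-up side would need to be wired into the actual job-completion path:

```c
/*
 * Hypothetical sketch: throttle on a driver-owned waitqueue instead of
 * peeking at the scheduler's internal sched.job_scheduled waitqueue.
 * All names below are illustrative, not taken from the posted patch.
 */

struct msm_gpu_submitqueue {
	/* ... existing fields ... */
	atomic_t in_flight_prealloc;
	wait_queue_head_t prealloc_throttle;	/* driver-owned waitqueue */
};

static int vm_bind_throttle(struct msm_vm_bind_job *job)
{
	struct msm_gpu_submitqueue *queue = job->queue;
	int ret;

	/* Sleep until enough in-flight pre-alloc has been retired: */
	ret = wait_event_interruptible(queue->prealloc_throttle,
			atomic_read(&queue->in_flight_prealloc) <= 1024);
	if (ret)
		return ret;

	atomic_add(job->prealloc.count, &queue->in_flight_prealloc);

	return 0;
}

/*
 * Called from the driver's job-completion / prealloc-cleanup path:
 * release the pre-alloc budget and wake any throttled submitters.
 */
static void vm_bind_prealloc_put(struct msm_vm_bind_job *job)
{
	atomic_sub(job->prealloc.count, &job->queue->in_flight_prealloc);
	wake_up(&job->queue->prealloc_throttle);
}
```

(Plus an `init_waitqueue_head(&queue->prealloc_throttle)` at queue creation.) A generic drm_throttle component would essentially factor out this waitqueue-plus-counter pattern so each driver doesn't open-code it.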

However, I think it would be even better to move this to a new generic
drm_throttle component as discussed in previous patches. We can also do this
subsequently though.

+                       atomic_read(&job->queue->in_flight_prealloc) <= 1024);
+       if (ret)
+               return ret;
+
+       atomic_add(job->prealloc.count, &job->queue->in_flight_prealloc);
+
+       return 0;
  }