On 06.09.24 at 20:06, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin <[email protected]>

An entity's run queue can change during drm_sched_entity_push_job(), so make
sure to update the score consistently.

Signed-off-by: Tvrtko Ursulin <[email protected]>
Fixes: d41a39dda140 ("drm/scheduler: improve job distribution with multiple queues")

Good catch, that might explain some of the odd behavior we have seen for load balancing.
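To spell out the failure mode as I read it: the score was incremented through an unlocked read of entity->rq, while the entity is added to whatever entity->rq points at once the rq_lock is held. If the run queue gets re-selected in between, the increment is charged to one scheduler while the entity (and, as far as I can see, the matching decrement) ends up on another, so the per-scheduler scores slowly drift apart. Below is a small userspace model of the two orderings; the names are made up for illustration and this is not kernel code (build with cc -pthread toy_score_race.c):

/*
 * toy_score_race.c - hypothetical userspace model of the ordering fixed by
 * this patch.  Two "schedulers" keep a load score and an entity's run-queue
 * pointer can be re-targeted concurrently.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct toy_sched { atomic_int score; const char *name; };
struct toy_rq    { struct toy_sched *sched; };

struct toy_entity {
        struct toy_rq  *rq;       /* may be re-pointed by load balancing */
        pthread_mutex_t rq_lock;
};

/* Post-patch ordering: snapshot rq under the lock, then bump that sched. */
static void push_job_fixed(struct toy_entity *e)
{
        pthread_mutex_lock(&e->rq_lock);
        struct toy_rq *rq = e->rq;              /* same rq the entity joins */
        atomic_fetch_add(&rq->sched->score, 1); /* inc and add stay paired  */
        /* ... entity would be added to rq here ... */
        pthread_mutex_unlock(&e->rq_lock);
}

int main(void)
{
        struct toy_sched a = { .name = "sched-a" }, b = { .name = "sched-b" };
        struct toy_rq rq_a = { &a }, rq_b = { &b };
        struct toy_entity e = { .rq = &rq_a,
                                .rq_lock = PTHREAD_MUTEX_INITIALIZER };

        /* Pre-patch ordering with the race window hit: the scheduler behind
         * the *old* rq is charged ... */
        atomic_fetch_add(&e.rq->sched->score, 1);
        /* ... the run queue is re-selected concurrently ... */
        e.rq = &rq_b;
        /* ... and the entity then lands on rq_b, whose scheduler never saw
         * the increment, so the two scores now disagree. */

        /* The fixed ordering always charges the scheduler the entity joins. */
        push_job_fixed(&e);

        printf("%s score=%d, %s score=%d (racy inc went to %s, entity is on %s)\n",
               a.name, atomic_load(&a.score),
               b.name, atomic_load(&b.score),
               a.name, b.name);
        return 0;
}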

Reviewed-by: Christian König <[email protected]>

Cc: Nirmoy Das <[email protected]>
Cc: Christian König <[email protected]>
Cc: Luben Tuikov <[email protected]>
Cc: Matthew Brost <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: [email protected]
Cc: <[email protected]> # v5.9+
---
  drivers/gpu/drm/scheduler/sched_entity.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 62b07ef7630a..2a910c1df072 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -586,7 +586,6 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
        ktime_t submit_ts;
 
        trace_drm_sched_job(sched_job, entity);
-       atomic_inc(entity->rq->sched->score);
        WRITE_ONCE(entity->last_user, current->group_leader);
 
        /*
@@ -612,6 +611,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
                rq = entity->rq;
+               atomic_inc(rq->sched->score);
                drm_sched_rq_add_entity(rq, entity);
                spin_unlock(&entity->rq_lock);
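
For reference, this is roughly how the score feeds the balancing decision (a simplified sketch from memory, not a verbatim copy of drm_sched_pick_best(), which also skips schedulers that are not ready): the scheduler with the lowest score is preferred, so a stray increment makes one scheduler look permanently busier than it really is.

/* Hypothetical helper name; simplified sketch of the selection logic. */
#include <linux/atomic.h>
#include <linux/limits.h>
#include <drm/gpu_scheduler.h>

static struct drm_gpu_scheduler *
pick_least_loaded(struct drm_gpu_scheduler **sched_list, unsigned int n)
{
        struct drm_gpu_scheduler *best = NULL;
        unsigned int i, score, min_score = UINT_MAX;

        for (i = 0; i < n; i++) {
                /* sched->score is a pointer to the (possibly shared) counter */
                score = atomic_read(sched_list[i]->score);
                if (score < min_score) {
                        min_score = score;
                        best = sched_list[i];
                }
        }

        return best;
}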
