It turns out the timeout can still be enabled when we reach that point,
because the asynchronous progress check done on queues re-arms the
timer when jobs are still in-flight but progress has been made.
We could add more checks to make sure the timer is not re-enabled once
a group can no longer run, but in some contexts we don't have a group
to pass to queue_check_job_completion().
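
A minimal sketch of the race (hypothetical helper names and constants,
not the actual panthor code): the progress check runs asynchronously
from a workqueue and can re-arm the timeout after the caller believed
it was quiesced, which is why the WARN_ON() removed below could fire
spuriously.

	/*
	 * Simplified illustration of the asynchronous progress check.
	 * jobs_in_flight(), progress_was_made() and QUEUE_TIMEOUT_MS are
	 * placeholders, not real panthor symbols.
	 */
	static void progress_check_sketch(struct panthor_queue *queue)
	{
		if (jobs_in_flight(queue) && progress_was_made(queue)) {
			/* Re-arms queue->timeout.work, racing with teardown. */
			mod_delayed_work(system_wq, &queue->timeout.work,
					 msecs_to_jiffies(QUEUE_TIMEOUT_MS));
		}
	}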

It's just as safe (all we need is a guarantee that the timer is stopped
before the queue is destroyed) and simpler to drop the WARN_ON() in
group_free_queue().

v2:
- Collect R-bs

Signed-off-by: Boris Brezillon <[email protected]>
Reviewed-by: Liviu Dudau <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
---
 drivers/gpu/drm/panthor/panthor_sched.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index 389d508b3848..203f6a0a6b9a 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -893,9 +893,8 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
        if (IS_ERR_OR_NULL(queue))
                return;
 
-       /* This should have been disabled before that point. */
-       drm_WARN_ON(&group->ptdev->base,
-                   disable_delayed_work_sync(&queue->timeout.work));
+       /* Disable the timeout before tearing down drm_sched components. */
+       disable_delayed_work_sync(&queue->timeout.work);
 
        if (queue->entity.fence_context)
                drm_sched_entity_destroy(&queue->entity);
-- 
2.51.1
