On Mon, 2025-12-01 at 10:39 -0800, Matthew Brost wrote:
> Use new pending job list iterator and new helper functions in Xe to
> avoid reaching into DRM scheduler internals.

Cool.

Obviously this is your driver, but here are some comments below that you
might want to take into account.

> 
> Part of this change involves removing pending jobs debug information
> from debugfs and devcoredump. As agreed, the pending job list should
> only be accessed when the scheduler is stopped. However, it's not
> straightforward to determine whether the scheduler is stopped from the
> shared debugfs/devcoredump code path. Additionally, the pending job list
> provides little useful information, as pending jobs can be inferred from
> seqnos and ring head/tail positions. Therefore, this debug information
> is being removed.

This reads a bit like a contradiction of the first sentence.

> 
> v4:
>  - Add comment around DRM_GPU_SCHED_STAT_NO_HANG (Niranjana)

Revision info for just one of 7 revisions?

> 
> Signed-off-by: Matthew Brost <[email protected]>
> Reviewed-by: Niranjana Vishwanathapura <[email protected]>
> ---
>  drivers/gpu/drm/xe/xe_gpu_scheduler.c    |  4 +-
>  drivers/gpu/drm/xe/xe_gpu_scheduler.h    | 33 ++--------
>  drivers/gpu/drm/xe/xe_guc_submit.c       | 81 ++++++------------------
>  drivers/gpu/drm/xe/xe_guc_submit_types.h | 11 ----
>  drivers/gpu/drm/xe/xe_hw_fence.c         | 16 -----
>  drivers/gpu/drm/xe/xe_hw_fence.h         |  2 -
>  6 files changed, 27 insertions(+), 120 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> index f4f23317191f..9c8004d5dd91 100644
> --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> @@ -7,7 +7,7 @@
>  
>  static void xe_sched_process_msg_queue(struct xe_gpu_scheduler *sched)
>  {
> -     if (!READ_ONCE(sched->base.pause_submit))
> +     if (!drm_sched_is_stopped(&sched->base))
>               queue_work(sched->base.submit_wq, &sched->work_process_msg);

Sharing the submit_wq is legal. But it would be even cleaner if struct
drm_gpu_scheduler's internal components weren't touched at all. That's
kind of a luxury request, though.
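
Just to illustrate what I mean (rough, untested sketch; the msg_wq
member is made up): if xe kept its own reference to the workqueue it
hands to the scheduler, xe_sched_process_msg_queue() wouldn't have to
look at sched->base.submit_wq at all:

	struct xe_gpu_scheduler {
		struct drm_gpu_scheduler base;
		/* hypothetical: driver-owned reference to the same wq
		 * that was passed to the scheduler at init time */
		struct workqueue_struct *msg_wq;
		struct work_struct work_process_msg;
		/* ... */
	};

	static void xe_sched_process_msg_queue(struct xe_gpu_scheduler *sched)
	{
		if (!drm_sched_is_stopped(&sched->base))
			queue_work(sched->msg_wq, &sched->work_process_msg);
	}

Again, not a blocker, just where I'd like drivers to end up eventually.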

>  }
>  
> @@ -43,7 +43,7 @@ static void xe_sched_process_msg_work(struct work_struct *w)
>               container_of(w, struct xe_gpu_scheduler, work_process_msg);
>       struct xe_sched_msg *msg;
>  
> -     if (READ_ONCE(sched->base.pause_submit))
> +     if (drm_sched_is_stopped(&sched->base))
>               return;
>  
>       msg = xe_sched_get_msg(sched);
> diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> index dceb2cd0ee5b..664c2db56af3 100644
> --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> @@ -56,12 +56,9 @@ static inline void xe_sched_resubmit_jobs(struct xe_gpu_scheduler *sched)
>       struct drm_sched_job *s_job;
>       bool restore_replay = false;
>  
> -     list_for_each_entry(s_job, &sched->base.pending_list, list) {
> -             struct drm_sched_fence *s_fence = s_job->s_fence;
> -             struct dma_fence *hw_fence = s_fence->parent;
> -
> +     drm_sched_for_each_pending_job(s_job, &sched->base, NULL) {
>               restore_replay |= to_xe_sched_job(s_job)->restore_replay;
> -             if (restore_replay || (hw_fence && !dma_fence_is_signaled(hw_fence)))
> +             if (restore_replay || !drm_sched_job_is_signaled(s_job))

So that's where this function is needed. You check whether that job in
the pending_list is signaled. 

>                       sched->base.ops->run_job(s_job);

Aaaaaahm. So you invoke your own callback. But basically just to access
the function pointer, I suppose?

Since this is effectively your drm_sched_resubmit_jobs(), it is
definitely desirable to provide a textbook example of how to do resets,
so that others can follow your usage.

Can't you replace the ops->run_job() call with a direct call to your
function that pushes the jobs to the ring?
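
I.e. something roughly along these lines (untested sketch;
guc_exec_queue_run_job() is what I believe your .run_job implementation
in xe_guc_submit.c is called, and it's static there today, so it would
either need to be made visible here or the resubmit loop would have to
move into xe_guc_submit.c):

	drm_sched_for_each_pending_job(s_job, &sched->base, NULL) {
		restore_replay |= to_xe_sched_job(s_job)->restore_replay;
		if (restore_replay || !drm_sched_job_is_signaled(s_job))
			/* push the job to the ring directly, no detour
			 * through the ops vtable */
			guc_exec_queue_run_job(s_job);
	}

That would make it obvious to readers that this is a driver-internal
resubmission, not the scheduler invoking the callback.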


P.
