On Thu, Nov 20, 2025 at 12:33:12PM -0800, Umesh Nerlige Ramappa wrote:
> On Wed, Nov 19, 2025 at 02:41:06PM -0800, Matthew Brost wrote:
> > We now have proper infrastructure to accurately check the LRC timestamp
> > without toggling the scheduling state for non-VFs. For VFs, it is still
> > possible to get an inaccurate view if the context is on hardware. We
> > guard against free-running contexts on VFs by banning jobs whose
> > timestamps are not moving. In addition, VFs have a timeslice quantum
> > that naturally triggers context switches when more than one VF is
> > running, thus updating the LRC timestamp.
> > 
> > For multi-queue, it is desirable to avoid toggling the scheduling state in
> > the TDR because that state is shared among many queues. Furthermore, this
> > change simplifies the GuC state machine. The trade-off for VF cases
> > seems worthwhile.
> > 
> > Signed-off-by: Matthew Brost <[email protected]>
> > ---
> > drivers/gpu/drm/xe/xe_guc_submit.c      | 100 ++++++------------------
> > drivers/gpu/drm/xe/xe_sched_job.c       |   1 +
> > drivers/gpu/drm/xe/xe_sched_job_types.h |   2 +
> > 3 files changed, 28 insertions(+), 75 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > index 1f2afad1766e..7404716e979f 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > @@ -68,9 +68,7 @@ exec_queue_to_guc(struct xe_exec_queue *q)
> > #define EXEC_QUEUE_STATE_KILLED                     (1 << 7)
> > #define EXEC_QUEUE_STATE_WEDGED                     (1 << 8)
> > #define EXEC_QUEUE_STATE_BANNED                     (1 << 9)
> > -#define EXEC_QUEUE_STATE_CHECK_TIMEOUT             (1 << 10)
> > -#define EXEC_QUEUE_STATE_PENDING_RESUME            (1 << 11)
> > -#define EXEC_QUEUE_STATE_PENDING_TDR_EXIT  (1 << 12)
> > +#define EXEC_QUEUE_STATE_PENDING_RESUME            (1 << 10)
> > 
> 
> ... snip ...
> 
> > static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
> > {
> >     return (atomic_read(&q->guc->state) &
> > @@ -996,7 +964,7 @@ static bool check_timeout(struct xe_exec_queue *q, struct xe_sched_job *job)
> >     u32 ctx_timestamp, ctx_job_timestamp;
> >     u32 timeout_ms = q->sched_props.job_timeout_ms;
> >     u32 diff;
> > -   u64 running_time_ms;
> > +   u64 running_time_ms, old_timestamp;
> > 
> >     if (!xe_sched_job_started(job)) {
> >             xe_gt_warn(gt, "Check job timeout: seqno=%u, lrc_seqno=%u, guc_id=%d, not started",
> > @@ -1006,7 +974,17 @@ static bool check_timeout(struct xe_exec_queue *q, struct xe_sched_job *job)
> >             return xe_sched_invalidate_job(job, 2);
> >     }
> > 
> > -   ctx_timestamp = lower_32_bits(xe_lrc_ctx_timestamp(q->lrc[0]));
> > +   ctx_timestamp = lower_32_bits(xe_lrc_update_timestamp(q->lrc[0],
> > +                                                         &old_timestamp));
> 
> Regarding xe_lrc_update_timestamp():
> 
> Context utilization uses this helper to accumulate the 'new - old' delta
> each time the function is called. In the example below, context utilization
> will lose some ticks.
> 
> Example:
> 
> 1. This code calls xe_lrc_update_timestamp() to sample the timestamp for TDR
> purposes. Say the context/job is running; then lrc->ctx_timestamp is updated
> (moved forward).
> 
> 2. The context utilization code calls xe_lrc_update_timestamp(). Within this
> helper:
> - old_ts is sampled as lrc->ctx_timestamp
> - new_ts is calculated based on whether the job/context is active
> - lrc->ctx_timestamp is updated to the new value.
> 
> The result is that we lose one chunk of utilization because of the previous
> call from the TDR path. I think some refactor would be needed to fix that.
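> 
> Concretely, with made-up tick numbers:
> 
>   lrc->ctx_timestamp starts at 100.
> 
>   TDR path:          xe_lrc_update_timestamp() returns new_ts = 150 and sets
>                      lrc->ctx_timestamp = 150; that 100 -> 150 delta is not
>                      accumulated anywhere.
> 
>   Utilization path:  old_ts = 150, new_ts = 170,
>                      run_ticks += (170 - 150) = 20.
> 
> The 50 ticks consumed between the two samples never show up in run_ticks.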
> 
> The other comment, which you already mentioned offline, is locking: I think
> we should add a lock to protect lrc->ctx_timestamp. I don't know if a
> refactor will avoid the need for the lock though.
> 

I agree with your analysis here - thanks for the help.

How about we extract the following code from
xe_exec_queue_update_run_ticks() into a helper that also returns the current
timestamp and is protected by a queue spin lock:

         new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
         q->xef->run_ticks[q->class] += (new_ts - old_ts) * q->width;
 
It's harmless if the TDR also updates run_ticks when it samples the LRC
timestamp, right? The helper would just skip the run_ticks update when q->xef
is NULL.
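
Very rough sketch of what I have in mind (the helper name and the lock field
are placeholders here, nothing that exists today):

         static u64 xe_exec_queue_update_run_ticks_locked(struct xe_exec_queue *q)
         {
                 struct xe_lrc *lrc = q->lrc[0];
                 u64 old_ts, new_ts;

                 /* placeholder lock protecting lrc->ctx_timestamp updates */
                 spin_lock(&q->run_ticks_lock);

                 new_ts = xe_lrc_update_timestamp(lrc, &old_ts);

                 /* kernel queues have no xe_file, so nothing to account */
                 if (q->xef)
                         q->xef->run_ticks[q->class] += (new_ts - old_ts) * q->width;

                 spin_unlock(&q->run_ticks_lock);

                 return new_ts;
         }

Both xe_exec_queue_update_run_ticks() and the TDR would then go through this
helper, so every 'new_ts - old_ts' delta gets accounted exactly once.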

Matt

> Thanks,
> Umesh
> 
> > +   if (ctx_timestamp == job->sample_timestamp) {
> > +           xe_gt_warn(gt, "Check job timeout: seqno=%u, lrc_seqno=%u, guc_id=%d, timestamp stuck",
> > +                      xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
> > +                      q->guc->id);
> > +
> > +           return xe_sched_invalidate_job(job, 2);
> > +   }
> > +
> > +   job->sample_timestamp = ctx_timestamp;
> >     ctx_job_timestamp = xe_lrc_ctx_job_timestamp(q->lrc[0]);
> > 
> >     /*
> > @@ -1135,16 +1113,17 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> >     }
> > 
> 
> 
> ... snip ...
> 
> > diff --git a/drivers/gpu/drm/xe/xe_sched_job_types.h b/drivers/gpu/drm/xe/xe_sched_job_types.h
> > index d26612abb4ca..ad5eee8a8cdb 100644
> > --- a/drivers/gpu/drm/xe/xe_sched_job_types.h
> > +++ b/drivers/gpu/drm/xe/xe_sched_job_types.h
> > @@ -59,6 +59,8 @@ struct xe_sched_job {
> >     u32 lrc_seqno;
> >     /** @migrate_flush_flags: Additional flush flags for migration jobs */
> >     u32 migrate_flush_flags;
> > +   /** @sample_timestamp: Sampling of job timestamp in TDR */
> > +   u64 sample_timestamp;
> >     /** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */
> >     bool ring_ops_flush_tlb;
> >     /** @ggtt: mapped in ggtt. */
> > -- 
> > 2.34.1
> > 
