On Thu, Mar 05, 2026 at 09:27:11AM +0100, Boris Brezillon wrote:

I addressed most of your comments in a chained reply to Phillip, but I
guess parts of your email were dropped from that thread, so you missed
those. Responding below.

> Hi Matthew,
> 
> On Wed, 4 Mar 2026 18:04:25 -0800
> Matthew Brost <[email protected]> wrote:
> 
> > On Wed, Mar 04, 2026 at 02:51:39PM -0800, Chia-I Wu wrote:
> > > Hi,
> > > 
> > > Our system compositor (surfaceflinger on android) submits gpu jobs
> > > from a SCHED_FIFO thread to an RT gpu queue. However, because
> > > workqueue threads are SCHED_NORMAL, the scheduling latency from submit
> > > to run_job can sometimes cause frame misses. We are seeing this on
> > > panthor and xe, but the issue should be common to all drm_sched users.
> > >   
> > 
> > I'm going to assume that since this is a compositor, you do not pass
> > input dependencies to the page-flip job. Is that correct?
> > 
> > If so, I believe we could fairly easily build an opt-in DRM sched path
> > that directly calls run_job in the exec IOCTL context (I assume this is
> > SCHED_FIFO) if the job has no dependencies.
> 
> I guess by ::run_job() you mean something slightly more involved that
> checks if:
> 
> - other jobs are pending
> - enough credits (AKA ringbuf space) is available
> - and probably other stuff I forgot about
> 
> > 
> > This would likely break some of Xe’s submission-backend assumptions
> > around mutual exclusion and ordering based on the workqueue, but that
> > seems workable. I don’t know how the Panthor code is structured or
> > whether they have similar issues.
> 
> Honestly, I'm not thrilled by this fast-path/call-run_job-directly idea
> you're describing. There are just so many things we can forget that would
> lead to races/ordering issues that will end up being hard to trigger and
> debug. Besides, it doesn't solve the problem where your gfx pipeline is
> fully stuffed and the kernel has to dequeue things asynchronously. I do
> believe we want RT-prio support in that case too.
> 

My understanding of SurfaceFlinger is that it never waits on input
dependencies from rendering applications, since those may not signal in
time for a page flip. Because of that, the job(s) that draw to the
screen can't accept input dependencies. Maybe I have that wrong, but
I've spoken to the Google team several times about SurfaceFlinger
issues, and that was my takeaway.

So I don't think the kernel should ever have to dequeue things
asynchronously, at least for SurfaceFlinger. If there is another RT use
case that requires input dependencies plus asynchronous dequeuing in the
kernel, I agree this wouldn't help, but my suggestion isn't mutually
exclusive with other RT rework either.
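
Roughly what I have in mind for the fast path, as an untested sketch.
The helper name drm_sched_try_inline_submit() is made up, and I'm going
from memory on the drm_sched field names, so treat this as pseudocode
rather than a real patch:

```c
/*
 * Untested sketch. The driver would call this from its exec ioctl
 * after drm_sched_job_arm(), and fall back to
 * drm_sched_entity_push_job() whenever it returns false.
 */
static bool drm_sched_try_inline_submit(struct drm_sched_job *job)
{
	struct drm_gpu_scheduler *sched = job->sched;

	/* Any dependency at all forces the normal async path. */
	if (!xa_empty(&job->dependencies))
		return false;

	spin_lock(&sched->job_list_lock);
	/* Keep ordering: only go inline if nothing is queued ahead
	 * and we have enough credits (i.e. ringbuf space). */
	if (!list_empty(&sched->pending_list) ||
	    atomic_read(&sched->credit_count) + job->credits >
	    sched->credit_limit) {
		spin_unlock(&sched->job_list_lock);
		return false;
	}
	atomic_add(job->credits, &sched->credit_count);
	list_add_tail(&job->list, &sched->pending_list);
	spin_unlock(&sched->job_list_lock);

	/* run_job() now executes in the submitter's (RT) context. */
	return !IS_ERR_OR_NULL(sched->ops->run_job(job));
}
```

This glosses over hooking the returned hardware fence up to the
scheduler fence, which is part of why I said the submission-backend
assumptions would need auditing.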

> > 
> > I can try to hack together a quick PoC to see what this would look like
> > and give you something to test.
> > 
> > > Using a WQ_HIGHPRI workqueue helps, but it is still not RT (and won't
> > > meet future android requirements). It seems either workqueue needs to
> > > gain RT support, or drm_sched needs to support kthread_worker.  
> > 
> > +Tejun to see if RT workqueue is in the plans.
> 
> Dunno how feasible that is, but that would be my preferred option.
> 
> > 
> > > 
> > > I know drm_sched switched from kthread_worker to workqueue for better
> > > scaling when xe was introduced. But if drm_sched can support either
> > > workqueue or kthread_worker during drm_sched_init, drivers can
> > > selectively use kthread_worker only for RT gpu queues. And because
> > > drivers require CAP_SYS_NICE for RT gpu queues, this should not cause
> > > scaling issues.
> > >   
> > 
> > I don’t think having two paths will ever be acceptable, nor do I think
> > supporting a kthread would be all that easy. For example, in Xe we queue
> > additional work items outside of the scheduler on the queue for ordering
> > reasons — we’d have to move all of that code down into DRM sched or
> > completely redesign our submission model to avoid this. I’m not sure if
> > other drivers also do this, but it is allowed.
> 
> Panthor doesn't rely on the serialization provided by the single-thread
> workqueue; Panfrost might rely on it (I don't remember). I agree that
> maintaining both thread-based and workqueue-based scheduling paths is
> not ideal, though.
> 
> > 
> > > Thoughts? Or perhaps this becomes less of an issue if all drm_sched
> > > users have concrete plans for userspace submissions..  
> > 
> > Maybe some day....
> 
> I've yet to see a solution where no dma_fence-based signaling is
> involved in graphics workloads though (IIRC, Arm's solution still
> needs the kernel for that). Until that happens, we'll still need the
> kernel to signal fences asynchronously when the job is done, which I
> suspect will cause the same kind of latency issue...
> 

I don't think that is the problem here. Does the job that draws the
frame draw it directly, or does the display wait on the draw job's
fence to signal and then do something else? (Sorry, I know next to
nothing about display.) Either way, fences should be signaled from IRQ
handlers, which presumably don't have the same latency issues as
workqueues, but I could be mistaken.
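
And just to illustrate the kthread_worker option discussed above: if
drm_sched_init() grew a way to take a kthread_worker, a driver could do
something like this for an RT queue. The submit_worker field is
hypothetical; today the init args only accept a workqueue (submit_wq):

```c
/*
 * Hypothetical sketch: drm_sched_init_args::submit_worker doesn't
 * exist today; drm_sched only accepts a workqueue.
 */
struct kthread_worker *worker;

worker = kthread_create_worker(0, "drm-sched-rt");
if (IS_ERR(worker))
	return PTR_ERR(worker);

/* RT priority for the submit path; gate this on CAP_SYS_NICE. */
sched_set_fifo(worker->task);

args.submit_worker = worker;	/* hypothetical field */
ret = drm_sched_init(&sched, &args);
```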

Matt

> Regards,
> 
> Boris
