On Thu, Mar 5, 2026 at 12:52 PM Matthew Brost <[email protected]> wrote:
>
> On Thu, Mar 05, 2026 at 11:52:01AM +0100, Boris Brezillon wrote:
> > On Thu, 5 Mar 2026 02:09:16 -0800
> > Matthew Brost <[email protected]> wrote:
> >
> > > On Thu, Mar 05, 2026 at 09:27:11AM +0100, Boris Brezillon wrote:
> > >
> > > I addressed most of your comments in a chained reply to Phillip, but I
> > > guess he had dropped some of your email and I thus missed those.
> > > Responding below.
> > >
> > > > Hi Matthew,
> > > >
> > > > On Wed, 4 Mar 2026 18:04:25 -0800
> > > > Matthew Brost <[email protected]> wrote:
> > > >
> > > > > On Wed, Mar 04, 2026 at 02:51:39PM -0800, Chia-I Wu wrote:
> > > > > > Hi,
> > > > > >
> > > > > > Our system compositor (surfaceflinger on android) submits gpu jobs
> > > > > > from a SCHED_FIFO thread to an RT gpu queue. However, because
> > > > > > workqueue threads are SCHED_NORMAL, the scheduling latency from
> > > > > > submit to run_job can sometimes cause frame misses. We are seeing
> > > > > > this on panthor and xe, but the issue should be common to all
> > > > > > drm_sched users.
> > > > > >
> > > > >
> > > > > I'm going to assume that since this is a compositor, you do not pass
> > > > > input dependencies to the page-flip job. Is that correct?
> > > > >
> > > > > If so, I believe we could fairly easily build an opt-in DRM sched path
> > > > > that directly calls run_job in the exec IOCTL context (I assume this
> > > > > is SCHED_FIFO) if the job has no dependencies.
> > > >
> > > > I guess by ::run_job() you mean something slightly more involved that
> > > > checks whether:
> > > >
> > > > - other jobs are pending
> > > > - enough credits (AKA ringbuf space) are available
> > > > - and probably other stuff I forgot about
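
To make sure I understand the suggestion, here is roughly the shape I'd
picture for such an opt-in fast path. This is only a sketch:
drm_sched_try_direct_submit() and the *_placeholder() helpers are made up,
and a real version would also have to deal with locking and with the fence
returned by run_job().

static bool drm_sched_try_direct_submit(struct drm_sched_entity *entity,
					struct drm_sched_job *job)
{
	struct drm_gpu_scheduler *sched = entity->rq->sched;
	struct dma_fence *hw_fence;

	/* Other jobs pending on this entity? Fall back to the workqueue. */
	if (!drm_sched_entity_is_idle_placeholder(entity))
		return false;

	/* Unresolved input dependencies? Fall back to the workqueue. */
	if (drm_sched_job_has_deps_placeholder(job))
		return false;

	/* Not enough credits (ringbuf space)? Fall back to the workqueue. */
	if (!drm_sched_credits_available_placeholder(sched, job))
		return false;

	/* Program the ring directly from the submitting (RT) thread. */
	hw_fence = sched->ops->run_job(job);

	/*
	 * A real implementation would also install the completion callback
	 * on hw_fence and keep the scheduler's bookkeeping consistent here.
	 */
	return !IS_ERR_OR_NULL(hw_fence);
}
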
> > > >
> > > > >
> > > > > This would likely break some of Xe’s submission-backend assumptions
> > > > > around mutual exclusion and ordering based on the workqueue, but that
> > > > > seems workable. I don’t know how the Panthor code is structured or
> > > > > whether they have similar issues.
> > > >
> > > > Honestly, I'm not thrilled by this fast-path/call-run_job-directly idea
> > > > you're describing. There are just so many things we could forget that
> > > > would lead to races/ordering issues that end up being hard to trigger and
> > > > debug. Besides, it doesn't solve the problem where your gfx pipeline is
> > > > fully stuffed and the kernel has to dequeue things asynchronously. I do
> > > > believe we want RT-prio support in that case too.
> > > >
> > >
> > > My understanding of SurfaceFlinger is that it never waits on input
> > > dependencies from rendering applications, since those may not signal in
> > > time for a page flip. Because of that, you can’t have the job(s) that
> > > draw to the screen accept input dependencies. Maybe I have that
> > > wrong—but I've spoken to the Google team several times about issues with
> > > SurfaceFlinger, and that was my takeaway.
> > >
> > > So I don't think the kernel should ever have to dequeue things
> > > asynchronously, at least for SurfaceFlinger.
> >
> > There's still the contention coming from the ring buffer size, which can
> > prevent jobs from being queued directly to the HW. Though, admittedly,
> > if the HW is not capable of compositing the frame faster than the
> > refresh rate and guaranteeing an almost always empty ringbuffer, fixing
> > the scheduling prio is probably pointless.
> >
> > > If there is another RT use
> > > case that requires input dependencies plus the kernel dequeuing things
> > > asynchronously, I agree this wouldn’t help. But my suggestion isn’t
> > > mutually exclusive with other RT rework either.
> >
> > Yeah, dunno. It just feels like another hack on top of the already quite
> > convoluted design that drm_sched has become.
> >
>
> I agree we wouldn't want this to become some wild hack.
>
> I could actually see this helping in other very timing-sensitive
> paths—for example, page-fault paths where a copy job needs to be issued
> as part of the fault resolution to a dedicated kernel queue. I’ve seen
> noise in fault profiling caused by delays in the scheduler workqueue,
> which needs to program the job to the device. In paths like this, every
> microsecond matters, as even minor improvements have real-world impacts
> on performance numbers. This will become even more noticeable as
> CPU<->GPU bus speeds increase. In this case, copy jobs typically have
> no input dependencies, so the goal is to program the ring as quickly
> as possible.
>
> > >
> > > > >
> > > > > I can try to hack together a quick PoC to see what this would look
> > > > > like and give you something to test.
> > > > >
> > > > > > Using a WQ_HIGHPRI workqueue helps, but it is still not RT (and
> > > > > > won't meet future android requirements). It seems either workqueue
> > > > > > needs to gain RT support, or drm_sched needs to support
> > > > > > kthread_worker.
> > > > >
> > > > > +Tejun to see if RT workqueue is in the plans.
> > > >
> > > > Dunno how feasible that is, but that would be my preferred option.
> > > >
> > > > >
> > > > > >
> > > > > > I know drm_sched switched from kthread_worker to workqueue for
> > > > > > better scaling when xe was introduced. But if drm_sched can support
> > > > > > either workqueue or kthread_worker during drm_sched_init, drivers
> > > > > > can selectively use kthread_worker only for RT gpu queues. And
> > > > > > because drivers require CAP_SYS_NICE for RT gpu queues, this should
> > > > > > not cause scaling issues.
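
FWIW the kthread_worker side of this is small: kthread_create_worker()
and sched_set_fifo() already exist, and the missing piece is only a way
to hand the worker to drm_sched_init() in place of a workqueue. A rough
sketch (create_rt_submit_worker() is a made-up driver helper):

#include <linux/kthread.h>
#include <linux/sched.h>

static struct kthread_worker *create_rt_submit_worker(void)
{
	struct kthread_worker *worker;

	worker = kthread_create_worker(0, "drm-sched-rt");
	if (IS_ERR(worker))
		return worker;

	/* SCHED_FIFO at the kernel's default RT priority for kthreads. */
	sched_set_fifo(worker->task);

	return worker;
}
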
> > > > > >
> > > > >
> > > > > I don’t think having two paths will ever be acceptable, nor do I think
> > > > > supporting a kthread would be all that easy. For example, in Xe we
> > > > > queue additional work items outside of the scheduler on the queue for
> > > > > ordering reasons; we’d have to move all of that code down into DRM
> > > > > sched or completely redesign our submission model to avoid this. I’m
> > > > > not sure if other drivers also do this, but it is allowed.
> > > >
> > > > Panthor doesn't rely on the serialization provided by the single-thread
> > > > workqueue; Panfrost might rely on it (I don't remember). I agree that
> > > > maintaining both thread-based and workqueue-based scheduling is not
> > > > ideal though.
> > > >
> > > > >
> > > > > > Thoughts? Or perhaps this becomes less of an issue if all drm_sched
> > > > > > users have concrete plans for userspace submissions..
> > > > >
> > > > > Maybe some day....
> > > >
> > > > I've yet to see a solution where no dma_fence-based signaling is
> > > > involved in graphics workloads though (IIRC, Arm's solution still
> > > > needs the kernel for that). Until that happens, we'll still need the
> > > > kernel to signal fences asynchronously when the job is done, which I
> > > > suspect will cause the same kind of latency issue...
> > > >
> > >
> > > I don't think that is the problem here. Doesn’t the job that draws the
> > > frame actually draw it, or does the display wait on the draw job’s fence
> > > to signal and then do something else?
> >
> > I know close to nothing about SurfaceFlinger and very little about
> > compositors in general, so I'll let Chia answer that one. What's sure
>
> I think Chia's input would be good here; if SurfaceFlinger jobs have input
> dependencies, this entire suggestion doesn't make any sense.
>
> > is that, on regular page-flips (don't remember what async page-flips
> > do), the display drivers wait on the fences attached to the buffer to
> > signal before doing the flip.
>
> I think SurfaceFlinger is different from Wayland/X11 use cases,
> as maintaining a steady framerate is the priority above everything else
> (think phone screens, which never freeze, whereas desktops do all the
> time). So I believe SurfaceFlinger decides when it will submit the job
> to draw a frame, without directly passing application dependencies
> into the buffer/job being drawn. Again, my understanding here may be
> incorrect...
That is correct. SurfaceFlinger only ever latches buffers whose
associated fences have signaled, and sends those buffers down to the gpu
for composition or to the display for direct scanout. That might also be
how modern wayland compositors work nowadays? It sounds bad to let a
low-fps app slow down system composition.

In theory, the gpu driver should not see input dependencies ever. I
will need to check if there are corner cases.


>
> >
> > > (Sorry—I know next to nothing
> > > about display.) Either way, fences should be signaled in IRQ handlers,
> >
> > In Panthor they are not, but that's probably something for us to
> > address.
Yeah, I am also looking into signaling fences from the (threaded) irq handler.
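
Roughly what I have in mind is sketched below. my_device/my_job and
my_read_done_seqno() are made-up driver state for illustration; the
point is just that dma_fence_signal() is safe from IRQ context.

#include <linux/dma-fence.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical driver state, for illustration only. */
struct my_job {
	struct list_head link;
	u64 seqno;
	struct dma_fence *done_fence;
};

struct my_device {
	spinlock_t in_flight_lock;
	struct list_head in_flight;	/* jobs in submission order */
};

/* Hypothetical: read the last completed seqno from the HW. */
static u64 my_read_done_seqno(struct my_device *mdev);

static irqreturn_t my_job_irq_thread(int irq, void *data)
{
	struct my_device *mdev = data;
	struct my_job *job, *tmp;
	u64 done = my_read_done_seqno(mdev);

	/* Signal completion fences for every job at or below 'done'. */
	spin_lock(&mdev->in_flight_lock);
	list_for_each_entry_safe(job, tmp, &mdev->in_flight, link) {
		if (job->seqno > done)
			break;
		list_del(&job->link);
		dma_fence_signal(job->done_fence);
	}
	spin_unlock(&mdev->in_flight_lock);

	return IRQ_HANDLED;
}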

> >
> > > which presumably don’t have the same latency issues as workqueues, but I
> > > could be mistaken.
> >
> > Might have to do with the mental model I had of this "reconcile
> > usermode queues with dma_fence signaling" idea, where I was imagining
> > a SW job queue (based on drm_sched too) that would wait on HW fences to
> > signal and would, as a result, signal the dma_fence attached to the
> > job. So the queueing/dequeuing of these jobs would still happen through
> > drm_sched, with the same scheduling prio issue. This being said, those
>
> Yes, if jobs have unmet dependencies, the bypass path doesn’t help with
> the DRM scheduler workqueue context switches being slow, as that path
> still needs to be taken in those cases.
>
> Also, to bring up something insane we certainly wouldn’t want to do:
> calling run_job from the fence callback when dependencies resolve,
> since we could be in an IRQ handler.
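
Right, and the safe shape for that is presumably what the scheduler
already does internally: the dependency's fence callback only queues
work, and the hardware is programmed from process context. Something
like this (all names made up):

#include <linux/dma-fence.h>
#include <linux/workqueue.h>

/* Hypothetical per-job state, for illustration only. */
struct my_pending_job {
	struct dma_fence_cb dep_cb;
	struct work_struct submit_work;	/* calls run_job() in process ctx */
	struct workqueue_struct *submit_wq;
};

/*
 * Dependency-signaled callback: may run in hard IRQ context, so it
 * must never program the hardware itself.
 */
static void my_dep_signaled(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	struct my_pending_job *job =
		container_of(cb, struct my_pending_job, dep_cb);

	queue_work(job->submit_wq, &job->submit_work);
}
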
>
> Matt
>
> > jobs would likely be dependency-less, so more likely to hit your
> > fast-path run_job.
