On Sun, Oct 17, 2010 at 12:58 PM, Aaron Plattner <[email protected]> wrote:
> This change does not solve the problem it claims to solve because it
> makes two assumptions about the driver that are not always true:
>
> 1. that it sends rendering requests to the GPU immediately, which can
>    kill performance on some GPUs, and
No. We batch up rendering commands in a userspace command buffer, and the X server batches up outgoing events in a per-client protocol buffer. We flush the command batch to the kernel just before the server flushes its protocol buffers, using the flush callback chain. This way we make sure that by the time we send out damage events, the rendering has already been submitted, while still (typically) batching up all rendering within one wakeup/block cycle.

The reason for this patch is that when the client protocol buffer overflows, the X server ends up flushing the damage events queued so far *plus* the one it was trying to add to the protocol buffer (that's how it works: it uses writev() to send the protocol buffer *and* the event that overflowed it). With pre-op damage reporting, that means the rendering command corresponding to that last damage event has not yet been added to the batch buffer, so the client receives the damage event before we queue up the rendering. Switching to post-op reporting fixes this race, and from the client's point of view, all we ever did was post-op reporting anyway. Pre-op reporting just can't work when the damage listener is in a different process - the X server would have to send the event immediately and wait for the client to acknowledge it before it could submit the rendering. So in principle there should be no semantic difference from the client's point of view. Unfortunately, there seems to be a problem with the post-op reporting - it's not a code path we've really exercised before.

> 2. that the GPU processes all requests from multiple clients in order,
>    which is false for some GPUs.

Yes, that's an assumption I make. I understand that it's not a fix for hardware with multiple queues, but it allows single-queue hardware to work correctly.

The back story is that we used to have a roundtrip in the glXBindTexImage path, which as a side effect made the intel driver flush its rendering batch buffers.
When I eliminated the roundtrip, we ended up with damage events and rendering commands no longer in sync. The post-op patch, plus an intel driver patch to submit the rendering from the flush callback chain (instead of just the block handler), fixed that regression. So I just fixed a regression; I'm not claiming to solve anything for other drivers.

> All a Damage event tells you is that your next X request will be
> processed after the damage has landed, not that the damage has already
> landed or even that the rendering commands that cause the damage have
> been submitted to the GPU.
>
> In addition to the above, this commit also broke the Compiz
> "Wallpaper" plugin.
>
> This reverts commit 8d7b7a0d71e0b89321b3341b781bc8845386def6.

Sure, we can revert it. I never saw the corner case it fixes trigger; we usually never actually fill up a protocol buffer with damage events. The only case I saw that would flush a protocol buffer was sending out xkb keymaps.

Acked-by: Kristian Høgsberg <[email protected]>

> ---
> Fixing this correctly requires spec changes, which is the purpose of James
> Jones's damage sync stuff. If you're reading this, you should go review
> that now.
:)

> damageext/damageext.c | 1 -
> 1 files changed, 0 insertions(+), 1 deletions(-)
>
> diff --git a/damageext/damageext.c b/damageext/damageext.c
> index b4bb478..f5265dd 100644
> --- a/damageext/damageext.c
> +++ b/damageext/damageext.c
> @@ -217,7 +217,6 @@ ProcDamageCreate (ClientPtr client)
>      if (!AddResource (stuff->damage, DamageExtType, (pointer) pDamageExt))
>          return BadAlloc;
>
> -    DamageSetReportAfterOp (pDamageExt->pDamage, TRUE);
>      DamageRegister (pDamageExt->pDrawable, pDamageExt->pDamage);
>
>      if (pDrawable->type == DRAWABLE_WINDOW)
> --
> 1.7.0.4
>
> _______________________________________________
> [email protected]: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: http://lists.x.org/mailman/listinfo/xorg-devel
