On Fri, 21 Dec 2007 16:59:11 -0500
"Kristian Høgsberg" <[EMAIL PROTECTED]> wrote:

> Hi,
> 
> I'm about to take off for the holidays, but I wanted to give a heads up
> on how DRI2 is progressing and the direction I'm taking.  The dri2
> branches in my personal repos for drm, mesa, xserver and
> xf86-video-intel hold the work.  As it is, I can run glxgears under
> DRI2, though resizing the window breaks it right now.  And only AIGLX
> works, since I haven't done the DRI2 protocol yet. Oh well.
> 
> What I'm doing here lets us have both the old XF86DRI and the DRI2
> code paths in the xserver, the DDX drivers and mesa simultaneously,
> and the choice between the two is made in the DDX driver at startup,
> or you can compile in only one or the other.  Right now I don't expect
> compiling just one of XF86DRI or DRI2 to work, but that's just because
> I haven't tried it yet.
> 
> Ok, with all the caveats out of the way, let me just give a quick
> overview of the design that I've chosen.  What I've come to realize is
> that there are many ways you can do this: clip rects in the kernel or
> not, swapping buffers in the X server, kernel or DRI client,
> allocating back buffers in the X server or DRI client, etc.  But at
> the end of the day, I don't see a clear winning combination: all these
> choices have advantages and drawbacks, and everything else being
> equal, I've tried to design for these goals:
> 
>  - keep the kernel part simple, i.e. no cliprect handling or buffer
>    swapping there
>  - provide a clean break to get rid of the XF86DRI legacy while retaining ABI
>  - keep the lock for now, but provide a path towards lockless operation
> 
> So what we have now is a new small DRI2 X module/extension that hooks
> pScreen->ClipNotify and pScreen->HandleExposures to track window
> movement, redirection and cliprect changes.  The DDX driver opens the
> drm fd on its own, checks the version and does whatever other chipset
> specific setup it needs, and then passes the fd to DRI2ScreenInit.
> DRI2 uses a buffer object based sarea, composed of a collection of
> sarea blocks, each of which has a common header identifying the block
> and specifying its size.
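> 
> To make the layout concrete, here's a rough sketch of what such a
> block header and the walk over the blocks could look like.  The names
> are made up for illustration; the actual structs in the dri2 branches
> may differ:
> 
>     #include <stdint.h>
> 
>     /* Hypothetical block header -- each block in the sarea starts
>      * with one of these. */
>     struct dri2_sarea_block {
>         uint32_t type;   /* identifies the block (lock, event ring, ...) */
>         uint32_t size;   /* total block size in bytes, header included */
>         /* type-specific payload follows */
>     };
> 
>     /* Walking the blocks in the mapped sarea buffer object: */
>     static void dri2_for_each_block(uint8_t *sarea, uint32_t sarea_size)
>     {
>         uint8_t *p = sarea;
> 
>         while (p < sarea + sarea_size) {
>             struct dri2_sarea_block *block =
>                 (struct dri2_sarea_block *) p;
>             /* dispatch on block->type here */
>             p += block->size;
>         }
>     }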
> 
> The DDX driver can put its own blocks there if necessary, and that is
> how locking works.  Core DRI2 doesn't know about the DRM lock, so the
> DDX driver has to put a DRILock block in the sarea if it needs one and
> implement locking itself.  For the intel driver this means that we
> can now push the lock into the EXA callbacks, and AIGLX with DRI2 no
> longer needs the crazy lock-juggling.  Longer term, we want to get rid
> of the lock entirely, but that's something I'd like to do in a later
> step.  We have the big parts of the lock-less puzzle in place with the
> DRM memory manager and the super ioctl, but there are still a lot of
> places in the DDX driver that hold the lock for various reasons (mode
> setting, ring buffer access, vt switching).  The devil certainly seems
> to be in the details, but by pushing the lock into the DDX driver,
> it's now a DDX decision and we can drop the lock in the near future
> without leaving ugly API scars.
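> 
> As a rough sketch of what the DDX side could look like (again,
> hypothetical names; the actual DRILock block layout is up to the
> driver, not core DRI2):
> 
>     #include <drm.h>
> 
>     /* Hypothetical DRILock block the DDX drops into the sarea.
>      * Core DRI2 never looks inside it; only the DDX and the DRI
>      * clients that know about it do. */
>     struct dri2_lock_block {
>         struct dri2_sarea_block header;  /* type marks this as the lock */
>         drm_hw_lock_t lock;              /* the old DRM hardware lock */
>     };
> 
>     /* The intel DDX would then bracket its EXA callbacks with the
>      * usual libdrm calls, e.g. drmGetLock(fd, ctx, 0) on the way in
>      * and drmUnlock(fd, ctx) on the way out. */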
> 
> The DRI2 sarea also changes the drawable timestamp mechanism.  Instead
> of the fixed size drawable table, I'm using the ring buffer design I
> outlined at XDS in Cambridge.  The ring buffer is another block in the
> sarea and is written to only by the X server.  The server posts events
> here to describe changes in cliprects, window position and attached
> buffers.  Each DRI2 client maintains its own tail pointer and reads
> out events as it needs them.  One tricky detail about this is that
> when windows are moved, the DRI2 X server module must update the ring
> buffer head pointer atomically with posting the batch buffer that
> copies the window contents.  This complication is inherent in the way
> the X server moves windows, and would not go away if we moved
> cliprects into the kernel.
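> 
> A minimal sketch of how that ring could be laid out and drained,
> reusing the block header from the sketch above and assuming
> fixed-size events and a made-up ring size (the real branch may differ
> on both counts):
> 
>     #include <stdint.h>
> 
>     #define DRI2_RING_SIZE 64
> 
>     struct dri2_event {
>         uint32_t type;     /* cliprects changed, window moved, ... */
>         uint32_t data[3];  /* event-specific payload */
>     };
> 
>     /* Single writer (the X server); each DRI2 client keeps its own
>      * private tail. */
>     struct dri2_event_ring {
>         struct dri2_sarea_block header;
>         uint32_t head;     /* written only by the X server */
>         struct dri2_event events[DRI2_RING_SIZE];
>     };
> 
>     /* Client side: drain whatever the server has posted since we
>      * last looked. */
>     static void dri2_drain_events(struct dri2_event_ring *ring,
>                                   uint32_t *tail)
>     {
>         while (*tail != ring->head) {
>             struct dri2_event *ev = &ring->events[*tail % DRI2_RING_SIZE];
>             /* update cached cliprects / window position / buffers
>              * from ev here */
>             *tail = (*tail + 1) % DRI2_RING_SIZE;
>         }
>     }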
> 
> Finally, on the mesa side, the DRI drivers get a new entry point that
> the loader must call if it wants to use the DRI driver in DRI2 mode.
> In this mode the DRI driver avoids all the static buffers, looks for
> buffer information in the event ring buffer, and allocates private
> back buffers as needed.
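> 
> In loader terms, something along these lines (the symbol name here is
> made up; check the mesa dri2 branch for the real entry point):
> 
>     #include <dlfcn.h>
> 
>     /* Hypothetical loader-side sketch. */
>     static void *dri2_lookup_entry(void *driver_handle)
>     {
>         /* If the driver exports the DRI2 entry point, the loader can
>          * run it in DRI2 mode: no static front/back buffers, buffer
>          * info comes from the sarea event ring, and private back
>          * buffers are allocated on demand.  Otherwise fall back to
>          * the old XF86DRI path. */
>         return dlsym(driver_handle, "__dri2CreateNewScreen");
>     }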
> 
> I don't expect the patches to work for anybody at this stage (except,
> maybe, if you have a G33); I'm in the process of cleaning up the code
> and working out the corner cases now.  The xf86-video-intel patch
> applies on top of the intel-batchbuffer branch, and the mesa patch
> currently conflicts with mesa master.  Oh, and ignore the I830* calls
> in the DRI2 module :)  But even so, the branches show the overall
> direction, and I'm interested in feedback on the approach.
> 
> cheers,
> Kristian

I only had a quick look at the code, so I might have overlooked things
and missed a few points; sorry if so.  My concern is how the front
buffer (i.e. the framebuffer into which the mesa driver has to blit
the window, as my guess is that right now it's the mesa driver that
does that) is signaled to mesa.

So, is there one event ring buffer per client, or per GLX context (I
admit I don't see clearly whether there is a difference between the
two :)), or is there a single event ring buffer shared by all
clients/contexts?

In the first case, I guess the first thing the server posts in the
event ring is the framebuffer information (like how to get the
framebuffer bo handle, the shared zbuffer, ...).

In the second case, is there a mechanism for the driver to ask the
server to repost the framebuffer information (framebuffer bo handle,
shared zbuffer, ...)?

In both cases I think the mesa part should also be ready to update its
view of where things should be blitted (i.e. be able to pick up a new
size and bo handle for the framebuffer it blits into).

A side question to this: what about fullscreen GL apps, should their
front buffer become the framebuffer itself?  My understanding is that
each client gets its own private front & back buffers (if double
buffered), which are then blitted to the framebuffer.  I guess for
fullscreen apps we might want to be able to give them the full GPU
juice and avoid the blit between the front buffer and the framebuffer.

Another question: what happens if the event ring buffer fills up, i.e.
the client/context is slow at reading it, or is buggy and stops
reading it altogether?  Should such a context get killed for being
unresponsive, or maybe this case already falls under some existing X
rules?

I hope I'm not asking about trivial, already-solved issues :)

Cheers,
Jerome Glisse <[EMAIL PROTECTED]>
