Jens Owen wrote:
Ian,

Looks like you've been busy. Here are some comments from the 10,000' level. Keep in mind I haven't looked at your branch yet, so feel free to point me at the code if my comments are way off base.

Ian Romanick wrote:

Most of the changes that were in my previous "Final new code in
texmem-0-0-1 branch" patch are in this patch as well.  Some of the
code, such as the Radeon code-gen updates, has already been committed.

http://www.mail-archive.com/[EMAIL PROTECTED]/msg09490.html

In addition, a few fixes were made to the device-independent
vertical-retrace code.  Michel Dänzer pointed out a couple of problems
with the code, and I've made some work-arounds.  The biggest problem
was that the user-mode API has to use 64-bit counters for
buffer-swaps and retraces, but the kernel ioctls only provide 32-bit
counters.  I put a small hack in vblank.c that should work around most
of the problems here.
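
Roughly, the idea is to detect the 32-bit wrap in user space and
accumulate the high word there.  A minimal sketch, with hypothetical
names (the actual code in vblank.c may differ):

    /* Sketch: widening the kernel's wrapping 32-bit vblank count to
     * the 64-bit value the user-mode API promises.  The names
     * vblank_state and vblank_widen_count are hypothetical. */
    #include <stdint.h>

    struct vblank_state {
        uint32_t last_raw;    /* previous 32-bit count from the ioctl */
        uint64_t high_bits;   /* accumulated wrap-arounds << 32       */
    };

    static uint64_t
    vblank_widen_count(struct vblank_state *s, uint32_t raw)
    {
        /* A raw value smaller than the previous one means the 32-bit
         * counter wrapped.  This assumes we sample at least once per
         * wrap period, hence "most of the problems". */
        if (raw < s->last_raw)
            s->high_bits += UINT64_C(1) << 32;

        s->last_raw = raw;
        return s->high_bits | raw;
    }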

The bulk of the changes were in the GLX support in libGL.so.  As
mentioned with my previous patch, the current reporting mechanism is
wrong.  The extensions returned by
glXGetClientString(dpy, GLX_EXTENSIONS) are the set of extensions
supported by libGL.so.  They have *nothing* to do with the server or
the direct rendering driver.


Keep in mind the spec says "string describing some aspect of the client library". So I believe it's correct not to include server information. You might be able to make a case for including direct rendering driver information, since that's technically part of the "client library", but the string could then list only extensions supported by all heads, since there is no screen parameter to this entry point. Given that limitation, I think an application developer would use glXQueryExtensionsString to query the screen-specific extensions, and rely on glXGetClientString only to figure out which libGL.so they were dealing with.
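
For example, an application following that split might do something like this (a hypothetical snippet, not from any patch; the strstr() token check is simplified and can false-positive on extension-name prefixes):

    #include <stdio.h>
    #include <string.h>
    #include <GL/glx.h>

    static void
    show_glx_extensions(Display *dpy, int screen)
    {
        /* Client-library string: describes libGL.so only; no screen
         * argument exists for this entry point. */
        const char *client = glXGetClientString(dpy, GLX_EXTENSIONS);

        /* Per-screen string: what the app should actually test
         * against before using an extension. */
        const char *usable = glXQueryExtensionsString(dpy, screen);

        printf("libGL.so extensions: %s\n", client);
        printf("usable extensions on screen %d: %s\n", screen, usable);

        if (strstr(usable, "GLX_ARB_get_proc_address"))
            printf("GLX_ARB_get_proc_address is available here\n");
    }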

The extensions returned by
glXQueryExtensionsString are the intersection of the set of extensions
supported by the client (libGL.so) and the server, plus any
client-side extensions (such as GLX_ARB_get_proc_address).  I have
extended this to include any direct-rendering-only extensions that
are supported by the direct rendering driver.  This includes
extensions like GLX_NV_vertex_array_range.


How would you cope with applications that query a capability but force indirect rendering? Wouldn't they be misled into thinking GLX_NV_vertex_array_range was present in the server, when it's probably not available?

GLX_NV_vertex_array_range defines the glXAllocateMemoryNV() function for allocating high-speed (AGP) memory in the client address space. A renderer running in a remote server won't be able to use it. More below.



I believe the DRI is the first project where HW accelerated direct rendering was implemented, but indirect rendering fell back to a software renderer. If we had HW accelerated indirect rendering, I believe these query functions would work the way they were intended... i.e., GLX_NV_vertex_array_range would work on an indirect rendering context and should then be advertised by glXQueryExtensionsString.

The GLX_NV_vertex_array_range extension, unfortunately, has no formal spec. It's only implied by the GL_NV_vertex_array_range extension (http://oss.sgi.com/projects/ogl-sample/registry/NV/vertex_array_range.txt).


It says:
    "OpenGL implementations using GLX indirect rendering should fail
    to set up the vertex array range (failing to set the vertex array
    valid bit so the vertex array range functionality is not usable).
    Additionally, glXAllocateMemoryNV always fails to allocate memory
    (returns NULL) when used with an indirect rendering context."

But a few paragraphs earlier it says:

    "Because wglAllocateMemoryNV and wglFreeMemoryNV are not OpenGL
    rendering commands, these commands do not require a current context.
    They operate normally even if called within a Begin/End or while
    compiling a display list."

If glXAllocateMemoryNV() has no notion of a current rendering context, how can it know to return NULL when using an indirect context? These two paragraphs of the spec are in contradiction.

My belief is that glXAllocateMemoryNV() should try to allocate fast (AGP) memory regardless of any knowledge of indirect/direct rendering. Only if using a direct rendering context will there be a potential speed-up by using that memory. If using indirect rendering, libGL will simply read (vertex) data out of that region as if it were ordinary memory, pack it into GLX protocol commands, and ship it over the wire.
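
Under that interpretation, an application would use the allocator like this (a hypothetical sketch; in practice the entry point would first be resolved via glXGetProcAddressARB):

    /* Try the fast (AGP) path, fall back to ordinary memory.  Either
     * way the buffer is usable; only a direct context gets the
     * speed-up. */
    #include <stdlib.h>
    #include <GL/glx.h>

    #define VA_SIZE (1024 * 1024)

    static void *
    alloc_vertex_buffer(void)
    {
        /* Arguments: size, readFrequency, writeFrequency, priority,
         * as defined by NV_vertex_array_range. */
        void *buf = glXAllocateMemoryNV(VA_SIZE, 0.0f, 0.0f, 1.0f);

        if (buf == NULL) {
            /* Indirect context or no AGP available: plain memory
             * still works; libGL just packs the data into GLX
             * protocol as usual. */
            buf = malloc(VA_SIZE);
        }
        return buf;
    }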



In glxextensions.c I track 5 bits for each extension.  Each bit
represents one of the following:

    1. Is the extension supported by libGL.so?
    2. Is the extension supported by the server?
    3. Is the extension supported by the direct rendering driver?
    4. Is the extension client-side only?
    5. Is the extension for direct rendering only?

By looking at the state of those five bits, the function
__glXGetUsableExtensions can determine which extensions should be
exposed by glXQueryExtensionsString.
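
A sketch of how those bits might combine; the field and function names
here are illustrative, not necessarily what glxextensions.c actually
uses:

    struct extension_info {
        const char *name;
        unsigned client_support : 1;  /* 1. supported by libGL.so       */
        unsigned server_support : 1;  /* 2. supported by the server     */
        unsigned driver_support : 1;  /* 3. supported by the DRI driver */
        unsigned client_only    : 1;  /* 4. client-side only            */
        unsigned direct_only    : 1;  /* 5. direct rendering only       */
    };

    /* An extension is advertised when libGL supports it and at least
     * one of these holds: it is client-side only, the server also
     * supports it, or it is direct-rendering-only and the direct
     * rendering driver supports it. */
    static int
    extension_is_usable(const struct extension_info *e)
    {
        if (!e->client_support)
            return 0;

        return e->client_only
            || e->server_support
            || (e->direct_only && e->driver_support);
    }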


If you can figure out a way to add direct-rendering-only extensions to glXQueryExtensionsString without breaking existing applications (something that's hard to qualify), then that sounds like a nice improvement. However, I wouldn't say the existing reporting mechanism is wrong.

glXQueryExtensionsString() is phrased in terms of the X display connection, not direct vs. indirect rendering. I believe it's the intersection of the client-side GLX extensions and server-side GLX extensions, appended with the client-side-only GLX extensions.


-Brian


