[Dri-devel] Streaming video through glTexSubImage2D
Currently, the performance when streaming video through glTexSubImage2D is very low. In my test program and with mplayer, I get approximately 8 fps at 720x576 on my Radeon 7500 with the texmem branch from a couple of weeks ago. glDrawPixels is equally slow. I assume glTexSubImage2D is supposed to be able to handle realtime video, since it supports extensions like EXT_422_pixels (for 4:2:2 Y'CbCr) and EXT_interlace. Using OpenGL for streaming video is useful for creating nonlinear video editing applications (I think Apple's Shake uses OpenGL), because it lets you preview many of the most common effects in realtime. Is there any work in progress to make texture sub-image uploading faster? What changes would need to be made?
Re: [Dri-devel] Streaming video through glTexSubImage2D
Morten Hustveit wrote: > [streaming video performance question snipped] Morten, The R200 driver supports an AGP allocator, but that's for the Radeon 8500 and 9000. You would need to port the allocator (APPLE_client_storage) to the Radeon driver if you wanted to use it on the Radeon 7500. Regards, Jens -- Jens Owen, Steamboat Springs, Colorado
Re: [Dri-devel] Streaming video through glTexSubImage2D
Morten Hustveit wrote: > [streaming video performance question snipped]

There are two typical ways to go about improving texture upload performance in OpenGL applications. One is through the use of OpenGL extensions. There are several extensions available (or available any day now) to help this process. NV_pixel_data_range and APPLE_client_storage are the two most directly applicable. Neither of these two is /generally/ available in DRI. There is a version of NV_vertex_array_range in the R200 (and R100?) driver that can be used with APPLE_client_storage for texture data.

http://oss.sgi.com/projects/ogl-sample/registry/NV/pixel_data_range.txt
http://oss.sgi.com/projects/ogl-sample/registry/APPLE/client_storage.txt

Jeff Hartmann and I are in the process of designing a COMPLETE replacement of the memory management system for DRI. This re-work should allow for a full, proper implementation of APPLE_client_storage. It's going to take a lot of work, though. The way that APPLE_client_storage is implemented in MacOS X is that the application mallocs memory for textures and the system dynamically maps those pages into the AGP aperture. This would be very difficult on x86, but I think Jeff has thought of a different way to get the same effect.

There is another extension from the ARB that should be available, literally, any day now to accelerate the process of uploading vertex data (it's a replacement for NV_vertex_array_range & ATI_vertex_array_object). John Carmack made brief mention of it in his recent plan update. As a follow-on, there will likely be a version for texture data very soon. I plan to have both these extensions implemented in DRI as part of the memory management re-write. My personal opinion is that NV_*_range will universally go away after ARB_vertex_buffer_object gains ground. There are too many pitfalls with them for general use, especially WRT software fallbacks. The slow software path becomes even slower if the application "optimizes" by putting data in AGP or on-card memory. :P

The other way to speed up texture upload performance is to double-buffer the textures inside the driver. The straightforward way to implement texture updates is to wait for any rendering that may be using the texture to finish, then modify the texture data in place. If I'm not mistaken, this is how DRI works. The optimization is to allocate a new texture buffer if the texture has in-flight rendering. This should be doable in the current code, but it would be non-trivial. Basically, you'd have to add a way to track whether a texture has in-flight rendering. In the TexSubImage functions for each driver you'd need to add code to detect this case.
In this case the "old" driTextureObject would need to be added to a list of "dead" texture object (to be released when their rendering is done), and a new driTextureObject would need to be allocated. Periodically objects in the dead list would need to be checked and, if their rendering is complete, freed. That's the 10,000 mile over-view. There's probably some other cases I'm missing. It might also be possible to implement most of this in a device independent way, but I would do it in a single driver first. I think the tough part will be getting the fencing right. If you (or anyone else!!!) would be interested in working on this, we can talk about it more in next Monday's #dri-devel meeting. --- This SF.NET email is sponsored by: SourceForge Enterprise Edition + IBM + LinuxWorld = Something 2 See! http://www.vasoftware.com ___ Dri-devel mailing list [EMAIL PROTECTED] https://lists.sourceforge.net/lists/listinfo/dri-devel
[Dri-devel] texmem-0-0-1 branch
I've been looking at the texmem branch and have found a few problems. I
wanted to explain the problems I found and get some feedback before
committing any changes or putting a patch together.
The first item is the cause of the texture corruption with multiple
contexts that I have seen with both Radeon and Rage 128. In the
DRI_AGE_TEXTURES macro in texmem.h, the age test is reversed:
#define DRI_AGE_TEXTURES( heap )                                \
    do {                                                        \
        if ( ((heap) != NULL)                                   \
             && ((heap)->local_age > (heap)->global_age[0]) )   \
            driAgeTextures( heap );                             \
    } while( 0 )
Here global_age[0] would be _larger_ than local_age after a context's
textures have been kicked out (global_age is incremented at each upload).
Changing the test to != or < fixes the problem. However, there is another
problem here. In the old driver-specific code, the local age was a signed
int, which was initialized to -1, and the global age was/is initialized to
0 (by memset-ing the SAREA). This caused the first context in a new
server generation to initialize the global LRU since the local and global
ages differed. With both now being initially 0, the global LRU is not
initialized before it is updated when the first texture upload happens.
One workaround would be to use local != global in the age test and
initialize the local age to anything > 0 if the global age is 0 when the
context is created.
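A small sketch of the fix being suggested, assuming the workaround described above; the macro body is the one quoted earlier with only the comparison changed, and the initialization snippet is illustrative:

/* Corrected test: age the textures whenever the local age differs from the
 * global age, not only when it is greater. */
#define DRI_AGE_TEXTURES( heap )                                \
    do {                                                        \
        if ( ((heap) != NULL)                                   \
             && ((heap)->local_age != (heap)->global_age[0]) )  \
            driAgeTextures( heap );                             \
    } while( 0 )

/* Suggested workaround at context creation: if this is the first context in
 * a new server generation (global age still 0 from the memset SAREA), start
 * the local age out of sync so the first DRI_AGE_TEXTURES pass initializes
 * the global LRU.  Field names follow the heap struct quoted above. */
/*
 *     if ( heap->global_age[0] == 0 )
 *         heap->local_age = 1;
 */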
Also, I think it would cause less confusion if the drivers' SAREAs used
unsigned ints where the pointers in the heap/region structs in texmem.h
do. It should be safe to assume sizeof(int) == sizeof(unsigned int),
right? Actually, even better would be if all the drivers used the
drmTextureRegion struct (mga does) defined in xf86drm.h, since the code to
manipulate the struct is now being made common to all drivers. The
difference between that struct and the current driver private versions is
that drmTextureRegion uses an explicit padding byte between the first
three bytes (unsigned chars) and the age (unsigned int). Would there be
compatibility problems in moving from a struct without the explicit
padding to one with it? :
typedef struct _drmTextureRegion {
    unsigned char next;
    unsigned char prev;
    unsigned char in_use;
    unsigned char padding;    /* Explicitly pad this out */
    unsigned int  age;
} drmTextureRegion, *drmTextureRegionPtr;

vs.

typedef struct {
    unsigned char next, prev; /* indices to form a circular LRU */
    unsigned char in_use;     /* owned by a client, or free? */
    int           age;        /* tracked by clients to update local LRU's */
} radeon_tex_region_t;
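For what it's worth, on the ABIs the DRI drivers target the compiler already inserts one byte of implicit padding before the 4-byte-aligned age field in the old struct, so the two layouts should come out binary-identical. The following standalone check (my own sketch, not part of the tree) makes that assumption explicit:

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* The two layouts quoted above. */
typedef struct _drmTextureRegion {
    unsigned char next;
    unsigned char prev;
    unsigned char in_use;
    unsigned char padding;   /* explicit pad */
    unsigned int  age;
} drmTextureRegion;

typedef struct {
    unsigned char next, prev;
    unsigned char in_use;
    int           age;       /* compiler pads one byte before this on 4-byte-aligned ABIs */
} radeon_tex_region_t;

int main( void )
{
    assert( sizeof( drmTextureRegion ) == sizeof( radeon_tex_region_t ) );
    assert( offsetof( drmTextureRegion, age ) == offsetof( radeon_tex_region_t, age ) );
    printf( "layouts match: size %lu, age at offset %lu\n",
            (unsigned long) sizeof( drmTextureRegion ),
            (unsigned long) offsetof( drmTextureRegion, age ) );
    return 0;
}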
There are also a few failing assertions related to placeholder texture
objects. In driTexturesGone, we need to set t->heap = heap or else the
assertion that t->heap != NULL fails in driDestroyTextureObject when
destroying a placeholder. The other assertions that I don't think are
valid are in the drivers' DestroyContext, where it's assumed that the
swapped list and texture object lists are empty. Even if Mesa calls the
driver's DeleteTexture for all the real texture objects at context
teardown (does this happen?), there may still be placeholder objects on
the swapped or texture_objects lists. I think it is safer to have
driDestroyTextureHeap iterate both lists and destroy any remaining texture
objects and remove the assertions.
The last problem I found is the drivers' call to driCreateTextureHeap in
CreateContext. Passing pointers to the texList and texAge arrays without
an index results in texList[0] and texAge[0] being passed for both texture
heaps (if the driver supports the AGP heap), so those should be indexed as
texList[i] and texAge[i] where i is the heapId.
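To make the indexing issue concrete, here is a self-contained illustration; create_heap() and some_region_t are placeholders for the real driCreateTextureHeap() call and SAREA region type, and the array sizes are made up:

#define NR_HEAPS   2
#define NR_REGIONS 64

typedef struct { unsigned char next, prev, in_use; int age; } some_region_t;

static some_region_t texList[NR_HEAPS][NR_REGIONS + 1];  /* one LRU list per heap */
static unsigned int  texAge[NR_HEAPS];                   /* one age counter per heap */

/* Placeholder for the real heap-creation call. */
static void create_heap( int heapId, some_region_t *list, unsigned int *age )
{
    (void) heapId; (void) list; (void) age;
}

static void create_all_heaps( void )
{
    int i;
    for ( i = 0; i < NR_HEAPS; i++ ) {
        /* Passing the unindexed arrays, as the drivers do now, is equivalent
         * to create_heap(i, texList[0], &texAge[0]) for every heap. */
        create_heap( i, texList[i], &texAge[i] );  /* indexed by heapId, as suggested */
    }
}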
Also, I think there's a small optimization we could make, but I need to
make sure this is valid. I observed two contexts (texobj and tunnel
demos) repeatedly deleting and creating the same placeholder in
driTexturesGone when switching contexts. I think it would be possible to
keep an existing placeholder if the offset and size of the placeholder in
the texture_objects list matches the global region which has been updated.
For debugging the LRUs, I added the printLocal/GlobalLRU functions from
the old driver code to texmem.c in my tree. I think it's useful to have
these at least as static functions to use for debugging the common texmem
code.
At any rate, I can put a patch together for review, but I wanted to see if
there's anything I'm missing here.
--
Leif Delgass
http://www.retinalburn.net
Re: [Dri-devel] Streaming video through glTexSubImage2D
On Fri, 31 Jan 2003, Arkadi Shishlov wrote: > On Fri, Jan 31, 2003 at 10:26:13AM -0800, Ian Romanick wrote: > > There are two typical ways to go about improving texture upload > > performance in OpenGL applications. One is through the use of OpenGL > > extensions. There are several extensions available (or available any > > You are talking about extensions here, but my P3 600MHz Radeon8500 box > with ATI binary drivers is able to push normal frame rates in MPlayer > with 720x480 movies with the OpenGL output driver at 80% CPU load. > 30% with XVideo. > It uses regular glTexSubImage2D, so it is either R100 or DRI being slow > in this case (if the CPU is powerful enough). > I don't know much about the extensions you mentioned, but how much you'll > save with MPlayer? One memcpy() (assuming it doesn't wait for texture > upload)? IIRC, all the drivers implement glTexSubImage2D the same way as glTexImage2D. They always upload the entire texture image -- there was a comment I remember seeing about the subimage index calculations being wrong. Fixing this to only upload the subimage would help the performance of glTexSubImage2D. -- Leif Delgass http://www.retinalburn.net
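To illustrate what "only upload the subimage" would mean in the upload path, here is a generic sketch; it is not the actual driver code, and the function name, pitch arguments, and texel-size parameter are all illustrative:

#include <string.h>

/* Copy only the dirty rectangle of a glTexSubImage2D call into the driver's
 * copy of the texture, instead of respecifying the whole image. */
static void upload_rect( unsigned char *dst, int dst_pitch,        /* driver/AGP copy */
                         const unsigned char *src, int src_pitch,  /* client data */
                         int xoffset, int yoffset,
                         int width, int height, int bytes_per_texel )
{
    int row;

    dst += yoffset * dst_pitch + xoffset * bytes_per_texel;
    for ( row = 0; row < height; row++ ) {
        memcpy( dst, src, (size_t) width * bytes_per_texel );
        dst += dst_pitch;
        src += src_pitch;
    }
}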
Re: [Dri-devel] Streaming video through glTexSubImage2D
On Fri, Jan 31, 2003 at 04:33:36PM -0500, Leif Delgass wrote: > IIRC, all the drivers implement glTexSubImage2D the > same way as glTexImage2D. They always upload the entire texture image -- > there was a comment I remember seeing about the subimage index calculations > being wrong. Fixing this to only upload the subimage would help the > performance of glTexSubImage2D. I think it doesn't make any difference with MPlayer; it replaces the whole texture for every frame (there is draw_slice() in libvo/vo_gl.c, but I doubt it is used much; a possible source of the low performance with DRI?). It always uploads in RGB format, so probably much of the CPU is spent in yuv2rgb(). How can glTexSubImage2D upload the full texture? The original source is gone; does it keep a copy internally? arkadi.
Re: [Dri-devel] Streaming video through glTexSubImage2D
On Fri, Jan 31, 2003 at 10:26:13AM -0800, Ian Romanick wrote: > There are two typical ways to go about improving texture upload > performance in OpenGL applications. One is through the use of OpenGL > extensions. There are several extensions available (or available any You are talking about extensions here, but my P3 600MHz Radeon8500 box with ATI binary drivers is able to push normal frame rates in MPlayer with 720x480 movies with the OpenGL output driver at 80% CPU load. 30% with XVideo. It uses regular glTexSubImage2D, so it is either R100 or DRI being slow in this case (if the CPU is powerful enough). I don't know much about the extensions you mentioned, but how much would you save with MPlayer? One memcpy() (assuming it doesn't wait for the texture upload)? arkadi.
Re: [Dri-devel] Streaming video through glTexSubImage2D
On Fri, 31 Jan 2003, Arkadi Shishlov wrote: > On Fri, Jan 31, 2003 at 04:33:36PM -0500, Leif Delgass wrote: > > IIRC, all the drivers implement glTexSubImage2D the > > same way as glTexImage2D. They always upload the entire texture image -- > > there was a comment I remember seeing about the subimage index calculations > > being wrong. Fixing this to only upload the subimage would help the > > performance of glTexSubImage2D. > > I think it doesn't make any difference with MPlayer; it replaces the whole > texture for every frame (there is draw_slice() in libvo/vo_gl.c, but I > doubt it is used much; a possible source of the low performance with > DRI?). It always uploads in RGB format, so probably much of the CPU is > spent in yuv2rgb(). You're probably right; in most apps it likely wouldn't have a large impact. The extensions that Ian described are going to have more of an effect. > How can glTexSubImage2D upload the full texture? The original source is > gone; does it keep a copy internally? Yes, the Mesa drivers currently keep a copy of all textures in system memory, but this is one of the things that could change with a new AGP/texture management scheme. -- Leif Delgass http://www.retinalburn.net
Re: [Dri-devel] Streaming video through glTexSubImage2D
Arkadi Shishlov wrote:
> On Fri, Jan 31, 2003 at 10:26:13AM -0800, Ian Romanick wrote:
> > There are two typical ways to go about improving texture upload
> > performance in OpenGL applications. One is through the use of OpenGL
> > extensions. There are several extensions available (or available any
>
> You are talking about extensions here, but my P3 600MHz Radeon8500 box
> with ATI binary drivers is able to push normal frame rates in MPlayer
> with 720x480 movies with the OpenGL output driver at 80% CPU load.
> 30% with XVideo.
> It uses regular glTexSubImage2D, so it is either R100 or DRI being slow
> in this case (if the CPU is powerful enough).
I'm 99% sure that the ATI driver multi-buffers textures. This was the
second technique that I mentioned in my post to improve texture upload
performance. There are probably other ways to pipeline texture uploads,
but the DRI doesn't use any of them. My guess is that if you profiled
it you would see that most of the wall clock time is spent waiting for
the rendering pipe to flush.
I believe that this problem is the reason the guys at Tungsten
implemented NV_vertex_array_range and the simplified version of
APPLE_client_storage. The "real" fix is going to be a LOT of work. The
current solution is a good stop-gap method, though. I would suggest
modifying MPlayer to use the var+client_storage work-around, and then
help us implement the long-term fix. :)
> I don't know much about the extensions you mentioned, but how much you'll
> save with MPlayer? One memcpy() (assuming it doesn't wait for texture
> upload)?
It depends. Using a "real" implementation of APPLE_client_storage, your
main loop would look something like the following, and there would be
little or no waiting and no copies. This loop would actually require
APPLE_fence, but that would be fairly trivial.
The trick is that when you use a texture the driver uses pages from your
memory space as pages for the AGP aperture. I don't know exactly what
they've done, but I know that Apple has gone to some great lengths to
optimize this path.
struct {
    GLuint texture_id;
    GLuint fence_id;
    void * buffer;      /* client memory the driver can map directly into AGP */
} texture_ring[ MAX_TEXTURES ];

foo( ... )
{
    /* Allocate memory, texture IDs, and fence IDs for the ring. */
    i = 0;
    while ( ! done ) {
        /* Wait until the GPU has finished with this slot's previous frame. */
        glFinishFenceAPPLE( texture_ring[i].fence_id );
        decode_video_frame( texture_ring[i].buffer );
        glBindTexture( GL_TEXTURE_2D, texture_ring[i].texture_id );
        /* (Re)specify the texture from texture_ring[i].buffer and render
           with it. */
        /* Mark this point in the command stream so the next pass through
           this slot knows when its rendering is done. */
        glSetFenceAPPLE( texture_ring[i].fence_id );
        i = (i + 1) % MAX_TEXTURES;
    }
    /* Destroy textures, free memory, etc. */
}
Re: [Dri-devel] texmem-0-0-1 branch
Leif Delgass wrote:
> I've been looking at the texmem branch and have found a few problems. I
> wanted to explain the problems I found and get some feedback before
> committing any changes or putting a patch together.
Thanks for looking over the code. Everyone's been so busy lately that I
don't think many people have had much of a chance to delve into it.
> The first item is the cause of the texture corruption with multiple
> contexts that I have seen with both Radeon and Rage 128. In the
> DRI_AGE_TEXTURES macro in texmem.h, the age test is reversed:
>
> #define DRI_AGE_TEXTURES( heap )\
> do {\
> if ( ((heap) != NULL) \
> && ((heap)->local_age > (heap)->global_age[0]) ) \
> driAgeTextures( heap );\
> } while( 0 )
>
> Here global_age[0] would be _larger_ than local_age after a context's
> textures have been kicked out (global_age is incremented at each upload).
> Changing the test to != or < fixes the problem. However, there is another
> problem here. In the old driver-specific code, the local age was a signed
> int, which was initialized to -1, and the global age was/is initialized to
> 0 (by memset-ing the SAREA). This caused the first context in a new
> server generation to initialize the global LRU since the local and global
> ages differed. With both now being initially 0, the global LRU is not
> initialized before it is updated when the first texture upload happens.
> One workaround would be to use local != global in the age test and
> initialize the local age to anything > 0 if the global age is 0 when the
> context is created.
I believe your analysis is essentially correct. Interestingly, the test
in driAgeTextures IS correct. As far as initializing the global LRU
goes, when there are no textures allocated, there is nothing in the LRU.
> Also, I think it would cause less confusion if the drivers' SAREAs used
> unsigned ints where the pointers in the heap/region structs in texmem.h
> do. It should be safe to assume sizeof(int) == sizeof(unsigned int),
> right? Actually, even better would be if all the drivers used the
> drmTextureRegion struct (mga does) defined in xf86drm.h, since the code to
> manipulate the struct is now being made common to all drivers. The
> difference between that struct and the current driver private versions is
> that drmTextureRegion uses an explicit padding byte between the first
> three bytes (unsigned chars) and the age (unsigned int). Would there be
> compatibility problems in moving from a struct without the explicit
> padding to one with it? :
>
> typedef struct _drmTextureRegion {
> unsigned char next;
> unsigned char prev;
> unsigned char in_use;
> unsigned char padding; /* Explicitly pad this out */
> unsigned int age;
> } drmTextureRegion, *drmTextureRegionPtr;
>
> vs.
>
> typedef struct {
> unsigned char next, prev; /* indices to form a circular LRU */
> unsigned char in_use; /* owned by a client, or free? */
> int age; /* tracked by clients to update local LRU's */
> } radeon_tex_region_t;
I cannot think of a case on any of the supported architectures where
sizeof( unsigned int ) != sizeof( int ). In fact, C99 requires corresponding
signed and unsigned integer types to have the same size, so such an
implementation wouldn't be conforming anyway.
As far as replacing the device specific *_tex_region_t structures with
drmTextureRegion, that would probably be a good idea. In fact,
dri_texture_region (in texmem.h) should be replaced as well. I didn't
notice drmTextureRegion when I was doing that part of the work (about 10
months ago).
> There are also a few failing assertions related to placeholder texture
> objects. In driTexturesGone, we need to set t->heap = heap or else the
> assertion that t->heap != NULL fails in driDestroyTextureObject when
> destroying a placeholder. The other assertions that I don't think are
How could a texture get into a heap's list that was not allocated from
that heap? If such a case exists, it is a bug.
> valid are in the drivers' DestroyContext, where it's assumed that the
> swapped list and texture object lists are empty. Even if Mesa calls the
> driver's DeleteTexture for all the real texture objects at context
> teardown (does this happen?), there may still be placeholder objects on
> the swapped or texture_objects lists. I think it is safer to have
> driDestroyTextureHeap iterate both lists and destroy any remaining texture
> objects and remove the assertions.
Mesa is supposed to destroy all the real textures before it gets to
DestroyContext. The textures on the swapped list and on each heap's
texture_objects list are textures that had actual texture data / memory
associated with them. The placeholder textures should NEVER be on any
of these lists.
> The last problem I found is the drivers' call to driCreateTextureHeap in
> CreateContext. Passing pointers to the texList and texAge arrays without
> an index results in texList[0] and texAge[0] being passed for both texture
> heaps (if the driver supports the AGP heap), so those should be indexed as
> texList[i] and texAge[i] where i is the heapId.
I believe you are correct.
Re: [Dri-devel] texmem-0-0-1 branch
Leif Delgass wrote: > There are also a few failing assertions related to placeholder texture > objects. In driTexturesGone, we need to set t->heap = heap or else the > assertion that t->heap != NULL fails in driDestroyTextureObject when > destroying a placeholder. I see the problem you're talking about here. You are correct. I misunderstood where you meant to set t->heap to heap. I now see that you meant to set it after the CALLOC in the 'if ( in_use )' block. Duh. :)
Re: [Dri-devel] Streaming video through glTexSubImage2D
If a texture is locked in DRI while rendering, the straightforward solution is to allocate a new texture for every frame or reuse an old one from the ring. The new texture will be free; one application thread can be dedicated to pushing textures to AGP and then into the card, while other threads decode the video, eliminating the stall. The method you described is just async I/O; the only advantage is that there is no copy from app memory to AGP memory. So multi-buffering is not an advantage over a multithreaded approach (apart from scheduling overhead), but the direct use of DMA'able memory is. Did I understand that correctly? Perhaps you could advise MPlayer HQ to use multiple textures in a ring to speed up mplayer on DRI (in case the original poster's system is not CPU bound)? arkadi.
Re: [Dri-devel] texmem-0-0-1 branch
On Fri, 31 Jan 2003, Ian Romanick wrote: > Leif Delgass wrote: > > > There are also a few failing assertions related to placeholder texture > > objects. In driTexturesGone, we need to set t->heap = heap or else the > > assertion that t->heap != NULL fails in driDestroyTextureObject when > > destroying a placeholder. > > I see the problem you're talking about here. You are correct. I > misunderstood where you meant to set t->heap to heap. I now see that > you meant to set it after the CALLOC in the 'if ( in_use )' block. Duh. :) Right. I should have been more specific there (the actual patch would have been more clear ;) ). Do you see what I mean now about the assertions on tearing down the context? A placeholder _does_ have a memBlock, which matches the corresponding global region marked as in use. It's in the same block you refer to here that it's added to the texture_objects list. Also, a placeholder can be moved to the swapped_objects list in driAllocateTexture (another place an assertion can fail in driSwapOutTextureObject if the placeholder's t->heap == NULL). The driver's DeleteTexture callbacks on teardown won't remove any placeholders from these lists, just the "real" textures that have corresponding gl_texture_object structs (t->tObj). -- Leif Delgass http://www.retinalburn.net
[Dri-devel] Mach64 / Rage128 texture compression update
While I was searching around the net for papers on texture memory management, I came across some references to Talisman and DirectX 6.0 texture compression. It seems that the compression algorithm used is called "TREC," which is short for "Texture and Rendering Compression." http://www.ubicom.tudelft.nl/docs/UbiCom-TechnicalReport_1998_6.PDF http://research.microsoft.com/MSRSIGGRAPH/96/Talisman.htm Apparently, it is some sort of DCT-based compression scheme. That would explain why ATI is the only company to ever implement it in hardware. :)
Re: [Dri-devel] Mach64 / Rage128 texture compression update
According to the docs, mach64 implements 4:1 VQ decompression, but there's no other info. Rage 128 also has a VQ texture format bit, according to the register header file. The Sega Dreamcast used a form of VQ compression too, I think. On Fri, 31 Jan 2003, Ian Romanick wrote: > While I was searching around the net for papers on texture memory > management, I came across some references to Talisman and DirectX 6.0 > texture compression. It seems that the compression algorithm used is > called "TREC," which is short for "Texture and Rendering Compression." > > http://www.ubicom.tudelft.nl/docs/UbiCom-TechnicalReport_1998_6.PDF > http://research.microsoft.com/MSRSIGGRAPH/96/Talisman.htm > > Apparently, it is some sort of DCT based compression scheme. That would > explain why ATI is the only company to ever implement it in hardware. :) -- Leif Delgass http://www.retinalburn.net
Re: [Dri-devel] texmem-0-0-1 branch
Leif Delgass wrote: > On Fri, 31 Jan 2003, Ian Romanick wrote: > > Leif Delgass wrote: > > > There are also a few failing assertions related to placeholder texture > > > objects. In driTexturesGone, we need to set t->heap = heap or else the > > > assertion that t->heap != NULL fails in driDestroyTextureObject when > > > destroying a placeholder. > > I see the problem you're talking about here. You are correct. I > > misunderstood where you meant to set t->heap to heap. I now see that > > you meant to set it after the CALLOC in the 'if ( in_use )' block. Duh. :) > Right. I should have been more specific there (the actual patch would have > been more clear ;) ). Do you see what I mean now about the

Yeah. You know what they say...a patch is worth 2^10 words. :)

> assertions on tearing down the context? A placeholder _does_ have a > memBlock, which matches the corresponding global region marked as in use. > It's in the same block you refer to here that it's added to the > texture_objects list. Also, a placeholder can be moved to the > swapped_objects list in driAllocateTexture (another place an assertion can > fail in driSwapOutTextureObject if the placeholder's t->heap == NULL). The > driver's DeleteTexture callbacks on teardown won't remove any placeholders > from these lists, just the "real" textures that have corresponding > gl_texture_object structs (t->tObj).

Part of the confusion was over "placeholder." There is a dummy object bound to each texture target (see driInitTextureObjects in texmem.c, line ~908). When you said placeholder, this is the object that my brain thought of. You are correct. The objects that are actually called placeholders can be on the texture_objects list, but I don't think they should ever be on the swapped_objects list. It seems illogical that one process would track the textures in another process that were kicked out. That's just silly. That said, it does seem that this will happen. It seems like this could be a memory leak. The placeholder textures on the swapped_objects list will just sit there until the context is destroyed.

I think we need to add a flag to dri_texture_object that says "this is a placeholder." If an object is a placeholder, it does not get put on the swapped_objects lists. Instead, it gets destroyed. That flag could also be used to help implement the optimization you were talking about in an earlier message.

After looking at all the code, I couldn't understand why none of those assertions were ever triggered. Here's the coup de grace: radeonDestroyContext never gets called. I tried a bunch of different things, but if I set a breakpoint after hitting the _mesa_test_os_sse_exception_support signal, my breakpoint never gets hit. I'm going to have to investigate this some more...
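A rough sketch of the flag idea above; the struct and helper names here are illustrative stand-ins, not the actual texmem.h definitions:

/* Illustrative stand-in for dri_texture_object with the proposed flag; the
 * real structure lives in texmem.h and has many more fields. */
typedef struct sketch_texture_object {
    struct sketch_texture_object *next, *prev;
    int is_placeholder;   /* proposed flag: set when the object is created in
                           * driTexturesGone to shadow another context's
                           * texture memory */
} sketch_texture_object;

extern void destroy_texture_object( sketch_texture_object *t );   /* hypothetical */
extern void move_to_swapped_list( sketch_texture_object *t );     /* hypothetical */

/* Where a texture would normally be swapped out: placeholders only shadow
 * memory owned by other contexts, so keeping them on swapped_objects would
 * just leak; destroy them instead. */
static void swap_out_or_destroy( sketch_texture_object *t )
{
    if ( t->is_placeholder )
        destroy_texture_object( t );
    else
        move_to_swapped_list( t );
}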