Around 13 o'clock on Oct 5, "Roberto Peon" wrote:

> The application (for the company) is:
> Real-time video into a (main memory) RGBA surface from an SDI-based video card.
> Blit from that surface to the FB in the accelerator.
> Render with dest alpha in the accelerator.
> Blit from the FB surface to the SDI-based video card for SDI output.

One possible solution would be to use just core 2D operations instead of 
trying to figure out how to make DRI/GL do this quickly:

The Render extension will do the compositing operation in the hardware.  
Create a shared memory pixmap holding the RGBA surface and then use 
XRenderComposite to blend that to the screen.  Then just use XCopyArea
to get the screen contents back to another shared memory pixmap on the SDI
output card.  The Render extension implementation is missing pieces needed 
to do image transformations across this step, but that wouldn't be hard to 
flesh out, and you can experiment with untransformed images to see how it 
works.
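
In rough client-side terms, the sequence would look something like the
sketch below.  This is untested and only illustrative: the 720x486 frame
size, compositing straight onto the root window, the single-frame flow,
and the assumption that the server supports shared memory pixmaps and a
depth-32 pixmap format are all placeholders, and error handling is
omitted.

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <X11/extensions/Xrender.h>

    #define WIDTH  720          /* placeholder frame size */
    #define HEIGHT 486

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;
        Window root = DefaultRootWindow(dpy);
        int    scr  = DefaultScreen(dpy);

        /* Shared memory segment the SDI capture side would fill with RGBA. */
        XShmSegmentInfo in_shm;
        in_shm.shmid    = shmget(IPC_PRIVATE, WIDTH * HEIGHT * 4, IPC_CREAT | 0600);
        in_shm.shmaddr  = (char *) shmat(in_shm.shmid, NULL, 0);
        in_shm.readOnly = False;
        XShmAttach(dpy, &in_shm);

        /* Shared memory pixmap holding the incoming RGBA surface (depth 32). */
        Pixmap in_pix = XShmCreatePixmap(dpy, root, in_shm.shmaddr, &in_shm,
                                         WIDTH, HEIGHT, 32);

        /* Wrap the source pixmap and the screen as Render Pictures. */
        XRenderPictFormat *argb32 = XRenderFindStandardFormat(dpy, PictStandardARGB32);
        XRenderPictFormat *winfmt = XRenderFindVisualFormat(dpy, DefaultVisual(dpy, scr));
        Picture src = XRenderCreatePicture(dpy, in_pix, argb32, 0, NULL);
        Picture dst = XRenderCreatePicture(dpy, root,   winfmt, 0, NULL);

        /* Blend the RGBA frame onto the screen; the card does the compositing. */
        XRenderComposite(dpy, PictOpOver, src, None, dst,
                         0, 0, 0, 0, 0, 0, WIDTH, HEIGHT);

        /* Second shared memory pixmap for readback toward the SDI output card. */
        XShmSegmentInfo out_shm;
        out_shm.shmid    = shmget(IPC_PRIVATE, WIDTH * HEIGHT * 4, IPC_CREAT | 0600);
        out_shm.shmaddr  = (char *) shmat(out_shm.shmid, NULL, 0);
        out_shm.readOnly = False;
        XShmAttach(dpy, &out_shm);

        Pixmap out_pix = XShmCreatePixmap(dpy, root, out_shm.shmaddr, &out_shm,
                                          WIDTH, HEIGHT, DefaultDepth(dpy, scr));

        /* Copy the composited screen contents back into shared memory. */
        GC gc = XCreateGC(dpy, out_pix, 0, NULL);
        XCopyArea(dpy, root, out_pix, gc, 0, 0, WIDTH, HEIGHT, 0, 0);
        XSync(dpy, False);

        /* out_shm.shmaddr now holds the frame to hand to the SDI output card. */
        XCloseDisplay(dpy);
        return 0;
    }

In the real application the composite/readback pair would of course run
once per frame, with the SDI drivers filling and draining the two shared
segments.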

The only significant performance problem here is in the transfer of data
from the video card to the SDI output card.  You might need to build a
kernel driver to handle DMA from the MGA card for CopyArea; I believe the
card is capable of that.  That would be generally useful for X
though, and should be relatively straightforward.  I'd give it a try 
without the DMA code to see how well it works; it might be fast enough.

[EMAIL PROTECTED]        XFree86 Core Team              SuSE, Inc.



