On Fri, 31 Aug 2012 17:50:01 -0500 Nick Reed <[email protected]> said:

> 
> On Aug 30, 2012, at 5:16 PM, Carsten Haitzler (The Rasterman)
> <[email protected]> wrote:
> 
> > On Thu, 30 Aug 2012 13:40:09 -0500 Nick Reed <[email protected]> said:
> > 
> >> I'm running E17 with X11 on a TI DM8148 (SGX 530) platform with OpenGL
> >> compositing enabled. We're using hardware acceleration available on the SOC
> >> to do H.264 video decode through Gstreamer, and need to render this video
> >> on the display along with a graphical overlay from X.  We can use the
> >> hardware video layer (which takes care of color conversion in hardware)
> >> and a color key to enable transparency between the E17/X11 framebuffer and
> >> the video layer in the background.  The graphics hardware also supports
> >> per pixel alpha blending between layers, which we would prefer over a
> >> simple color key.
> >> 
> >> Is there a way I can convince E17 in the presence of a compositor to let me
> >> write alpha values to the frame buffer? Should I abandon the hardware video
> >> layer and move to Emotion?
> > 
> > no. you can't. because x11 has no concept of an alpha mask for the fb. it
> > doesn't exist. it may be pure LUCK that it happens to work. most of these hw
> > alpha masks are non-premultiplied rgba, but all of x11's rgba space is
> > premultiplied (xrender, and thus all compositors too). you have no way to
> > find out if such an alpha mask exists or what kind of colorspace it is.
> > 
> > then you get the bonus fun - which regions of a window contain a video and
> > which don't? which do you blend onto a window below and which do you treat
> > as conceptually solid but the region's alpha pixels are to be "copied to the
> > fb" (and assuming which rgba alpha colorspace?). again - doesn't exist in
> > x11. all you have in x11 is xv + colorkey.
> > 
> > as such emotion supports using xv "transparently" if no objects are on top
> > of the video, and falling back to "textured video" otherwise (gst pipeline
> > should work here. not sure about xine, generic/vlc). so you can get
> > acceleration for unobscured video, and auto-fallback if it's obscured by
> > objects on top in the canvas (i think it doesn't handle other windows
> > obscuring though so ymmv here).
> > 
> > -- 
> > ------------- Codito, ergo sum - "I code, therefore I am" --------------
> > The Rasterman (Carsten Haitzler)    [email protected]
> > 
> Thanks for the quick reply. I wasn't sure if the compositor handled 32 bit
> frame buffers differently already and I could hint a window to composite
> differently with only a bit of hacking for this and similar platforms. Maybe
> just customize a few shaders in the OpenGL compositing engine. For my
> application, having something work by luck would have been sufficient.
> 
> It sounds like any way that could possibly be implemented would be more
> complicated than, and work only as well as, color keying.  It's a shame that
> it isn't easier to make use of the multiple frame buffers and video layers on
> some of these SOCs.

this is just a limitation of x11. it could be extended, but to date has not
been. :) it could be done as a hack but x doesn't let us know what these extra
bits are in the fb - are they an alpha mask at all? premultiplied? not
premultiplied? how many bits? (often you have such masks but they come only as
4-bit, not 8). also there is nothing to change in the compositor - it has zero
opengl code. all is done inside evas. evas does all the rendering for the
compositor. so evas would need to know to treat the alpha bits specially. it
CAN generate an alpha mask - but not the kind you want. the alpha mask it
generates is premultiplied alpha and is simply the same one you'd have for an
ARGB window. it doesn't have any idea how to "cut holes in the fb" to see
underlying content beneath. this whole "concept" would need to be added in.
that is assuming ecore-evas can figure out what kind of alpha mask it is and
how many bits (from x11). sure - we could make it a configuration thing
instead, though much less clean and then needing config setup per piece of
hardware. and then we'd need to transport the video layer regions from client
app to its window as properties then to compositor etc. etc. and apps must
support this then (or toolkits).

as mentioned, emotion does a simplified version of this via xv, relying on
colorkeys for the video as long as it is unobscured. it falls back to a
textured path when covered, possibly dropping framerate but still working.

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    [email protected]


_______________________________________________
enlightenment-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/enlightenment-users
