On Wednesday 19 March 2008 16:26:37 Zack Rusin wrote:
> On Wednesday 19 March 2008 05:21:43 am Tom Cooksey wrote:
> > > > 2) Sort of related to the above... it would be very cool to have a very
> > > > simple drawing API to use on top of the modesetting API. A simple blit
> > > > & solid fill would suffice. I've always found it odd that the internal
> > > > kernel API for framebuffer devices includes blit and solid fill, but
> > > > that API is never exposed to user-space applications - even though it
> > > > would be _very_ useful.
> > >
> > > I don't think this will be done. The main reason is that newer hw is hard to
> > > program: there's no 2D engine anymore, so you have to program the whole 3D
> > > pipeline, and we don't want such code in the kernel.
> > >
> > > So the idea here is to use one userspace driver like Gallium3D. Basically
> > > you do a winsys for your use case, and you can also do a new frontend for
> > > gallium other than the GL frontend, or wait for a new frontend to appear :)
> >
> > Hmm... If you have to use gallium to talk to the hardware, shouldn't fbcon
> > be renamed to glcon? :-) Also, while 2D is disappearing on the desktop, it's
> > very much alive on embedded, for the moment at least.
> 
> That depends on your definition of "embedded".
> I think what you're referring to are dummy framebuffers or GPUs that were
> made with some absolutely silly requirements like a "no known bugs" policy,
> which implies that all they have is an underperforming 2D engine. In both of
> those cases you've already lost. So if you're trying to accelerate or design a
> framework based on those, then honestly you can just give up and go with an
> all-software framework. If you're referring to actual embedded GPUs, the
> current generation is already fully programmable, and if you're
> designing with those in mind then what Jerome said holds.

I was initially thinking about low-end graphics hardware, which is mainly just dummy
framebuffers as you say. However, I've thought some more about this and there's still
set-top-box type hardware here, which needs to decode full resolution HD video
(1920×1080 or even 3840×2160). Typically this is off-loaded onto a dedicated DSP.
E.g. TI's DaVinci platform manages to do Full-HD resolution H.264 decoding in a ~2W
power envelope. I believe the video is composited with a normal framebuffer (for the
UI) in hardware. I don't think there's any programmable 3D hardware available which
can do 1920×1080 resolutions in a 2W power envelope. So even if they replace the
linear framebuffer with a programmable 3D core, that core still needs to render at
[EMAIL PROTECTED] fps without impacting the 2W power draw too much. I guess it will
probably be possible in 5 years or so, but it's not possible now.


> > I can't see fbdev
> > going away anytime soon if the only replacement is a full-blown programmable 3D
> > driver architecture. Perhaps a new, simple API could be created. On desktop
> > it would be implemented as a new front-end API for gallium and on embedded
> > it would be implemented using a thin user-space wrapper to the kernel
> > module? 
> 
> I don't think that makes a lot of sense. Gallium3D is an interface to
> hardware - it models the way modern graphics hardware works. Front-ends in
> the Gallium3D sense are the state-trackers that are used by the API that
> you're trying to accelerate. So if your hardware is an actual GPU that you
> can write a Gallium3D driver for, then the front-end for it would be just another
> API (you could be just using GL at this point).

I guess what I was thinking about was a single API which can be used on 3D-less
(or legacy, if you want) hardware and on modern hardware. If the graphics hardware
is a simple pointer to a main-memory buffer which is scanned out to the display,
then you're right, you might as well just use user-space shared memory, as we
currently do. A new API would only be useful for devices with video memory and a
hardware blitter. There are still new devices coming out with this kind of hardware;
the Marvell PXA3x0 and Freescale i.MX27, for example, spring to mind.
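
To make that a bit more concrete, here's roughly the shape of API I'm imagining.
None of this is real - the struct names, fields and entry points are all made up,
just to illustrate the two operations I keep coming back to (solid fill and blit
between buffer objects):

/* Hypothetical user-space view of a minimal "get pixels on screen" API.
 * Nothing below is a real DRM interface - it's only a sketch. */
#include <stdint.h>

struct simple_rect {
        uint16_t x, y, w, h;
};

/* Fill a rectangle of a buffer object with a single ARGB value. */
struct simple_fill {
        uint32_t dst_handle;            /* buffer object to fill   */
        struct simple_rect rect;        /* area to fill, in pixels */
        uint32_t argb;                  /* fill colour             */
};

/* Copy a rectangle from one buffer object to another, e.g. a window
 * back-buffer into the scanout buffer. */
struct simple_blit {
        uint32_t src_handle;            /* source buffer object      */
        uint32_t dst_handle;            /* destination buffer object */
        struct simple_rect src;         /* source rectangle          */
        uint16_t dst_x, dst_y;          /* destination position      */
};

/* On hardware with a blitter these would be queued to the 2D engine;
 * on a dumb framebuffer they could fall back to a memory-to-memory DMA
 * or a plain CPU copy. */
int simple_fill_ioctl(int fd, const struct simple_fill *op);
int simple_blit_ioctl(int fd, const struct simple_blit *op);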

I'm still a bit confused about what's meant to be displayed during the boot process,
before the root fs is mounted. Will the gallium libraries & drivers need to be in the
initramfs? If not, what shows the splash screen & provides single-user access if
anything goes wrong in the boot process?


> > A bit like what DirectFB started life as (before it started trying 
> > to be X).
> 
> Well, that's what you end up with when you start adding things that you need
> across devices. I know that in the beginning when you look at the stack you
> tend to think "this could be a lot smaller!", but then with time you realize
> that you actually need all of those things, but instead of optimizing the
> parts that were there you went with some custom solution and are now stuck
> with it.

I was referring here to DirectFB's window management, input device abstraction,
audio interface abstraction & video streaming APIs. Personally, I believe there is
a requirement for a simple, get-pixels-on-the-screen user-space API which provides
slightly more than fbdev, but a lot less than full GL. If _all_ modern hardware needs
user-space 3D graphics drivers, then I agree, you might as well take advantage of
the full GL API.


> All in all I don't think what you're thinking about doing is going to work.
> You won't be able to accelerate the Qt vector graphics framework with the devices
> that you're thinking about writing all this for, so I don't think it's time
> well spent. Sure, drawLine or drawRectangle will be a lot faster, but the
> kind of UI that can be written with a combination of those is so ugly and
> unattractive that it's a waste of time.

I'm not talking about accelerating things like drawRectangle. I'm talking about
a simple API for blitting BOs. For Qt, we'd use the raster paint engine to draw
top-level windows into buffer objects. The server then blits those buffer objects
to the framebuffer. For OpenVG-only hardware, we'd use OpenVG to render into
those buffer objects and then blit. If there's a 3D core available, we'd use the
BOs as textures and do fancy 3D stuff rather than blitting.
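
Reusing the made-up structs from the sketch above, the server side for blitter-only
hardware would be little more than this (again, purely illustrative):

struct window {
        uint32_t bo_handle;             /* client-rendered back buffer      */
        uint16_t x, y;                  /* position on screen               */
        struct simple_rect damage;      /* area updated since the last blit */
};

/* Blit every window's damaged area into the scanout buffer.  Each client
 * has already drawn its top-level window into its buffer object with a
 * software rasteriser (e.g. Qt's raster paint engine). */
static void composite(int drm_fd, uint32_t scanout_handle,
                      struct window *windows, int nwindows)
{
        int i;

        for (i = 0; i < nwindows; i++) {
                struct simple_blit op = {
                        .src_handle = windows[i].bo_handle,
                        .dst_handle = scanout_handle,
                        .src        = windows[i].damage,
                        .dst_x      = windows[i].x + windows[i].damage.x,
                        .dst_y      = windows[i].y + windows[i].damage.y,
                };

                simple_blit_ioctl(drm_fd, &op); /* blitter, DMA or memcpy */
        }
}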

Even on devices with a simple linear framebuffer in main memory, doing the blit
in the DRM could be done with a memory-to-memory DMA or some other mechanism.
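
Even the pure software fallback for that blit is tiny - something like the
following, assuming 32bpp, linearly-stored buffers that are already mapped:

#include <string.h>

/* Naive CPU fallback: copy the rectangle one scanline at a time.
 * Strides are in pixels; both buffers are assumed 32bpp and linear. */
static void sw_blit(uint32_t *dst, unsigned int dst_stride,
                    const uint32_t *src, unsigned int src_stride,
                    const struct simple_blit *op)
{
        unsigned int line;

        for (line = 0; line < op->src.h; line++) {
                const uint32_t *s = src + (op->src.y + line) * src_stride
                                        + op->src.x;
                uint32_t *d = dst + (op->dst_y + line) * dst_stride
                                  + op->dst_x;

                memcpy(d, s, op->src.w * sizeof(*d));
        }
}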

> Not even mentioning that it's pointless
> for companies that actually care about graphics, because they're going to have
> recent embedded GPUs on their devices. So if you actually care about
> accelerating "graphics" then you need to think about the current generation of
> embedded chips, and those are programmable.

Not all current embedded chips have programmable GPUs. I don't think we will ever
get to the point where even the low-end chips have programmable hardware.
Programmable 3D graphics IP is very expensive to develop and will always draw more
power than simpler hardware.

In 5 years' time, when Mali & SGX IP is out-of-date and sold off on the cheap, I still
think Marvell/Freescale/TI/Samsung/etc. will use in-house developed graphics hardware
in their low-end devices. They already have the IP, so why add $1 per device just to
add a 3D core the customer is never going to use?



Cheers,

Tom
