> Benefits:
> 
>  1) simplified kernel API
>  2) reduced user->kernel data transfer (no relocations)
>  3) eliminate user->kernel relocation reformatting
>  4) eliminate buffer objects for relocations
>  5) eliminate buffer object mapping to kernel
> 
> Costs:
> 
>  1) more user mode code 
>  2) might never make progress
> 
> 1) isn't a significant issue -- it should be a whole lot easier to walk
> the relocation list in user space than the current relocation code is in
> kernel mode.
> 
> 2) seems like the big sticking point -- it's easy to imagine a case
> where two clients hammer the hardware and there isn't space for both of
> them. One fairly simple kludge-around would be to 'pin' the buffers in
> place for 'a while' and wait for the client to re-submit the request.
> That would block the conflicting application briefly.
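Walking the relocation list in user space, as point 1) suggests, might look something like the minimal sketch below. All of the structure and field names here are illustrative assumptions, not the actual DRM interfaces: a relocation records where in the command buffer a buffer address goes, which buffer it refers to, and an offset within it.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical relocation entry -- illustrative only, not a real
 * drm_* structure. */
struct reloc_entry {
    uint32_t cmd_offset;   /* dword offset into the command buffer */
    uint32_t target_buf;   /* index of the buffer being referenced */
    uint32_t delta;        /* byte offset within the target buffer */
};

/* Walk the relocation list in user space, patching the command
 * buffer with the (presumed) GPU offset of each target buffer.
 * Returns 0 on success, -1 if a relocation is out of range. */
static int apply_relocs(uint32_t *cmds, size_t cmd_len,
                        const struct reloc_entry *relocs, size_t nrelocs,
                        const uint32_t *buf_gpu_offsets, size_t nbufs)
{
    for (size_t i = 0; i < nrelocs; i++) {
        if (relocs[i].cmd_offset >= cmd_len ||
            relocs[i].target_buf >= nbufs)
            return -1;
        cmds[relocs[i].cmd_offset] =
            buf_gpu_offsets[relocs[i].target_buf] + relocs[i].delta;
    }
    return 0;
}
```

With this split, the kernel never sees the relocation list at all; it only receives the already-patched command buffer, which is where the reduced user->kernel transfer in the quoted benefits list comes from.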

From my POV this would make doing a proper GPU scheduler a bit harder, as 
we wouldn't have all the info in-kernel to schedule operations; we would 
be doing a lot of app blocking.

The bigger problem as I see it is live-lock: two apps hammering the 
hardware would stop each other from ever going anywhere, which means we 
would need a lot of smarts (possibly said scheduler) to decide which app 
to set going and which to block. I can just imagine races if you block 
the X server while moving windows around: you can't always let the server 
win, otherwise 3D apps would never draw, but 3D apps need to talk to the 
X server, etc. It just seems to be a larger problem than handwaving 
"a while" covers.

Dave.

--
_______________________________________________
Dri-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/dri-devel
