On Wed, Feb 02, 2011 at 03:00:17PM -0800, Ben Widawsky wrote:
> Adds the support for actually doing something with buffer validation for
> the context when submitted via execbuffer2. When a set of context flags
> are submitted in the execbuffer call, the code is able to handle the
> commands. It will also go through the existing buffers associated with
> the context and make sure they are still present. The big thing missing
> is if the client wants to change the domains of buffers which have
> already been associated. In this case, the client must disassociate and
> then re-associate with the proper domains.
Hm, this remark got me thinking (because currently, with read/write domains
per reloc, domain changes on the fly are super-simple): why not drop the
whole associate/disassociate idea and require userspace to always submit a
full list of the bos still used by this context (and their required
offsets)?

Upsides:
- No fiddling when reused bos change domains.
- No need to track stuff in the kernel; we only check the offsets.
- The userspace implementation sounds rather straightforward, too: if the
  aperture check fails, submit the batchbuffer and then store all bos bound
  to the current gl context (assuming a one-to-one hw context <-> gl
  context mapping) and their desired domains (also easy, because the usage
  is known). Then on the next execbuffer, use that stored array for the
  additionally required bos (your flag array).
- Improvements like ppgtt or (re-)pinning bos at the right place can still
  be added incrementally. After all, you already require userspace to be
  able to do a full context restore as part of your abi ...

Downsides:
- Might not be optimal. But given our constant fiddling with batchbuffer
  submission, expecting to get this right on the first try is probably
  wishful thinking.

Just my 2 (constantly changing) cents on this. Anyway, you've thought much
more about this than me, so I expect you to shoot this down in a
half-sentence ... ;)

Yours, Daniel
-- 
Daniel Vetter
Mail: [email protected]
Mobile: +41 (0)79 365 57 48
_______________________________________________
Intel-gfx mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
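To make the userspace side of the proposal concrete, here is a minimal
sketch of the bookkeeping described above: after each execbuffer, snapshot
every bo still bound to the gl context (with its offset and desired
domains) so the next submission can hand the kernel the complete list. All
struct and function names here are invented for illustration; this is not
real libdrm or i915 uapi, just the shape of the idea.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-bo record: what the kernel would need to re-validate
 * a context's buffers without any associate/disassociate tracking. */
struct ctx_bo {
	uint32_t handle;       /* GEM handle */
	uint64_t offset;       /* GTT offset the context expects */
	uint32_t read_domains; /* desired read domains */
	uint32_t write_domain; /* desired write domain (0 if read-only) */
};

/* Hypothetical per-gl-context state (assumes the one-to-one hw
 * context <-> gl context mapping mentioned above). */
struct gl_context_state {
	struct ctx_bo *bos; /* full list of bos bound to this context */
	size_t count;
};

/* Snapshot the bos currently bound to the context. On the next
 * execbuffer this stored array would be passed along as the list of
 * additionally required bos (the "flag array"). Returns 0 on success,
 * -1 on allocation failure. */
static int snapshot_context_bos(struct gl_context_state *ctx,
				const struct ctx_bo *bound, size_t n)
{
	struct ctx_bo *copy = malloc(n * sizeof(*copy));

	if (!copy)
		return -1;
	memcpy(copy, bound, n * sizeof(*copy));
	free(ctx->bos);
	ctx->bos = copy;
	ctx->count = n;
	return 0;
}
```

Because the snapshot is rebuilt from scratch every time, a reused bo whose
domains changed is simply recorded with its new domains; nothing in the
kernel has to be disassociated first, which is the point of the proposal.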
