Aaron Plattner <[email protected]> writes:

> As for ops, the nvidia driver plugs in a different set depending on the
> results of ValidateGC.
Yeah, most drivers did that a long time ago, but I can't believe you'd
even be able to measure this these days.

> I'm sure this was a big win when it was written. I can try to collect
> some numbers on a slow CPU system where it's most likely to make a
> difference.

I doubt it was a big win even in 1988; it was just one of those things
that were done because it seemed like the right optimization. I don't
know of any measurements from the time showing that it was necessary.

I do recall making a few simple measurements when I did the initial fb
implementation, and they showed no impact on performance from checking
the GC values at each request rather than only at validate time. Any
application that cared about core X performance would be careful to
batch up as many objects as possible into a single request, at which
point the overhead of a couple of compares would be completely lost.

You might go look at the ValidateGC code and see how many tests are in
the most complicated path. And don't forget that by eliminating the
code from ValidateGC, you're avoiding doing things like line-width
comparisons for GCs which will only ever be used for solid fills or
blits.

Btw, you should have seen the original VAX GPX driver; iirc it had
magic computations on GC values that indexed arrays full of pointers to
various optimized rendering functions. Fixing GC validation in that
driver was no picnic...

-- 
[email protected]
_______________________________________________
[email protected]: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel
