On Fri, 08 May 2015 13:45:34 +0100, Pekka Paalanen <[email protected]> wrote:
The self-test uses the following new things that have not been used in the Pixman code base before:
- fork()
- waitpid()
- sys/wait.h

Should I be adding tests in configure.ac for these?
I don't know the answer to that one, sorry.
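[For reference, the conventional autoconf idiom for such checks would look roughly like the sketch below. This is not existing Pixman configure.ac code, and whether the self-test should be hard-required or skipped when the checks fail is a separate decision:]

```
# Sketch only: standard autoconf macros, not current Pixman configure.ac.
AC_CHECK_HEADERS([sys/wait.h])
AC_CHECK_FUNCS([fork waitpid], [],
               [AC_MSG_WARN([fork/waitpid missing; self-test would be skipped])])
```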
Ben, I think you know better what the actual tests for "More accurate COVER_CLIP_NEAREST|BILINEAR" should look like. I hope this implements the fence part properly. Would you like to write the out-of-bounds tests, or explain to me what they should do?
Without going to a lot more trouble to trap bad accesses at finer than page granularity, I think those fences are the best you can do.

The main aim of the test should be to check that when a scaled plot requires source pixels going right up to the edge of the source image coordinate space, it doesn't read beyond the limits. At present, this is guaranteed because we fall back from a COVER fetcher to a PAD/REPEAT/NONE/REFLECT fetcher instead (or the equivalent categories of fast paths) before we get all the way there - although this will only be apparent from benchmarking or by interrupting the test in a debugger. To prove the validity of the patch in question, we need to be able to demonstrate that we can safely continue to use a COVER routine all the way to the edge of the source. Or at least, if we have to compromise and set a tighter threshold before switching to a more general-purpose routine, that it is still safe to do so.

What we need to test:

* Both nearest-neighbour and bilinear scaling.

* A range of scale factors, both reduction and enlargement. I remember that I used a variety of routines for different scales of reduction (1..2x, 2..3x, 3..4x and so on) to avoid having to hold the integer part of the increment in a register, so we need to test up to at least 8x reduction.

* Alignment with both the extreme minimum and extreme maximum source coordinates permitted. Given that the scale factors can't be assumed to be nice numbers, I think we need to test them independently, using the translation offsets in the transformation matrix to ensure they align as desired.

* Possibly "near miss" cases as well, though these would probably want to be weighted so that positions at or closer to the limit are checked most.

* Both horizontal and vertical axes.
* We should survey which Porter-Duff operations have scaled fast paths (across all platforms) and make sure we test each of those, plus at least one other which will guarantee that we're exercising the fetchers - XOR is probably safe for this.

For nearest-neighbour scaling, the acceptance criteria should be:

* Low end: the centre of the first output pixel coincides with source coordinate pixman_fixed_e.
* High end: the centre of the last output pixel coincides with source coordinate pixman_fixed_1 * width.

For bilinear scaling:

* Low end: the centre of the first output pixel coincides with source coordinate pixman_fixed_1 / 2 (i.e. the centre of the first source pixel).
* High end: I'm slightly torn on this one. You could say pixman_fixed_1 * width - pixman_fixed_1 / 2, but it's also true that BILINEAR_INTERPOLATION_BITS comes into play, meaning that some slightly higher coordinates should also be achievable without requiring any input from the out-of-bounds pixel. For BILINEAR_INTERPOLATION_BITS=7, anything up to and including 0x0.01FF higher than this should be indistinguishable. And yes, that does mean that there is a very slight bias, because coordinates are truncated by 9 bits, rounding towards zero.

Having had to carefully think through writing the specification above, I have some idea how I'd go about writing it - but if you can follow that description, feel free to have a go yourself!

Ben

_______________________________________________
Pixman mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/pixman
