Hello,

I was looking at the recent FTBFS of libgd2, which prevented security fixes from reaching the Debian archive for more than a week. The failures were restricted to several architectures.

By the look of it, the errors are simple arithmetic inaccuracies in tests that expect pixel-exact results. I was specifically concerned about the gdimagerotate/bug00067 test on i386: the result of the rotate operation, while not comparing equal to the expected image, looked the same to the naked eye.
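
If pixel-exact equality is too strict, such a test could compare with a small per-channel tolerance instead. A minimal sketch of that idea using the public libgd API (images_match and the tolerance are made up for illustration; this is not how the upstream test harness actually compares images):

    #include <stdlib.h>
    #include <gd.h>

    /* Fuzzy comparison: images match if every colour channel of every
       pixel differs by at most `tol`. Alpha is ignored for brevity. */
    static int images_match(gdImagePtr a, gdImagePtr b, int tol)
    {
        int x, y;

        if (gdImageSX(a) != gdImageSX(b) || gdImageSY(a) != gdImageSY(b))
            return 0;
        for (y = 0; y < gdImageSY(a); y++) {
            for (x = 0; x < gdImageSX(a); x++) {
                int ca = gdImageGetTrueColorPixel(a, x, y);
                int cb = gdImageGetTrueColorPixel(b, x, y);

                if (abs(gdTrueColorGetRed(ca) - gdTrueColorGetRed(cb)) > tol
                    || abs(gdTrueColorGetGreen(ca) - gdTrueColorGetGreen(cb)) > tol
                    || abs(gdTrueColorGetBlue(ca) - gdTrueColorGetBlue(cb)) > tol)
                    return 0;
            }
        }
        return 1;
    }

With tol = 0 this reduces to the current exact comparison; a tolerance of 1 or 2 might already absorb the kind of rounding differences I saw.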

Slight differences in the computations on different architectures are to be expected, e.g. if those architectures use different floating point formats, although that shouldn't matter much in the test I mentioned (by a rough estimate it needs a precision of about 1/2^18 -- 1/2^20, while an IEEE 754 float is more precise than that). However, I was surprised that when I rebuilt with optimizations turned off, the test suite failed as well, but with _different_ failures. That suggests something dodgy is going on either in gcc or in the code. (One possible explanation on i386 is the x87 FPU's excess precision: depending on the optimization level, gcc keeps intermediates in 80-bit registers or spills them to narrower memory slots, which changes the rounding.)
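
To spell out that estimate (the 1/2^18 -- 1/2^20 figure is my own rough guess; the snippet below only does the arithmetic):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* IEEE 754 single precision has a 24-bit significand, so its
           relative precision is FLT_EPSILON = 2^-23. */
        printf("float epsilon:   %g\n", (double)FLT_EPSILON);
        /* Precision the test needs, by my rough estimate. */
        printf("needed, roughly: %g .. %g\n",
               1.0 / (1 << 20), 1.0 / (1 << 18));
        return 0;
    }

This prints an epsilon of about 1.2e-07 against a needed precision of roughly 9.5e-07 .. 3.8e-06, so a single-precision float has a few bits of headroom even before considering doubles.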

Anyway, I guess libgd2's aim isn't to provide pixel-perfect image manipulation, but rather accessible image functions for, e.g., web servers running PHP. In that case, the testsuite doesn't really reflect the requirements the library should fulfill, and it should focus more on security than on accuracy.

I would propose dropping the testsuite from the package build entirely, since in its present state it is inherently unreliable and will keep causing FTBFS. Instead, an autopkgtest suite could be created (running the same tests), which could be run automatically on ci.debian.net. Such a suite could probably even be rigged to run under valgrind, which could catch some memory errors. At the same time, the testsuite could be made more lenient (or the library code more accurate), but that would require substantially more work and I don't know whether it would be desirable.
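
For concreteness, the declaration could be as simple as this debian/tests/control (just a sketch; the test name is made up, and debian/tests/upstream-testsuite would be a small script that builds and runs the upstream tests, optionally under valgrind):

    Tests: upstream-testsuite
    Depends: @, build-essential
    Restrictions: allow-stderr

Here "Depends: @" pulls in the binary packages built from the source, and allow-stderr keeps stray warnings on stderr from failing the run.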

Please let me know what you think.

Regards

    Jiri Palecek
