On 2007-11-12 21:29:44 -0500, Geert Bosch wrote:
> On Nov 12, 2007, at 12:37, Michael Matz wrote:
>> * only double is implemented, hence long double and float are missing at
>> least, at least the long double would need some implementation work,
>> as you can't simply enlarge the mantissa and hope all your results are
>> still correct
> Initially, float could simply use double and cast the result.
> For double->float the results will remain correctly rounded.

Yes, very probably, but this needs to be proven for each supported
function, due to the double rounding problem (this may take some time
for functions like pow).
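To make that concrete, a first version of the float functions could be
simple wrappers around the double ones (a rough sketch only; the name
is made up, and whether the final cast can ever disagree with direct
correct rounding to float is precisely what has to be proven for each
function):

  #include <math.h>

  /* Hypothetical wrapper: rely on a correctly rounded double sin() and
     let the cast do the double->float rounding ("double rounding"). */
  float sinf_via_double(float x)
  {
      return (float) sin((double) x);
  }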
> Proving correct rounding involves a lot of testing of problematic
> intervals. I'm not sure the work has been done for long double,
> or even whether it is feasible to do with the current state of
> the art.

If by "long double", you mean the x86 extended format, then this is
feasible in some domains for unary functions. Now, it is always
possible to implement Ziv's strategy up to a sufficiently large
precision; in practice, this would probably be correct rounding
(without being proven). You'll get good performance on average. Of
course, the functions may be slow on the "worst cases", but such worst
cases are rare (in practice, I think this can be a problem only with
real-time applications, where you need to bound the worst case).

>> * many C99 functions are missing: a{sin,cos}h, frexp, ldexp, modf, error
>> and gamma, remainder functions, I stopped looking at some point
>> many only partially or slowly implemented: pow, exp2

> Most of these can be easily implemented based on the other primitives.
> True, the property of correctly rounded results will be lost, but
> that does not throw away the usefulness of having the other functions
> be correctly rounded.

Worse than losing correct rounding, the accuracy may be quite bad in
some domains.

>> * relies on an IEEE-754 compatible processor, so it a) needs to have
>> floating point at all and b) even has to use correct precision, e.g. a
>> problem for x86. That means that either crlibm doesn't work correctly
>> when the processor's precision is reset by the user program (e.g. to use
>> extended precision), or it has to save/set/restore the state on
>> entry/exit of all its routines

> For x86, the use of -mfpmath=sse addresses most, if not all, issues
> related to excess precision for float and double.

But not all x86 processors support SSE2. However, I suppose you can
have crlibm support for some architectures only.

> Even if GCC has its own math library, users should still be able to
> specify linking with the system's native math library. As -frounding

Yes.

> In case of the correctly rounded functions, it is incredibly useful
> to know that all answers are both deterministic and as accurate as
> possible.

... and consistent. For instance, sin(x)*sin(x) + cos(x)*cos(x) will
always be very close to 1, as mathematically expected (see the small
test program at the end of this message).

>> * slow: it's much faster than other correctly rounded libraries, but
>> nevertheless also much slower than more mundane implementations of libm

> Do you have any numbers or approximations of how slow?

AFAIK, they are fast on average (Florent has some benchmarks).

> I think that in practice you'd probably have a number of
> implementations for the more popular functions, especially
> sin/cos, atan, log/exp. For reasonably good accuracy
> of the trigonometric functions (relative error less than
> 2 epsilon over entire domain), high-precision argument
> reduction is necessary. However, for many applications this
> is not an issue, and much more simplistic argument reduction
> can be used.

Yes, there can be a compile-time option and/or a pragma for that.

[...]

> It would be great for GCC to at least offer a mode in which the user
> can know that the math library gives the most accurate results possible.

I completely agree.
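To illustrate the consistency point above, here is a toy check
(nothing crlibm-specific; it can be linked against any libm) that
prints how far sin(x)*sin(x) + cos(x)*cos(x) drifts from 1:

  #include <math.h>
  #include <stdio.h>

  /* With accurate (ideally correctly rounded) sin and cos, the output
     should stay within a few ulps of 0.  Note that this identity alone
     cannot detect an argument-reduction error that affects sin and cos
     in the same way. */
  int main(void)
  {
      const double xs[] = { 0.5, 3.0, 1.0e10, 1.0e22 };
      for (unsigned i = 0; i < sizeof xs / sizeof xs[0]; i++) {
          double s = sin(xs[i]), c = cos(xs[i]);
          printf("x = %g   sin^2 + cos^2 - 1 = %.3e\n",
                 xs[i], s * s + c * c - 1.0);
      }
      return 0;
  }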
--
Vincent Lefèvre <[EMAIL PROTECTED]> - Web: <http://www.vinc17.org/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.org/blog/>
Work: CR INRIA - computer arithmetic / Arenaire project (LIP, ENS-Lyon)