On 2023-04-20 11:52, Jakub Jelinek wrote:
> Why? Unless an implementation guarantees <= 0.5 ulp errors, it can be one or more ulps off, so why is an error at or near 1.0 or -1.0 any worse than similar errors for other values?
In a general sense, maybe not, but when the error breaches the bounds of admissible values and can reasonably be corrected, letting it slide seems worse IMO.
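To make that concrete, here is a minimal sketch (not glibc's actual implementation, and the wrapper name is made up) of what clamping a bounded function's result to its representable range could look like:

#include <math.h>

/* Hypothetical wrapper, purely for illustration: if the underlying
   sin() ever returns a value just outside the mathematical range
   [-1.0, 1.0] because of rounding error, pull it back to the
   representable bound.  NaN compares false with everything, so a
   NaN input still propagates unchanged.  */
static double
clamped_sin (double x)
{
  double r = sin (x);
  if (r > 1.0)
    return 1.0;
  if (r < -1.0)
    return -1.0;
  return r;
}

With something like that in place on the libm side, the compiler could safely assume a result range of [-1.0, 1.0] for value-range propagation.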
> Similarly for other functions which have other ranges, perhaps not with such nice round numbers. Say asin has the range [-pi/2, pi/2]; those numbers aren't exactly representable, but is it any worse to round those values towards -inf or +inf, or worse, give something 1-5 ulps further from that interval, compared to other 1-5 ulp errors?
I agree the argument in favour of allowing errors breaching the mathematical bounds is certainly stronger for bounds that are not exactly representable. I just feel like the implementation ought to take the additional effort when the bounds are representable and make it easier for the compiler.
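As an aside on the representability point, a couple of lines are enough to see why an exact bound for asin() doesn't exist (M_PI_2 is the POSIX/XSI constant for the double approximation of pi/2; this is only an illustration, build with -lm):

#include <math.h>
#include <stdio.h>

/* Print the doubles adjacent to M_PI_2.  The true pi/2 is irrational,
   so it lies strictly between two neighbouring doubles, one of which
   is M_PI_2; asin() therefore has no exactly representable upper bound.  */
int
main (void)
{
  double below = nextafter (M_PI_2, -INFINITY);
  double above = nextafter (M_PI_2, INFINITY);
  printf ("%.17g < %.17g < %.17g\n", below, M_PI_2, above);
  return 0;
}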
For bounds that aren't representable, one could get error bounds from the libm-test-ulps data in glibc, although I reckon those won't be exhaustive. From a quick peek at the sin/cos data, the arc target seems to be among the worst performers at about 7 ulps, although if you include the complex routines we get close to 13 ulps. The very worst imprecision among all math routines (that's gamma) is at 16 ulps on the power target in glibc tests, so allowing about 25-30 ulps of error past the bounds might work across the board.
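If one went that route, the check could be as simple as widening the representable bounds by the chosen slack. A sketch assuming the 25-30 ulps guess above (the helper name and the constant are mine, not anything from glibc or GCC):

#include <math.h>
#include <stdbool.h>

/* Assumed slack, taken from the rough 25-30 ulps guess above; it is
   not a guarantee made by any libm.  */
#define ULP_SLACK 30

/* Return true if RESULT lies within ULP_SLACK ulps of the interval
   [LO, HI], by stepping the bounds outwards with nextafter.  */
static bool
within_slack (double result, double lo, double hi)
{
  for (int i = 0; i < ULP_SLACK; i++)
    {
      lo = nextafter (lo, -INFINITY);
      hi = nextafter (hi, INFINITY);
    }
  return result >= lo && result <= hi;
}

For example, within_slack (asin (x), -M_PI_2, M_PI_2) would accept results up to 30 ulps outside the nearest-representable interval.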
But yeah, it's probably going to be guesswork.

Thanks,
Sid