> Answering with a few examples:
>
> x=1.000000000000000000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=5.000000000000001000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=2.500000000000000400e-14 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=1.250000000000000200e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=6.250000000000001000e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=3.125000000000000500e-12 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=1.562500000000000400e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=7.812500000000003000e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=3.906250000000001400e-10 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=1.953125000000000700e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=9.765625000000004000e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=4.882812500000002000e-08 f=9.999999999999996000e-01 s=9.999999999999996000e-01
> x=2.441406250000001000e-07 f=9.999999999999900000e-01 s=9.999999999999900000e-01
> x=1.220703125000000700e-06 f=9.999999999997516000e-01 s=9.999999999997516000e-01
> x=6.103515625000004000e-06 f=9.999999999937912000e-01 s=9.999999999937912000e-01
> x=3.051757812500002000e-05 f=9.999999998447796000e-01 s=9.999999998447796000e-01
> x=1.525878906250001000e-04 f=9.999999961194893000e-01 s=9.999999961194893000e-01
> x=7.629394531250005000e-04 f=9.999999029872346000e-01 s=9.999999029872346000e-01
> x=3.814697265625002600e-03 f=9.999975746825600000e-01 s=9.999975746825600000e-01
> x=1.907348632812501400e-02 f=9.999393681227797000e-01 s=9.999393681227797000e-01
>
> Thus: below x = 9.765625000000004E-9, the value of the definitional formula (without the shortcut, indicated by "f=" in the table above) is strictly the same as the CM implementation (with the shortcut, indicated by "s="), in that they are both the double value "1.0".
>
> [I still don't understand what you mean by "(despite all being equal to 1 under double equals)".]
>
> What the implementation returns is, within double precision, the result of the mathematical definition of sinc: sin(x) / x. Hence there is no *need* to document any special case, not even x = 0: the fact that without the shortcut check we'd get NaN does not mean that the implementation departs from the definition, but rather that it correctly simulates it (at double precision).
>
> [However, if we assume that the doc should integrate numerical considerations, it doesn't hurt to add the extra prose, in which case it becomes necessary, IMHO, to explain the "1e-9".]
>
> Maybe I should also add a "SincTest" class where a unit test would check the strict equality of the values returned by the actual division and by the shortcut implementation?
>
> Gilles

I think the 1E-9 value is actually a mathematically provable bound. Since sin(x)/x is an alternating series, the difference |sin(x)/x - 1| is bounded from above by the next term in the Taylor expansion, namely x^2/6. Replacing sinc(x) by one is therefore rigorous provided this error remains below one ulp of one. This leads to the condition x^2/6 < 1E-16, i.e. |x| < 2.4E-8, which is actually less strong than |x| < 1E-9.

So I personally think that this is indeed an implementation detail. If you look at the implementation of any basic function, there are *always* boundary cases, with different formulas for different ranges of the argument. These cases are not advertised anywhere in the doc (and we should be thankful for that, in my opinion). As a user of sinc (and of spherical Bessel functions in general, for diffraction problems), the only thing I really care about is reliability near zero. How this reliability is enforced is really none of my concern.

One further point.
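As an aside, both the bitwise agreement in the table and the ulp bound can be checked with a short sketch. The class and helper names below are illustrative, not the actual Commons Math source; only the 1e-9 threshold comes from the discussion:

```java
// Illustrative sketch, not the Commons Math source: for |x| < 1e-9 the
// error bound x^2/6 < 1.7e-19 is far below one ulp of 1.0 (~2.2e-16),
// so the definitional sin(x)/x and the shortcut value 1.0 agree bitwise.
public class SincShortcutCheck {

    // Shortcut implementation as described in the thread.
    static double sinc(double x) {
        return Math.abs(x) < 1e-9 ? 1.0 : Math.sin(x) / x;
    }

    public static void main(String[] args) {
        // Reproduce the f=/s= comparison from the quoted table
        // (x multiplied by 5 at each step, starting from 1e-15).
        for (double x = 1e-15; x < 2e-2; x *= 5.0) {
            double f = Math.sin(x) / x; // definitional formula
            double s = sinc(x);         // with shortcut
            System.out.printf("x=%.18e f=%.18e s=%.18e%n", x, f, s);
        }
    }
}
```

Running the loop reproduces the table above, including the transition away from exact 1.0 near 5e-8.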
If you were to try to implement the next spherical Bessel function, you would find that the analytical floating-point estimate near zero (without the shortcut, to use Gilles' terminology) does *not* behave well, because, I think, of catastrophic cancellation. So in this case you *have* to carry out a test on x, and if x is too small, you have to replace the analytical expression of j1 by its Taylor expansion, in order to remain within one ulp of the expected value. The boundary is found using the preceding line of reasoning (alternating series). In this case, this is still an implementation detail, but *not* an optimization one: it is a *requirement*!
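To illustrate the point, here is a sketch of j1 (this is my own illustrative code, not Commons Math; the 1e-4 threshold and two-term series are choices derived from the alternating-series argument):

```java
// Illustrative sketch, not Commons Math: near zero the analytical form
// sin(x)/x^2 - cos(x)/x cancels catastrophically (both terms ~ 1/x),
// so a truncated Taylor series must be used instead.
public class SphericalBesselJ1 {

    static double j1Analytical(double x) {
        return Math.sin(x) / (x * x) - Math.cos(x) / x;
    }

    // Alternating Taylor series j1(x) = x/3 - x^3/30 + x^5/840 - ...;
    // the truncation error is bounded by the first omitted term, so for
    // |x| < 1e-4 the omitted x^5/840 is below one ulp of the leading x/3.
    static double j1(double x) {
        if (Math.abs(x) < 1e-4) {
            return x / 3.0 - x * x * x / 30.0;
        }
        return j1Analytical(x);
    }

    public static void main(String[] args) {
        double x = 1e-8; // true j1(1e-8) is about 3.33e-9
        System.out.println("analytical: " + j1Analytical(x));
        System.out.println("Taylor:     " + j1(x));
    }
}
```

At x = 1e-8 the analytical form subtracts two values near 1e8 whose spacing (one ulp) is about 1.5e-8, larger than the answer itself, so its result is garbage; the Taylor branch stays within one ulp of the expected value.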
Sébastien

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org