John C Nash wrote:
This actually goes back a very long way. Peter is right to remind us that "optimizers" (in the sense of compilers) can corrupt well-designed algorithms. Compiler optimization of such tests is something some of us have fought for nearly 40 years; compiler writers don't do much floating-point computation, and they sometimes save a few microseconds of computer time at the cost of days and weeks of human time spent designing around such silliness. It is worse than a "bug" in that the code to work around the optimization can be very complicated and arcane -- and may not port across architectures. Clearly a program should execute as the code instructs, not as a compiler designer decides to reinterpret it.

Nevertheless, with a fairly large offset of 16.0 or 256.0 I have never seen such a test fail, and it ports across different precisions and different radix arithmetics. That said, I tend to prefer to turn off compiler optimization and keep my code clean, i.e., manually optimized where possible. I also save the two sides of the test to separate variables, though I can guess that some compilers would corrupt that pretty easily.
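For concreteness, here is a minimal sketch of the kind of loop I mean. The update step is just an illustrative Newton iteration for sqrt(2), not code from any particular program, and it assumes ordinary IEEE double arithmetic throughout:

    #include <stdio.h>

    /* Illustrative update step only: one Newton iteration for sqrt(2). */
    static double update(double x) { return 0.5 * (x + 2.0 / x); }

    int main(void)
    {
        const double offset = 16.0;    /* 256.0 works equally well */
        double xold = 1.0, xnew;

        for (;;) {
            xnew = update(xold);
            /* Save the two sides of the test to separate variables; an
               aggressive compiler, or extended-precision registers, may
               still defeat this -- which is the caveat discussed above. */
            double lhs = xnew + offset;
            double rhs = xold + offset;
            if (lhs == rhs)
                break;   /* remaining change is below what the offset can see */
            xold = xnew;
        }
        printf("converged to %.17g\n", xnew);
        return 0;
    }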

Yes. I might not have been quite fair to the offset method; the qbeta issue was slightly different. (The quantity

tx = xinbta - adj;

was assumed to become equal to xinbta in the original implementation, and didn't.

I'm pretty sure the current declaration of xinbta as "volatile" is a leftover from a time when the compiler had put xinbta in an 80-bit register whereas tx was 64-bit. The 1998 version in SVN also had a static volatile xtrunc used to force tx to 64 bits, but I seem to recall that we put that there, so presumably there was an even earlier version.)
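A stripped-down sketch of the effect, not R's actual qbeta source (the numbers are made up solely so that adj is below half an ulp of xinbta in 64-bit double but not in an 80-bit extended register):

    #include <stdio.h>

    int main(void)
    {
        volatile double xinbta = 0.123456789012345;  /* illustrative value only */
        double adj = 1e-18;                          /* below half an ulp of xinbta */
        volatile double tx = xinbta - adj;

        /* With both operands forced through 64-bit storage the subtraction
           rounds away and the test holds; with xinbta kept in an 80-bit
           register it need not. */
        printf("tx == xinbta ? %d\n", tx == xinbta);
        return 0;
    }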

Note that historically this issue did not upset early codes, where optimization was manual; it then became something to watch out for on each implementation when we computed locally; and now it could be a nasty but hidden trap with grid and cloud computing, where algorithms may be running on a number of different systems, possibly controlling critical ones.

JN


Peter Dalgaard wrote:
[EMAIL PROTECTED] wrote:
This comment is orthogonal to most of the others. It seems that folk often
want to test for equality of "real" numbers. One important case is
convergence testing. When writing my Compact Numerical Methods book I had to avoid lots of logical tests, but wanted to compare two REALs. The approach I found, possibly considered a trick, is to use an offset
and compare

    xnew + offset

to

    xold + offset

This works, with an offset of 10 for example, on the examples given to
motivate the current thread.

Motivation: for small xold and xnew we essentially compare the offset with
itself, while large xold and xnew are compared bitwise. In effect we change
from using a tolerance to using 1/tolerance.

Perfect? No. But usable? Yes. And I believe worth keeping in mind for
those annoying occasions where one needs to do a comparison but wants to
get round the issue of knowing the machine precision etc.
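To make the motivation concrete, a small sketch (values and offset are arbitrary, chosen only to show the two regimes):

    #include <stdio.h>

    int main(void)
    {
        const double offset = 10.0;

        /* Tiny values: both sides round to the offset itself, so they
           compare equal even though the raw values differ greatly. */
        double a = 1e-300, b = 3e-300;
        printf("small: %d\n", (a + offset) == (b + offset));  /* prints 1 */

        /* Values of ordinary size: the offset leaves the leading digits
           intact, so the comparison is essentially bit-for-bit. */
        double c = 2.0, d = 2.0 + 1e-9;
        printf("large: %d\n", (c + offset) == (d + offset));  /* prints 0 */

        return 0;
    }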

Hmm. Echoes of some early battles with R's qbeta() in this. I don't think it can be recommended.

The problem is that you can end up in a situation where xnew = xold - 1 ulp and xnewnew = xnew + 1 ulp, i.e. in two iterations you're back at xold.

Even in cases where this provably cannot happen, modern optimizers may make it happen anyway...
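A small illustration of the trap (the values are arbitrary; nextafter() just constructs a 1-ulp neighbour): if the iteration cycles between two adjacent doubles at this magnitude, the offset test never fires.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double offset = 10.0;
        double xold = 20.0;
        double xnew = nextafter(xold, 0.0);   /* xold minus 1 ulp */

        /* The 1-ulp gap at 20 survives the +10, so a 2-cycle between
           xold and xnew would never satisfy the test. */
        printf("converged? %d\n", (xnew + offset) == (xold + offset));  /* prints 0 */
        return 0;
    }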



--
   O__  ---- Peter Dalgaard             Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics     PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - ([EMAIL PROTECTED])              FAX: (+45) 35327907

