A more important difference is the number of registers available on the CPU, which differs between i386 and x86_64. Hence computations get done in different orders by optimizing compilers.

And yes, all x86_64 CPUs have SSE, so the optimizer uses SSE instructions at the compiler settings we use.

As Duncan mentioned, the runtime (libc/libm; on Windows, mainly MSVCRT.dll) differs between OSes.

We know rather a lot about differences between platforms, as recent versions of R contain reference results for almost all the examples, and we from time to time compare output from CRAN check runs across platforms (this was part of the test suite run before releasing the 64-bit Windows port).

Almost all the 64-bit platforms are very close (and agree exactly on the R examples); 32-bit Solaris and Mac OS X are pretty close, 32-bit Linux has quite a lot of differences, and 32-bit Windows somewhat more.

On Thu, 10 Feb 2011, Petr Savicky wrote:

On Thu, Feb 10, 2011 at 10:37:09PM +1100, Graham Williams wrote:
Should one expect minor numerical differences between 64bit and 32bit R on
Windows? Hunting around the lists I've not been able to find a definitive
answer yet. It seems plausible given different precision arithmetic, but I wanted
to confirm from those who might know for sure.

One of the sources of differences between platforms is different compiler
settings. On Intel processors, the options determine whether the registers
use an 80-bit or a 64-bit representation of floating point numbers. In
memory, the representation is always 64-bit. Whether there is a difference
between registers and memory can be tested, for example, with the
following code

 #include <stdio.h>
 #define n 3
 int main(int argc, char *argv[])
 {
     double x[n];
     int i;
     /* store 1.0/(i + 5) in memory as a 64-bit double */
     for (i=0; i<n; i++) {
         x[i] = 1.0/(i + 5);
     }
     /* recompute the same value and compare it with the stored one;
        with 80-bit registers the comparison is done in extended
        precision and the two values differ */
     for (i=0; i<n; i++) {
         if (x[i] != 1.0/(i + 5)) {
             printf("difference for %d\n", i);
         }
     }
     return 0;
 }

If the compiler uses SSE arithmetic (-mfpmath=sse), then the output is empty.
If Intel's extended arithmetic is used, then we get

 difference for 0
 difference for 1
 difference for 2

On 32-bit Linuxes, the default was Intel's extended arithmetic, while on
64-bit Linuxes, the default is sometimes SSE. I do not know the situation
on Windows.
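With gcc, the arithmetic can usually be selected explicitly, for example
with -mfpmath=387 for the extended arithmetic and with -msse2 -mfpmath=sse
for SSE arithmetic; the exact defaults depend on the compiler version and
the target.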

Another source of differences is different optimization of expressions,
which may change the order in which floating point operations are carried out.
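For example, floating point addition is not associative, so if an optimizer
combines the terms of a sum in a different order, the last bits of the
result may change. A small sketch, assuming ordinary 64-bit double
arithmetic (the numbers are only illustrative):

 #include <stdio.h>
 #define N 5
 int main(void)
 {
     /* the same numbers summed left-to-right and right-to-left */
     double x[N] = {1e16, 3.14, -1e16, 2.71, 1.0};
     double fwd = 0.0, bwd = 0.0;
     int i;
     for (i = 0; i < N; i++)
         fwd += x[i];
     for (i = N - 1; i >= 0; i--)
         bwd += x[i];
     /* the two sums differ, because intermediate rounding depends
        on the order in which the terms are added */
     printf("forward:  %.17g\n", fwd);
     printf("backward: %.17g\n", bwd);
     return 0;
 }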

It is sometimes possible to obtain identical results on different platforms;
however, this cannot be guaranteed in general. For tree construction, even
minor differences in rounding may influence comparisons, and this may
result in a different form of the tree.
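As a small sketch of this effect, again assuming ordinary 64-bit double
arithmetic (the split value 0.6 and the expression below are only
illustrative): a one-ulp difference coming from a different grouping of
the same sum is enough to send a value to the other side of a split.

 #include <stdio.h>
 int main(void)
 {
     double split = 0.6;
     /* the same expression, grouped in two ways; the results differ
        by one ulp */
     volatile double a = 0.1, b = 0.2, c = 0.3;
     double v1 = (a + b) + c;    /* slightly above 0.6             */
     double v2 = a + (b + c);    /* the double closest to 0.6      */
     /* the two values fall on different sides of the split, so a
        rule such as "x <= 0.6" would route them differently */
     printf("v1 <= split: %d\n", v1 <= split);   /* prints 0 */
     printf("v2 <= split: %d\n", v2 <= split);   /* prints 1 */
     return 0;
 }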

Petr Savicky.

--
Brian D. Ripley,                  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
