Dear R developers,
By default, R uses the "long double" data type to get extra precision for
intermediate computations, with a small performance tradeoff.
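For example, on builds where the long double type really carries extra mantissa
bits (the usual 80-bit x87 type), the difference is easy to see; the results
below assume such a build, and Reduce() is used only to emulate a plain-double
accumulation for comparison:

x = c(1e16, 1, -1e16)
sum(x)          # 1: the intermediate 1e16 + 1 is exact in 80-bit long double
Reduce(`+`, x)  # 0: in plain double, 1e16 + 1 rounds back to 1e16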
Unfortunately, on all Intel x86 computers I have ever seen, long doubles
(implemented in the x87 FPU) are extremely slow whenever a special
representation (NA, NaN, or infinity) is involved, probably because these
values trigger poorly optimized microcode in the CPU firmware. A function
such as sum() becomes more than a hundred times slower!
Test code:
a = runif(1e7); system.time(for(i in 1:100) sum(a))  # 100 sums, no NA
b = a; b[1] = NA; system.time(sum(b))                # a single sum with one NA
The slowdown factors are as follows on a few Intel CPUs:
1) Pentium Gold G5400 (Coffee Lake, 8th generation), 64-bit R: 140 times slower with NA
2) Pentium G4400 (Skylake, 6th generation), 64-bit R: 150 times slower with NA
3) Pentium G3220 (Haswell, 4th generation), 64-bit R: 130 times slower with NA
4) Celeron J1900 (Atom Silvermont), 64-bit R: 45 times slower with NA
I do not have access to more recent Intel CPUs, but I doubt that the situation
has improved much.
Recent AMD CPUs have no significant slowdown.
There is no significant slowdown on Intel CPUs (more recent than Sandy Bridge)
for 64-bit floating-point calculations based on SSE2. Therefore, operators
working on plain doubles, such as '+', are unaffected.
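With the vectors from the test code above this is easy to check; if the
statement holds on your CPU, the two timings should be about the same:

system.time(for(i in 1:100) a + 1)  # no NA
system.time(for(i in 1:100) b + 1)  # one NA, but '+' works on SSE2 doubles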
I do not know whether recent ARM CPUs have slowdowns on FP64... Maybe somebody
can test.
Since NAs are not rare in real life, I think it would be worth adding an extra
check in functions based on long doubles, such as sum(). For cumulative
functions, the check for special representations does not necessarily have to
be performed at each iteration.
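As a rough R-level sketch of the idea (the real patch would of course go into
the C accumulation loops; fastSum is a made-up name, and the NA/NaN/Inf
distinctions are ignored for brevity), the check can be done once on the whole
vector rather than inside the accumulation loop:

fastSum = function(x) {
  if (anyNA(x)) return(NA_real_)  # anyNA() scans plain doubles, so it stays fast
  sum(x)                          # NA-free data: the long-double loop runs at full speed
}
system.time(for(i in 1:100) fastSum(b))  # fast again, despite the NA in b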
If you are interested, I can write a bunch of patches to fix the main functions
using long doubles: cumsum, cumprod, sum, prod, rowSums, colSums, matrix
multiplication (matprod="internal").
What do you think of that?
--
Sincerely
André GILLIBERT