Hi Bernado,

In many financial applications, if you convert dollars and cents to
pennies (e.g. $1.10 to 110) and divide by 100 at the very end, you can
maintain higher precision. This applies primarily to sums. It is similar
to keeping track of the decimal fractions that have exact representations
in floating point. It is good to know where the errors come from.
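A minimal sketch of the pennies idea (the prices here are made-up data,
just for illustration):

```r
# Sum in whole pennies (integers), divide by 100 once at the very end.
prices_cents <- rep(110L, 100)       # $1.10 each, stored as integer cents
total <- sum(prices_cents) / 100     # single division at the end
total == 110                         # TRUE: 11000/100 is exact
sum(rep(1.1, 100)) == 110            # FALSE: repeated 1.1 accumulates error
```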

I suppose you could also improve the sum by understanding the decimal-to-
binary rounding algorithms. I have noticed differences in computations
between Sun FPUs and Intel FPUs: when presented with the same set of
operations, the floating-point hardware does not appear to perform the
operations in the same order.
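One way to see why the order of operations matters: floating-point
addition is not associative, so hardware or compilers that regroup a chain
of additions can produce a differently rounded result. A tiny illustration:

```r
# Floating-point addition is not associative: regrouping changes rounding.
a <- (0.1 + 0.2) + 0.3
b <- 0.1 + (0.2 + 0.3)
a == b                    # FALSE: the two groupings round differently
sprintf("%a", c(a, b))    # the hex forms differ in the last mantissa digit
```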

Consider

   sprintf("%a", sum(rep(1.1, 100)))    # overestimates the sum
  [1] "0x1.b800000000001p+6"

   sprintf("%a", sum(rep(11, 100))/10)  # this gives the correct answer
  [1] "0x1.b8p+6"

  110 = 0b110 1110 = 64 + 32 + 8 + 4 + 2, which matches "0x1.b8p+6" --
reading the hex mantissa is a little tricky given the 53-bit significand
(I think this is right)

  Note

   cmp <- ifelse(sum(rep(1.1, 100)) == sum(rep(11, 100))/10, "equal",
                 "unequal")
   cmp
  [1] "unequal"

   cmp <- ifelse(sum(rep(1.1, 100)) > sum(rep(11, 100))/10, "greater than",
                 "less than or equal")
   cmp
  [1] "greater than"
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Best-way-to-compute-a-sum-tp2267566p2277489.html
Sent from the R help mailing list archive at Nabble.com.

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
