Dear Users!

I ran into a problem with data reading while challenging R (and myself)
from a validation point of view.
For this, I tried to use one of the NIST reference datasets (
http://www.itl.nist.gov/div898/strd/index.html),
and the result departed a bit from my expectations. This dataset (case
SmLs07) is designed to stress cancellation and accumulation errors, which
explains the uncommon look of the txt file:

Treatment   Response
           1    1000000000000.4
           1    1000000000000.3
           1    1000000000000.5
           ......
           2    1000000000000.2
           2    1000000000000.4
           .....
           3    1000000000000.4
           3    1000000000000.6
           3    1000000000000.4
           .........
Then, after read.table(), I expected to see the same values, but instead I got this:

    Treatment              Response
1           1 1000000000000.4000244
2           1 1000000000000.3000488
3           1 1000000000000.5000000
.........
22          2 1000000000000.3000488
23          2 1000000000000.1999512
24          2 1000000000000.4000244
.......
58          3 1000000000000.4000244
59          3 1000000000000.5999756
60          3 1000000000000.4000244
61          3 1000000000000.5999756
62          3 1000000000000.4000244
......
with lots of extra digits appearing out of nowhere. I assume these digits
come from the binary representation of such tricky decimal numbers, but my
question is: how can I avoid this feature of the binary representation?
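Below is a minimal sketch (no file needed) of what I believe is happening: a decimal like 1000000000000.4 is stored as the nearest IEEE-754 double, and printing with many decimals simply reveals that stored value. Near 1e12 the spacing between consecutive doubles is 2^-13 (about 0.000122), which matches the .4000244 pattern above. The file name in the commented read.table() call is only a placeholder.

```r
## The decimal 1000000000000.4 cannot be stored exactly; read.table()
## keeps the nearest double, 1000000000000.4000244140625.
x <- 1000000000000.4

## Default printing rounds to 7 significant digits and hides the issue:
print(x)                 # 1e+12

## Asking for more decimals exposes the binary representation:
sprintf("%.7f", x)       # "1000000000000.4000244"

## For display, round to the precision actually present in the data:
sprintf("%.1f", x)       # "1000000000000.4"

## To preserve the exact decimal text, one option is to read the column
## as character (placeholder file name and column layout assumed):
## df <- read.table("SmLs07.txt", header = TRUE,
##                  colClasses = c("integer", "character"))
```

So the extra digits are not introduced by read.table() itself; they only become visible when printing with more digits than a double can faithfully carry.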

Moreover, I wonder whether this may raise some questions in a regulated
environment.
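In a validation setting, the usual practice (as I understand it) is to compare computed results against certified values with a tolerance rather than expecting exact decimal agreement; R's all.equal() does this with a relative tolerance. A small sketch, with the long decimal literals standing in for a certified value and its printed double:

```r
## Both literals parse to the same nearest double, so they compare equal
## under all.equal()'s default relative tolerance (~1.5e-8):
all.equal(1000000000000.4, 1000000000000.4000244)   # TRUE

## A genuinely different value fails the comparison:
isTRUE(all.equal(1000000000000.4, 1000000000000.5)) # FALSE
```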


______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
