Re: [Rd] 32 vs 64 bit difference?
On Nov 26, 2011, at 05:20 , Terry Therneau wrote:

> I've spent the last few hours baffled by a test suite inconsistency.
>
> The exact same library code gives slightly different answers on the home
> and work machines - found in my R CMD check run. I've recopied the entire
> directory to make sure it's really identical code.
> The data set and fit in question has a pretty flat "top" to the likelihood.
> I put print statements into the "f()" function called by optim, and the
> two parameters and the likelihood track perfectly for 48 iterations, then
> start to drift ever so slightly:
>
> < theta= -3.254176 -6.201119 ilik= -16.64806
> > theta= -3.254176 -6.201118 ilik= -16.64806
>
> And at the end of the iteration:
>
> < theta= -3.207488 -8.583329 ilik= -16.70139
> > theta= -3.207488 -8.58 ilik= -16.70139
>
> As you can see, they get to the same max, but with just a slightly
> different path.
>
> The work machine is running 64 bit Unix (CentOS) and the home one 32 bit
> Ubuntu.
> Could this be enough to cause the difference? Most of my tests are
> based on all.equal, but I also print out 1 or 2 full solutions; perhaps
> I'll have to modify that?

We do see quite a lot of that, yes; even running 32- and 64-bit builds on the same machine, and sometimes to the extent that an algorithm converges on one architecture and diverges on the other (just peek over on R-sig-ME). The comparisons by "make check" on R itself also give off quite a bit of "last decimal chatter" when the architecture is switched. For some reason, OS X builds seem more consistent than Windows and Linux builds, although I have only anecdotal evidence of that.

However, the basic point is that compilers don't define the sequence of FPU operations down to the last detail: an internal extended-precision register may or may not be used, the order of terms in a sum may be changed, etc.
Since 64-bit code has different performance characteristics from 32-bit code (since you shift more data around for pointers), the FPU instructions may be differently optimized too.

> Terry Therneau

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd@cbs.dk  Priv: pda...@gmail.com

_______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] 32 vs 64 bit difference?
On 26/11/2011 09:23, peter dalgaard wrote:
> On Nov 26, 2011, at 05:20 , Terry Therneau wrote:
>> [...]
>> The work machine is running 64 bit Unix (CentOS) and the home one 32 bit
>> Ubuntu.
>> Could this be enough to cause the difference? Most of my tests are
>> based on all.equal, but I also print out 1 or 2 full solutions; perhaps
>> I'll have to modify that?
>
> [...]
>
> However, the basic point is that compilers don't define the sequence of
> FPU operations down to the last detail: an internal extended-precision
> register may or may not be used, the order of terms in a sum may be
> changed, etc.
> Since 64 bit code has different performance characteristics from 32 bit
> code (since you shift more data around for pointers), the FPU
> instructions may be differently optimized too.

However, the main difference is that all x86_64 chips have SSE2 registers, and so gcc makes use of them. Not all i686 chips do, so 32-bit builds on Linux and Windows use only the FPU registers. This matters at the ABI level: arguments get passed and values returned in SSE registers, so we can't decide to support only later i686 CPUs and make use of SSE2 without recompiling all the system libraries (but a Linux distributor could). And the FPU registers are 80-bit and use extended precision (the way we set up Windows, and on every Linux system I have seen); the SSE* registers are 2x64-bit.

I believe that all Intel Macs are 'Core' or later and so do have SSE2, although I don't know how much Apple relies on that.

(The reason I know that this is the 'main difference' is that you can often turn off the use of SSE2 on x86_64 and reproduce the i686 results. But because of the ABI differences, you may get crashes: in R this matters most often for complex numbers, which are 128-bit C99 double complex and passed around in an SSE register.)

-- 
Brian D. Ripley, rip...@stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK  Fax: +44 1865 272595
Re: [Rd] Case: package removed from CRAN, but not orphaned
Joris Meys <...@gmail.com> writes:

> I agree completely with Uwe on this one. Yet, the idea of Rainer is
> useful if you replace "remove the package" by "orphan the package".
> Some sort of automated orphanization. The package remains available
> that way if I understood it right, and can more easily be adopted by
> another developer that feels responsible. It might also make the
> manual cleanup (i.e. detecting poorly maintained packages without a
> responsive developer) a bit easier. After all, clicking a link once
> every so often to indicate you're still following the package isn't
> too much work for a package developer, and it could help the CRAN
> maintainers. Or am I completely off here?

Just a tiny update: thanks to the great new "packdep" package, it's very easy to find out how many of the packages on CRAN have *no* reverse dependencies:

  library(packdep)
  d1 <- map.depends()
  c <- dependencies(d1)
  sum(c$reverse == 0)/nrow(c)

66%. Furthermore, I would guess that orphaned packages would be more likely to be in this 66%. What about exempting packages with any reverse dependencies from the auto-orphanization process?

  Ben Bolker
Re: [Rd] Problem with & question about \preformatted in .Rd
I need to correct one minor typo below:

On Fri, Nov 25, 2011 at 6:12 PM, Gray Calhoun wrote:
(cut a lot)
> I expected from the documentation to get this:
>
> \inputencoding{utf8}
> \HeaderA{test}{test}{test}
> %
> \begin{Section}{problems}
> \begin{alltt}print('\bsl{}\bsl{}\bsl{}\begin\bsl{}\bsl{}\{block\bsl{}\bsl{}\}')\end{alltt}
> \end{Section}

The second-to-last line should read:

\begin{alltt}print('\bsl{}\bsl{}\bsl{}\bsl{}begin\bsl{}\bsl{}\{block\bsl{}\bsl{}\}')\end{alltt}

Sorry about that. --Gray

-- 
Gray Calhoun, Assistant Professor of Economics, Iowa State University
http://www.econ.iastate.edu/~gcalhoun