Re: [Rd] eigen in beta
All the systems I tried this on give the 'correct' answer, including

   x86_64 Linux, FC5 (gcc 4.1.1)
   i686 Linux, FC5
   ix86 Windows (both gcc 3.4.5 and gcc pre-4.3.0)
   Sparc Solaris, with gcc3, gcc4 and SunPro compilers.

Mainly with R 2.5.0 beta, some with R-devel (where the code is unchanged).

We have seen problems specific to RHEL's Fortran compilers on x86_64
several times before.  I would strongly recommend compiler updates.

On Tue, 10 Apr 2007, Peter Dalgaard wrote:

> Paul Gilbert wrote:
>> Here is the example. Perhaps others could check on other platforms.
>> It is only the first eigenvalue that is different. I am relatively
>> sure the old values are correct, since I compare with an alternate
>> calculation using the expansion of a polynomial determinant.
>>
>> z <- t(matrix(c(
>>  0, 0, 0, 0, 0, 0, 0, 0, 0, -0.0064083373167516857, -0.14786612501440565826,  0.368411802235074137,
>>  0, 0, 0, 0, 0, 0, 0, 0, 0,  0.0568624483195125444,  0.08575928008564302762, -0.101993668348446601,
>>  0, 0, 0, 0, 0, 0, 0, 0, 0,  0.0039684327579889069, -0.2857482925046247,      0.202241897806646448,
>>  1, 0, 0, 0, 0, 0, 0, 0, 0, -0.0222834092601282285, -0.09126708346036176145,  0.644249961695308682,
>>  0, 1, 0, 0, 0, 0, 0, 0, 0, -0.0032676036920228878,  0.16985862929849462888,  0.057282326361118636,
>>  0, 0, 1, 0, 0, 0, 0, 0, 0,  0.0148488735227452068, -0.06175528918915401677,  0.109566197834008949,
>>  0, 0, 0, 1, 0, 0, 0, 0, 0, -0.0392756265125193960,  0.04921079262665441212,  0.078176878215115805,
>>  0, 0, 0, 0, 1, 0, 0, 0, 0, -0.001393745191973,      0.02009823693764142133, -0.207228935136287512,
>>  0, 0, 0, 0, 0, 1, 0, 0, 0,  0.0273358858605219357,  0.03830466468488327725,  0.224426004034737836,
>>  0, 0, 0, 0, 0, 0, 1, 0, 0, -0.1456426235151105919,  0.28688029213315069388,  0.326933845656016908,
>>  0, 0, 0, 0, 0, 0, 0, 1, 0,  0.0164670122082246559, -0.21966261349875662590,  0.036404179329694988,
>>  0, 0, 0, 0, 0, 0, 0, 0, 1,  0.0146156940584119890,  0.07505490943478997090,  0.077660578370038813
>> ), 12, 12))
>>
>> R-2.5.0 gives
>> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
>>  [1]  0.8465266+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
>>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
>>  [7]  0.6177419+0.000i     -0.5604582+0.1958709i -0.5604582-0.1958709i
>> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
>>
>> R-2.4.1 and many, many previous versions gave
>> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
>>  [1]  0.8794798+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
>>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
>>  [7] -0.5604582+0.1958709i -0.5604582-0.1958709i  0.5847887+0.000i
>> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
>>
>> > Sys.info()
>>   sysname  "Linux"
>>   release  "2.4.21-40.ELsmp"
>>   version  "#1 SMP Thu Feb 2 22:13:55 EST 2006"
>>   nodename "mfa04559"
>>   machine  "x86_64"
>>
>> Paul Gilbert
>
> Hmm, I don't get that
>
> > version$version.string
> [1] "R version 2.5.0 beta (2007-04-10 r41105)"
> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
>  [1]  0.8794798+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
>  [7] -0.5604582+0.1958709i -0.5604582-0.1958709i  0.5847887+0.000i
> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
> > Sys.info()
>   sysname  "Linux"
>   release  "2.6.20-1.2933.fc6"
>   version  "#1 SMP Mon Mar 19 11:38:26 EDT 2007"
>   nodename "titmouse2.kubism.ku.dk"
>   machine  "i686"
>   login    "pd"
>   user     "pd"
>
> And
>
> > version$version.string
> [1] "R version 2.5.0 beta (2007-04-09 r41098)"
> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
>  [1]  0.8794798+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
>  [7] -0.5604582+0.1958709i -0.5604582-0.1958709i  0.5847887+0.000i
> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
> > Sys.info()
>   sysname  "Linux"
>   release  "2.6.18.8-0.1-default"
>   version  "#1 SMP Fri Mar 2 13:51:59 UTC 2007"
>   nodename "viggo"
>   machine  "x86_64"
>   login    "pd"
Re: [Rd] eigen in beta
I've also tried on Redhat EL4 (x86_64) [but with /usr/local/.. gcc/gfortran 4.1.1] and FC6 (ix86 & x86_64) and Ubuntu dapper (x86_64), and never got the wrong results.

> "BDR" == Prof Brian Ripley <[EMAIL PROTECTED]>
> on Wed, 11 Apr 2007 07:43:38 +0100 (BST) writes:

  BDR> All the systems I tried this on give the 'correct' answer,
  BDR> including x86_64 Linux, FC5 (gcc 4.1.1); i686 Linux, FC5;
  BDR> ix86 Windows (both gcc 3.4.5 and gcc pre-4.3.0); Sparc Solaris,
  BDR> with gcc3, gcc4 and SunPro compilers.

  BDR> Mainly with R 2.5.0 beta, some with R-devel (where the code is
  BDR> unchanged).

  BDR> We have seen problems specific to RHEL's Fortran compilers on
  BDR> x86_64 several times before.  I would strongly recommend
  BDR> compiler updates.

  BDR> On Tue, 10 Apr 2007, Peter Dalgaard wrote:

  >> Paul Gilbert wrote:
  >>> Here is the example. Perhaps others could check on other
  >>> platforms. It is only the first eigenvalue that is different.
  >>> I am relatively sure the old values are correct, since I compare
  >>> with an alternate calculation using the expansion of a polynomial
  >>> determinant.
  >>>
  >>> z <- t(matrix(c(
  >>>  0, 0, 0, 0, 0, 0, 0, 0, 0, -0.0064083373167516857, -0.14786612501440565826,  0.368411802235074137,
  >>>  0, 0, 0, 0, 0, 0, 0, 0, 0,  0.0568624483195125444,  0.08575928008564302762, -0.101993668348446601,
  >>>  0, 0, 0, 0, 0, 0, 0, 0, 0,  0.0039684327579889069, -0.2857482925046247,      0.202241897806646448,
  >>>  1, 0, 0, 0, 0, 0, 0, 0, 0, -0.0222834092601282285, -0.09126708346036176145,  0.644249961695308682,
  >>>  0, 1, 0, 0, 0, 0, 0, 0, 0, -0.0032676036920228878,  0.16985862929849462888,  0.057282326361118636,
  >>>  0, 0, 1, 0, 0, 0, 0, 0, 0,  0.0148488735227452068, -0.06175528918915401677,  0.109566197834008949,
  >>>  0, 0, 0, 1, 0, 0, 0, 0, 0, -0.0392756265125193960,  0.04921079262665441212,  0.078176878215115805,
  >>>  0, 0, 0, 0, 1, 0, 0, 0, 0, -0.001393745191973,      0.02009823693764142133, -0.207228935136287512,
  >>>  0, 0, 0, 0, 0, 1, 0, 0, 0,  0.0273358858605219357,  0.03830466468488327725,  0.224426004034737836,
  >>>  0, 0, 0, 0, 0, 0, 1, 0, 0, -0.1456426235151105919,  0.28688029213315069388,  0.326933845656016908,
  >>>  0, 0, 0, 0, 0, 0, 0, 1, 0,  0.0164670122082246559, -0.21966261349875662590,  0.036404179329694988,
  >>>  0, 0, 0, 0, 0, 0, 0, 0, 1,  0.0146156940584119890,  0.07505490943478997090,  0.077660578370038813
  >>> ), 12, 12))
  >>>
  >>> R-2.5.0 gives
  >>> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
  >>>  [1]  0.8465266+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
  >>>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
  >>>  [7]  0.6177419+0.000i     -0.5604582+0.1958709i -0.5604582-0.1958709i
  >>> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
  >>>
  >>> R-2.4.1 and many, many previous versions gave
  >>> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
  >>>  [1]  0.8794798+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
  >>>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
  >>>  [7] -0.5604582+0.1958709i -0.5604582-0.1958709i  0.5847887+0.000i
  >>> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
  >>>
  >>> > Sys.info()
  >>>   sysname  "Linux"
  >>>   release  "2.4.21-40.ELsmp"
  >>>   version  "#1 SMP Thu Feb 2 22:13:55 EST 2006"
  >>>   nodename "mfa04559"
  >>>   machine  "x86_64"
  >>>
  >>> Paul Gilbert

  >> Hmm, I don't get that
  >>
  >> > version$version.string
  >> [1] "R version 2.5.0 beta (2007-04-10 r41105)"
  >> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
  >>  [1]  0.8794798+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
  >>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
  >>  [7] -0.5604582+0.1958709i -0.5604582-0.1958709i  0.5847887+0.000i
  >> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
  >> > Sys.info()
  >>   sysname  "Linux"
  >>   release  "2.6.20-1.2933.fc6"
  >>   version  "#1 SMP Mon Mar 19 11:38:26 EDT 2007"
  >>   nodename "titmouse2.kubism.ku.dk"
  >>   machine  "i686"
  >>   login    "pd"
  >>   user     "pd"
  >>
  >> And
  >>
  >> > version$version.string
  >> [1] "R version 2.5.0 beta (2007-04-09 r41098)"
  >> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
  >>  [1]  0.8794798+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
  >>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
  >>  [7] -0.5604582+0.1958709i -0.5604582-0.1958709i  0.5847887+0.000i
  >> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
  >> > Sys.info()
  >>   sysname  "Linux"
  >>   release  "2.6.18.8-0.1-default"
  >>   version
Re: [Rd] is.loaded() and dyn.load()
Your aggressive tone would be unacceptable even if your comments were relevant. What's more, they are not. The point here is that under Windows it is very likely that Fortran compiled code is not being properly loaded unless g77 is used, and that is.loaded() would trigger the dynamic loading. It is neither a matter of underscores in variable names nor of fixed-form F77. See the example below.

I attach the FORTRAN 77 fixed-form source file tryme.f, which has been compiled with Compaq Visual Fortran 6.6c3:

# df tryme.f -dll -LINK -RELEASE

then, under R:

> dyn.load("tryme.dll")
> try.me <- function(X){
+   S <- .Fortran("tryme", as.integer(X), S = as.integer(0))$S
+   return(S)
+ }
> try.me(8)
Error in .Fortran("tryme", as.integer(X), S = as.integer(0)) :
  Fortran symbol name "tryme" not in load table
> is.loaded("tryme")
[1] TRUE
> try.me(8)
[1] 9

The problem does not show up if g77 is used:

# g77 tryme.f -shared -s -otryme.dll

> dyn.load("tryme.dll")
> try.me(8)
[1] 9

The issue does not show up on my OpenSUSE 10.2 box with R patched compiled with gfortran:

> dyn.load("tryme.dll")
> try.me <- function(X){
+   S <- .Fortran("tryme", as.integer(X), S = as.integer(0))$S
+   return(S)
+ }
> try.me(8)
[1] 9
> unlist(R.Version())
platform        "x86_64-unknown-linux-gnu"
arch            "x86_64"
os              "linux-gnu"
system          "x86_64, linux-gnu"
status          "Patched"
major           "2"
minor           "4.1"
year            "2007"
month           "03"
day             "20"
svn rev         "40858"
language        "R"
version.string  "R version 2.4.1 Patched (2007-03-20 r40858)"

All this regardless of the kind of Fortran language used, and independently of the presence of underscores in variable names.

Besides this, IMHO it might be interesting to discuss in this list the issues related to Fortran 90/95 and the use of different compilers, in view of a possible complete support from R, also considering that

1. under Linux, gfortran is now the default compiler for gcc > 4.0.0;
2. switching back to F77 might **not** be an option for many people.
My intention here was also to provide the community with some feedback on this, but it is hard to have a proper discussion in these conditions.

Kind regards,

Simone Giannerini

On 4/5/07, Prof Brian Ripley <[EMAIL PROTECTED]> wrote:

Did you read the comments under ?.Fortran about this? What you are doing is quite explicitly said not to be supported.

gfortran is not a supported Fortran compiler for R for Windows 2.4.1. It behaves differently from the supported g77. The behaviour is adapted to the compiler used when configure is used, and on Windows that is what the maintainers used, not what you are using.

If you follow the advice not to use underscores in names you are much less likely to confuse yourself, and you will produce portable code.

On Thu, 5 Apr 2007, Simone Giannerini wrote:

> Dear all,
>
> I am puzzled at the behaviour of is.loaded() when a dyn.load() call to a
> FORTRAN shared library is included in a file to be sourced.
> A reproducible example is the following:
>
> 1. the attached Fortran subroutine try_it.f90 performs a summation of the
> elements of a REAL*8 vector; compile with
>
> gfortran try_it.f90 -shared -s -otry_it.dll
>
> 2. create a file to be sourced (see the attached try_it.R) containing the
> following commands:
>
> BEGIN try_it.R
> dyn.load("try_it.dll")
>
> try.it <- function(X){
>   N <- length(X)
>   S <- .Fortran("try_it_", as.double(X), as.integer(N), S = as.double(0))$S
>   return(S)
> }
> END try_it.R
>
> 3. Switch to R
>
>> source("try_it.R")
>> try.it(1:10)
> Error in .Fortran("try_it_", as.double(X), as.integer(N), S = as.double(0)) :
>   Fortran symbol name "try_it_" not in load table
>> is.loaded("try_it_")
> [1] TRUE
>> try.it(1:10)
> [1] 55
>
> it looks like is.loaded() triggers the loading; inserting
> is.loaded("try_it_") in the file try_it.R does the trick, but
> is this behaviour expected?
>
> Thank you,
>
> Regards
>
> Simone
>
>> R.version
> platform i386-pc-mingw32
> arch i386
> os mingw3
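The disagreement in this thread comes down to how each Fortran compiler decorates symbol names in the object file. A small illustrative sketch (plain Python, not R; the `decorated` helper is hypothetical, and the conventions shown are the compilers' documented defaults, which flags such as gfortran's `-fno-underscoring` can change): g77 follows the f2c convention of one trailing underscore, or two when the Fortran name itself contains an underscore; gfortran appends a single trailing underscore; Compaq Visual Fortran emits upper-case names with no trailing underscore. This is why a `.Fortran("tryme")` call can find a g77-built symbol but not a CVF-built one.

```python
def decorated(name: str, compiler: str) -> str:
    """Linker-level symbol a Fortran compiler emits for `name` (defaults only)."""
    if compiler == "g77":        # f2c convention: extra "_" if name has one
        return name.lower() + ("__" if "_" in name else "_")
    if compiler == "gfortran":   # single trailing underscore, always
        return name.lower() + "_"
    if compiler == "cvf":        # Compaq Visual Fortran: upper case, no suffix
        return name.upper()
    raise ValueError(f"unknown compiler: {compiler}")

# The names from the two examples in this thread:
assert decorated("tryme", "g77") == "tryme_"
assert decorated("tryme", "cvf") == "TRYME"        # why "tryme" is not found
assert decorated("try_it", "g77") == "try_it__"    # two underscores appended
assert decorated("try_it", "gfortran") == "try_it_"
```

The last two lines also show why Ripley's advice to avoid underscores in Fortran names matters: for `try_it`, g77 and gfortran disagree on the decorated symbol, while for `tryme` they agree.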
[Rd] package incompatibility under 2.5.0 (p lease respond directly, I am not on r-devel)
Dear all,

For my package "ref" I have implemented extensive regression testing. It now fails to compile since the primitives "dim" and "dimnames" (and their assignment methods) no longer allow for additional arguments. I was using an additional argument "ref" with several methods. For "[.refdata" it still works; for "dim.refdata" it no longer does.

Could you please allow for additional arguments in the following generic functions (or primitives):

dim <- function(x, ...) UseMethod("dim")
"dim<-" <- function(x, ..., value) UseMethod("dim<-")
dimnames <- function(x, ...) UseMethod("dimnames")
"dimnames<-" <- function(x, ..., value) UseMethod("dimnames<-")
row.names <- function(x, ...) UseMethod("row.names")
"row.names<-" <- function(x, ..., value) UseMethod("row.names<-")
names <- function(x, ...) UseMethod("names")
"names<-" <- function(x, ..., value) UseMethod("names<-")

BTW: why does get("dim") return function (x) .Primitive("dim") and args() work on it, while get("[") returns .Primitive("[") and args() doesn't work on it?

Furthermore, until now "rownames" and "colnames" have been convenience wrappers for "dimnames". Consequently, implementing "dimnames" and "dimnames<-" would indirectly implement "rownames", "colnames" and their assignment methods. This no longer works for classes inheriting from "data.frame", because the assignment methods no longer work via "dimnames<-". I can imagine that this change breaks existing code in other packages as well - without formally throwing errors at package check time (as I said, I have unusually strict regression testing included in the example section, which other packages may not have).

If it is really necessary to treat data.frames differently, I'd recommend changing "rownames" and "colnames" accordingly, in order to have symmetry between accessor and assignment functions. That would mean defining "names" and "row.names" and their assignment methods for any class inheriting from data.frame, instead of "dimnames", correct?
Maybe *all* package maintainers should be warned about this, or R CMD check should check whether anyone defines "dimnames" or "dimnames<-" for any class inheriting from "data.frame".

Best regards

Jens Oehlschlägel

> -Original Message-
> From: [EMAIL PROTECTED]
> Sent: 08.04.07 16:50:29
> To: [EMAIL PROTECTED]
> CC: [EMAIL PROTECTED], [EMAIL PROTECTED]
> Subject: Package ref_0.92.tar.gz did not pass R CMD check
>
> Dear package maintainer,
>
> this notification has been generated automatically.
> Your package ref_0.92.tar.gz did not pass 'R CMD check' on
> Windows and will be omitted from the corresponding CRAN directory
> (CRAN/bin/windows/contrib/2.5/).
> Please check the attached log-file and consider to resubmit a version
> with increased version number that passes R CMD check on Windows.
> R version 2.5.0 alpha (2007-04-05 r41063)
>
> All the best,
> Uwe Ligges
> (Maintainer of binary packages for Windows)
>
> * using log directory 'd:/Rcompile/CRANpkg/local/2.5/ref.Rcheck'
> * using R version 2.5.0 alpha (2007-04-05 r41063)
> * checking for file 'ref/DESCRIPTION' ... OK
> * this is package 'ref' version '0.92'
> * checking package dependencies ... OK
> * checking if this is a source package ... OK
> * checking whether package 'ref' can be installed ... OK
> * checking package directory ... OK
> * checking for portable file names ... OK
> * checking DESCRIPTION meta-information ... OK
> * checking top-level files ... OK
> * checking index information ... OK
> * checking package subdirectories ... OK
> * checking R files for non-ASCII characters ... OK
> * checking R files for syntax errors ... OK
> * checking whether the package can be loaded ... OK
> * checking whether the package can be loaded with stated dependencies ... OK
> * checking for unstated dependencies in R code ... OK
> * checking S3 generic/method consistency ...
WARNING
> dim:
>   function(x)
> dim.refdata:
>   function(x, ref)
>
> dimnames:
>   function(x)
> dimnames.refdata:
>   function(x, ref)
>
> dimnames<-:
>   function(x, value)
> dimnames<-.refdata:
>   function(x, ref, value)
>
> See section 'Generic functions and methods' of the 'Writing R Extensions'
> manual.
> * checking replacement functions ... OK
> * checking foreign function calls ... OK
> * checking R code for possible problems ... OK
> * checking Rd files ... OK
> * checking Rd cross-references ... OK
> * checking for missing documentation entries ... OK
> * checking for code/documentation mismatches ... OK
> * checking Rd \usage sections ... OK
> * creating ref-Ex.R ... OK
> * checking examples ... ERROR
> Running examples in 'ref-Ex.R' failed.
> The error most likely occurred in:
>
> ### * refdata
>
> flush(stderr()); flush(stdout())
>
> ### Name: refdata
> ### Title: subsettable reference to matrix or data.frame
> ### Aliases: refdata [.refdata [<-.refdata [[.refdata [[<-.refdata
> ###   $.refdata $<-.refdata dim.refdata dim<-.refdata di
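The S3 consistency check that produced the WARNING above enforces a simple rule: a method's signature must begin with the generic's arguments, and may add extra ones only if the generic has `...` to absorb them. A minimal sketch of that rule (in Python for illustration, not R; `check_method` is a hypothetical re-creation of the idea, not the actual R CMD check code):

```python
import inspect

def check_method(generic, method, generic_has_dots=False):
    """True if `method`'s parameters extend `generic`'s in a compatible way."""
    g = list(inspect.signature(generic).parameters)
    m = list(inspect.signature(method).parameters)
    extra = m[len(g):]                      # arguments the method adds
    return m[:len(g)] == g and (generic_has_dots or not extra)

def dim(x): ...                             # like the primitive dim(x)
def dim_refdata(x, ref): ...                # like dim.refdata(x, ref)

# dim(x) has no "...", so the extra `ref` argument is flagged:
assert not check_method(dim, dim_refdata)
# a generic declared as dim(x, ...) would accept the same method:
assert check_method(dim, dim_refdata, generic_has_dots=True)
```

This is exactly the change Jens is asking for: redefining `dim` as `function(x, ...)` would make `dim.refdata(x, ref)` pass the check again.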
[Rd] Fortran coding standards
I have some comments on the Fortran code in the fseries package, in the file 4A-GarchModelling.f, especially the subroutine GARCHFIT and the function DSNORM. I appended the code to the end of an earlier message, but it was rejected by some rule. Let me first say that I am grateful that packages for financial econometrics exist in R.

Fortran 77 had PARAMETERs, and PARAMETERs equal to 9 and 200 should have been defined instead of repeatedly using "magic numbers". More importantly, the code will fail if NN exceeds 9, but the code does not check for this. I hope someone will fix this.

In the code dsged the variables half, one, two should be made parameters, and instead of

      IMPLICIT DOUBLE PRECISION (A-H, O-Z)

      IMPLICIT NONE

should be used and all variables declared. Although IMPLICIT NONE is not standard Fortran 77, it is standard Fortran 90 and is supported by g77. Experienced Fortranners know that IMPLICIT NONE catches errors. Another defect is the use of specific intrinsic functions such as DSQRT. There is no need to use these, since the SQRT function is generic, handling both single and double precision arguments.

Maybe there should be R coding standards to address such issues. I hope that eventually the Fortran code in R will use the modern features of Fortran 90 and later standards, using the gfortran compiler. However, with a little effort one can still write clean code in Fortran 77 that also conforms to later standards.

Vivek Rao

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
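Two of the fixes proposed above, named constants instead of magic numbers and an explicit bounds check before using a fixed-size work array, can be sketched compactly (in Python for brevity, not Fortran; `MAX_ORDER`, `MAX_OBS` and `garch_workspace` are illustrative names, not identifiers from the fSeries package):

```python
# Named constants replace the magic numbers 9 and 200 scattered through
# the original code, and the size check makes the NN > 9 failure mode an
# explicit error instead of a silent overrun of the work array.
MAX_ORDER = 9     # largest model order the fixed work array supports
MAX_OBS = 200     # largest number of observations it supports

def garch_workspace(nn: int, nobs: int) -> list:
    """Allocate the fixed-size work array, validating its bounds first."""
    if nn > MAX_ORDER:
        raise ValueError(f"model order {nn} exceeds MAX_ORDER={MAX_ORDER}")
    if nobs > MAX_OBS:
        raise ValueError(f"{nobs} observations exceed MAX_OBS={MAX_OBS}")
    return [0.0] * (MAX_ORDER * MAX_OBS)

try:
    garch_workspace(12, 100)   # NN > 9: rejected up front, not mid-computation
except ValueError:
    pass
```

In the Fortran original the same effect would come from `PARAMETER` statements plus an `IF (NN .GT. MAXORD) ...` guard at the top of GARCHFIT.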
Re: [Rd] Fortran coding standards
On Wed, 11 Apr 2007, Vivek Rao wrote:

> I have some comments on the Fortran code in the fseries package in file
> 4A-GarchModelling.f, especially the subroutine GARCHFIT and function
> DSNORM. I appended the code to the end of an earlier message, but it was
> rejected by some rule. Let me first say that I am grateful that packages
> for financial econometrics exist in R.

Please pass those on to the maintainer.

Note that currently we expect only Fortran 77 to be used in R and (preferably) its packages, as there still are users with a Fortran 77 compiler and nothing later. One major group are those on Windows, and it is hoped that that will move to gcc 4.2.x in 2007. There are still quite a few users on OSes that are two or more years old and for which gcc3 is the norm.

> Fortran 77 had PARAMETERs, and PARAMETERs equal to 9 and 200 should have
> been defined instead of repeatedly using "magic numbers". More
> importantly, the code will fail if NN exceeds 9, but the code does not
> check for this. I hope someone will fix this.
>
> In the code dsged the variables half, one, two should be made parameters,
> and instead of
>
>       IMPLICIT DOUBLE PRECISION (A-H, O-Z)
>
>       IMPLICIT NONE
>
> should be used and all variables declared. Although IMPLICIT NONE is not
> standard Fortran 77, it is standard Fortran 90 and is supported by g77.
> Experienced Fortranners know that IMPLICIT NONE catches errors. Another
> defect is the use of specific intrinsic functions such as DSQRT. There is
> no need to use this, since the SQRT function is generic, handling both
> single and double precision arguments.
>
> Maybe there should be R coding standards to address such issues. I hope
> that eventually the Fortran code in R will use the modern features of
> Fortran 90 and later standards, using the gfortran compiler. However,
> with a little effort one can still write clean code in Fortran 77 that
> also conforms to later standards.
The Fortran code in R itself is (entirely, I think) imported from elsewhere, e.g. from EISPACK or LAPACK or Netlib. We have little interest in changing long-established code, and as the recent thread 'eigen in beta' shows, every time we update such code someone thinks there is a new bug in R.

developer.r-project.org has a collection of references to encourage 'portable programming', including on Fortran standards.

> Vivek Rao

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel: +44 1865 272861 (self)
1 South Parks Road,                    +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax: +44 1865 272595
Re: [Rd] Fortran coding standards
I am not familiar with fseries, nor the other routines you mentioned, but can I make a few points:

(1) A good part of R's Fortran code is "borrowed" from elsewhere. When one is "borrowing", it is customary to keep deviations to a minimum and strictly necessary, i.e. bug-fixes only, with no re-indentation nor stylistic changes like many of those you outlined. All in all, if you want to push for stylistic changes, you need to approach the original writers.

(2) g77 and gfortran are not the only Fortran compilers out there! "...it is standard Fortran 90 and is supported by g77" is just not good enough - and what do you mean by "...also conforms to later standards"? A piece of code is either Fortran 77 conformant, or it is not (i.e. *at least one* feature used is not).

(3) If it is not broken, don't try to fix it. It is possible to debate stylistic changes for eternity and not get any real work done.

HTL

Vivek Rao wrote:
> I have some comments on the Fortran code in the fseries package in file
> 4A-GarchModelling.f, especially the subroutine GARCHFIT and function
> DSNORM. I appended the code to the end of an earlier message, but it was
> rejected by some rule. Let me first say that I am grateful that packages
> for financial econometrics exist in R.
>
> Fortran 77 had PARAMETERs, and PARAMETERs equal to 9 and 200 should have
> been defined instead of repeatedly using "magic numbers". More
> importantly, the code will fail if NN exceeds 9, but the code does not
> check for this. I hope someone will fix this.
>
> In the code dsged the variables half, one, two should be made parameters,
> and instead of
>
>       IMPLICIT DOUBLE PRECISION (A-H, O-Z)
>
>       IMPLICIT NONE
>
> should be used and all variables declared. Although IMPLICIT NONE is not
> standard Fortran 77, it is standard Fortran 90 and is supported by g77.
> Experienced Fortranners know that IMPLICIT NONE catches errors. Another
> defect is the use of specific intrinsic functions such as DSQRT. There is
> no need to use this, since the SQRT function is generic, handling both
> single and double precision arguments.
>
> Maybe there should be R coding standards to address such issues. I hope
> that eventually the Fortran code in R will use the modern features of
> Fortran 90 and later standards, using the gfortran compiler. However,
> with a little effort one can still write clean code in Fortran 77 that
> also conforms to later standards.
>
> Vivek Rao
Re: [Rd] Fortran coding standards
Hi Vivek,

> "Vivek" == Vivek Rao <[EMAIL PROTECTED]>
> on Wed, 11 Apr 2007 06:20:21 -0700 (PDT) writes:

  Vivek> I have some comments on the Fortran code in the
  Vivek> fseries package in file 4A-GarchModelling.f,

this is "fSeries" {and case does matter in R - contrary to Fortran :-)}, a CRAN-contributed package which has a well-defined maintainer whom you can quickly find using

  packageDescription("fSeries")

Further, he might not read R-devel on a regular basis at all ..

  Vivek> especially the subroutine GARCHFIT and function DSNORM.

  Vivek> I appended the code to the end of an earlier message,
  Vivek> but it was rejected by some rule.

you (i.e. your mailer) must have used an unspecified binary mime-type; do use "text/plain" instead and the attachments won't be filtered.

  Vivek> Let me first say that I am grateful that packages for
  Vivek> financial econometrics exist in R.

  Vivek> Fortran 77 had PARAMETERs, and PARAMETERs equal to
  Vivek> 9 and 200 should have been defined instead of
  Vivek> repeatedly using "magic numbers". More importantly,
  Vivek> the code will fail if NN exceeds 9, but the code
  Vivek> does not check for this. I hope someone will fix this.

  Vivek> In the code dsged the variables half, one, two should
  Vivek> be made parameters, and instead of

  Vivek>       IMPLICIT DOUBLE PRECISION (A-H, O-Z)

  Vivek>       IMPLICIT NONE

  Vivek> should be used and all variables declared. Although
  Vivek> IMPLICIT NONE is not standard Fortran 77, it is
  Vivek> standard Fortran 90 and is supported by g77.
  Vivek> Experienced Fortranners know that IMPLICIT NONE
  Vivek> catches errors. Another defect is the use of specific
  Vivek> intrinsic functions such as DSQRT. There is no need to
  Vivek> use this, since the SQRT function is generic, handling
  Vivek> both single and double precision arguments.

  Vivek> Maybe there should be R coding standards to address such
  Vivek> issues. I hope that eventually the Fortran code in R
  Vivek> will use the modern features of Fortran 90 and later
  Vivek> standards, using the gfortran compiler. However, with
  Vivek> a little effort one can still write clean code in
  Vivek> Fortran 77 that also conforms to later standards.

I mostly agree with all you say above. But we (R-core) have never wanted to play style-police for the 1000++ CRAN / Bioconductor / Omegahat R packages that other people provide. As the core team we only feel responsible for the code that we distribute with the R distribution, and even there, the so-called recommended ("non-core") packages typically have their own individual maintainer.

I'd say R-core would typically follow your recommendations above, apart from the fact that most of us would not use Fortran for developing new code. And for legacy code that is well tested, with many years of history, we'd rarely want to spend the time just for beautifying code (though I have occasionally done so, when wanting to find out what the code was doing at all).

Martin Maechler, ETH Zurich
[Rd] Sort output of apropos
A further improvement to apropos() would be to sort the output. Currently, the output of apropos() is in the order found on the search list, and this will rarely be useful to the user. All that is needed is a sort(x) at the end of the function.

+ seth

--
Seth Falcon | Computational Biology | Fred Hutchinson Cancer Research Center
http://bioconductor.org
Re: [Rd] eigen in beta
Hmmm. It is a bit disconcerting that make check passes and I can get a fairly seriously wrong answer. Perhaps this test could be added to make check. Whether or not the one answer is correct may be questionable, but there is no question that

> prod(eigen(z, symmetric = FALSE, only.values = TRUE)$values) *
+   prod(eigen(solve(z), symmetric = FALSE, only.values = TRUE)$values)
[1] 1.01677-0i

is wrong. (The product of the determinants should equal the determinant of the product, and the determinant of I is 1.)

On this machine I am using GNU Fortran (GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-54)) 3.2.3 20030502 (Red Hat Linux 3.2.3-54), which is a bit old but not really old. (I am using gcc 4.1.1 on my home machine, but the failing machine is supposed to be fairly new and "supported"; there is an OS upgrade planned.) Should I really be thinking of this as an old compiler? Is the compiler the most likely problem, or is it possible I have a bad BLAS configuration, or something else? Previous versions of R have compiled without problems on this machine. (I am never very sure where to find all the information to report for a problem like this. Is there a simple way to get all the relevant information?)

Paul Gilbert

Prof Brian Ripley wrote:
> All the systems I tried this on give the 'correct' answer, including
>
> x86_64 Linux, FC5 (gcc 4.1.1)
> i686 Linux, FC5
> ix86 Windows (both gcc 3.4.5 and gcc pre-4.3.0)
> Sparc Solaris, with gcc3, gcc4 and SunPro compilers.
>
> Mainly with R 2.5.0 beta, some with R-devel (where the code is
> unchanged).
>
> We have seen problems specific to RHEL's Fortran compilers on x86_64
> several times before. I would strongly recommend compiler updates.
>
> On Tue, 10 Apr 2007, Peter Dalgaard wrote:
>
>> Paul Gilbert wrote:
>>> Here is the example. Perhaps others could check on other platforms.
>>> It is only the first eigenvalue that is different. I am relatively
>>> sure the old values are correct, since I compare with an alternate
>>> calculation using the expansion of a polynomial determinant.
>>>
>>> z <- t(matrix(c(
>>>  0, 0, 0, 0, 0, 0, 0, 0, 0, -0.0064083373167516857, -0.14786612501440565826,  0.368411802235074137,
>>>  0, 0, 0, 0, 0, 0, 0, 0, 0,  0.0568624483195125444,  0.08575928008564302762, -0.101993668348446601,
>>>  0, 0, 0, 0, 0, 0, 0, 0, 0,  0.0039684327579889069, -0.2857482925046247,      0.202241897806646448,
>>>  1, 0, 0, 0, 0, 0, 0, 0, 0, -0.0222834092601282285, -0.09126708346036176145,  0.644249961695308682,
>>>  0, 1, 0, 0, 0, 0, 0, 0, 0, -0.0032676036920228878,  0.16985862929849462888,  0.057282326361118636,
>>>  0, 0, 1, 0, 0, 0, 0, 0, 0,  0.0148488735227452068, -0.06175528918915401677,  0.109566197834008949,
>>>  0, 0, 0, 1, 0, 0, 0, 0, 0, -0.0392756265125193960,  0.04921079262665441212,  0.078176878215115805,
>>>  0, 0, 0, 0, 1, 0, 0, 0, 0, -0.001393745191973,      0.02009823693764142133, -0.207228935136287512,
>>>  0, 0, 0, 0, 0, 1, 0, 0, 0,  0.0273358858605219357,  0.03830466468488327725,  0.224426004034737836,
>>>  0, 0, 0, 0, 0, 0, 1, 0, 0, -0.1456426235151105919,  0.28688029213315069388,  0.326933845656016908,
>>>  0, 0, 0, 0, 0, 0, 0, 1, 0,  0.0164670122082246559, -0.21966261349875662590,  0.036404179329694988,
>>>  0, 0, 0, 0, 0, 0, 0, 0, 1,  0.0146156940584119890,  0.07505490943478997090,  0.077660578370038813
>>> ), 12, 12))
>>>
>>> R-2.5.0 gives
>>> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
>>>  [1]  0.8465266+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
>>>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
>>>  [7]  0.6177419+0.000i     -0.5604582+0.1958709i -0.5604582-0.1958709i
>>> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
>>>
>>> R-2.4.1 and many, many previous versions gave
>>> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
>>>  [1]  0.8794798+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
>>>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
>>>  [7] -0.5604582+0.1958709i -0.5604582-0.1958709i  0.5847887+0.000i
>>> [10]  0.1458799+0.4909300i  0.1458799-0.4909300i  0.3378356+0.000i
>>>
>>> > Sys.info()
>>>   sysname  "Linux"
>>>   release  "2.4.21-40.ELsmp"
>>>   version  "#1 SMP Thu Feb 2 22:13:55 EST 2006"
>>>   nodename "mfa04559"
>>>   machine  "x86_64"
>>>
>>> Paul Gilbert
>> Hmm, I don't get that
>>
>> > version$version.string
>> [1] "R version 2.5.0 beta (2007-04-10 r41105)"
>> > eigen(z, symmetric = FALSE, only.values = TRUE)$values
>>  [1]  0.8794798+0.000i     -0.0280087+0.6244992i -0.0280087-0.6244992i
>>  [4] -0.2908409+0.5522274i -0.2908409-0.5522274i -0.6228929+0.000i
>>  [7] -0.
Re: [Rd] eigen in beta
On Wed, 11 Apr 2007, Paul Gilbert wrote:

> Hmmm. It is a bit disconcerting that make check passes and I can get a
> fairly seriously wrong answer. Perhaps this test could be added to
> make check.

Well, you have learnt something new about software engineering! 'make check' is supposed to test the operation of R, not compilers.

> Whether or not the one answer is correct may be questionable, but
> there is no question that
>
>   prod(eigen(z, symmetric = FALSE, only.values = TRUE)$values) *
>   prod(eigen(solve(z), symmetric = FALSE, only.values = TRUE)$values)
>   [1] 1.01677-0i
>
> is wrong. (The product of the determinants should equal the
> determinant of the product, and the determinant of I is 1.)
>
> On this machine I am using GNU Fortran (GCC 3.2.3 20030502 (Red Hat
> Linux 3.2.3-54)) 3.2.3 20030502 (Red Hat Linux 3.2.3-54), which is a
> bit old but not really old. (I am using gcc 4.1.1 on my home machine,
> but the failing machine is supposed to be fairly new and "supported".
> There is an OS upgrade planned.) Should I really be thinking of this
> as an old compiler?

Yes, a known bad compiler, as searching the list archives would have found out. Don't use anything older than gcc 3.4.x on x86_64 Linux.

> Is the compiler the most likely problem or is it possible I have a bad
> BLAS configuration, or something else? Previous versions of R have
> compiled without problems on this machine. (I am never very sure where
> to find all the information to report for a problem like this. Is
> there a simple way to get all the relevant information?)
>
> Paul Gilbert
>
> Prof Brian Ripley wrote:
>> All the systems I tried this on give the 'correct' answer, including
>>
>>   x86_64 Linux, FC5 (gcc 4.1.1)
>>   i686 Linux, FC5
>>   ix86 Windows (both gcc 3.4.5 and gcc pre-4.3.0)
>>   Sparc Solaris, with gcc3, gcc4 and SunPro compilers.
>>
>> Mainly with R 2.5.0 beta, some with R-devel (where the code is
>> unchanged). We have seen problems specific to RHEL's Fortran
>> compilers on x86_64 several times before. I would strongly recommend
>> compiler updates.
On Tue, 10 Apr 2007, Peter Dalgaard wrote:

> [quoted text elided: the 12 x 12 matrix z, the R 2.5.0 and R 2.4.1
> eigenvalue listings, and the Sys.info() output, repeated verbatim from
> earlier in the thread]
Re: [Rd] eigen in beta
Prof Brian Ripley wrote:

> On Wed, 11 Apr 2007, Paul Gilbert wrote:
>
>> Hmmm. It is a bit disconcerting that make check passes and I can get
>> a fairly seriously wrong answer. Perhaps this test could be added to
>> make check.
>
> Well, you have learnt something new about software engineering! 'make
> check' is supposed to test the operation of R, not compilers.

I'm not sure I would call this engineering. Engineers are supposed to worry about the integrity of the whole system. In any case, if it is not make check, then what is the system for checking that you have a good R build?

>> Whether or not the one answer is correct may be questionable, but
>> there is no question that
>>
>>   prod(eigen(z, symmetric = FALSE, only.values = TRUE)$values) *
>>   prod(eigen(solve(z), symmetric = FALSE, only.values = TRUE)$values)
>>   [1] 1.01677-0i
>>
>> is wrong. (The product of the determinants should equal the
>> determinant of the product, and the determinant of I is 1.)
>>
>> On this machine I am using GNU Fortran (GCC 3.2.3 20030502 (Red Hat
>> Linux 3.2.3-54)) 3.2.3 20030502 (Red Hat Linux 3.2.3-54), which is a
>> bit old but not really old. (I am using gcc 4.1.1 on my home machine,
>> but the failing machine is supposed to be fairly new and "supported".
>> There is an OS upgrade planned.) Should I really be thinking of this
>> as an old compiler?
>
> Yes, a known bad compiler, as searching the list archives would have
> found out. Don't use anything older than gcc 3.4.x on x86_64 Linux.

Thanks, I'll bug the sys admin.

Paul

>> Is the compiler the most likely problem or is it possible I have a
>> bad BLAS configuration, or something else? Previous versions of R
>> have compiled without problems on this machine. (I am never very sure
>> where to find all the information to report for a problem like this.
>> Is there a simple way to get all the relevant information?)
>> Paul Gilbert
>>
>> Prof Brian Ripley wrote:
>>> All the systems I tried this on give the 'correct' answer, including
>>>
>>> [quoted text elided: the list of systems tested, the 12 x 12 matrix
>>> z, and the R 2.5.0 / R 2.4.1 eigenvalue listings, repeated verbatim
>>> from earlier in the thread]
Re: [Rd] eigen in beta
>>>>> "PaulG" == Paul Gilbert <[EMAIL PROTECTED]>
>>>>>     on Wed, 11 Apr 2007 10:51:22 -0400 writes:

  PaulG> Hmmm. It is a bit disconcerting that make check passes and I
  PaulG> can get a fairly seriously wrong answer. Perhaps this test
  PaulG> could be added to make check.

Yes. If I set

  EV   <- function(x) eigen(x, symmetric = FALSE, only.values = TRUE)$values
  dDet <- function(x) prod(EV(x))

something like

  (pp <- dDet(z) * dDet(solve(z)))
  all.equal(pp, 1 + 0i)

Can you see the problem also with e.g.,

  z. <- round(z, 4)

or even

  z. <- round(z * 100)

which would make the example code slightly nicer "to the eye" ...

  PaulG> Whether or not the one answer is correct may be questionable,
  PaulG> but there is no question that
  PaulG>   prod(eigen(z, symmetric = FALSE, only.values = TRUE)$values) *
  PaulG>   prod(eigen(solve(z), symmetric = FALSE, only.values = TRUE)$values)
  PaulG>   [1] 1.01677-0i
  PaulG> is wrong. (The product of the determinants should equal the
  PaulG> determinant of the product, and the determinant of I is 1.)

yes, indeed (included in my code above)

  PaulG> On this machine I am using GNU Fortran (GCC 3.2.3 20030502
  PaulG> (Red Hat Linux 3.2.3-54)) 3.2.3 20030502 (Red Hat Linux
  PaulG> 3.2.3-54), which is a bit old but not really old. (I am using
  PaulG> gcc 4.1.1 on my home machine, but the failing machine is
  PaulG> supposed to be fairly new and "supported". There is an OS
  PaulG> upgrade planned.)

I'd say your OS version (using a 2.4 kernel) is quite old indeed.

  PaulG> Should I really be thinking of this as an old compiler?

maybe not old, but buggy?

  PaulG> Is the compiler the most likely problem or is it possible I
  PaulG> have a bad BLAS configuration, or something else? Previous
  PaulG> versions of R have compiled without problems on this machine.
  PaulG> (I am never very sure where to find all the information to
  PaulG> report for a problem like this. Is there a simple way to get
  PaulG> all the relevant information?)
"simple" yes: the config.log file in your build directory, but that's typically much more than you'd want in a specific case, i.e. it contains much more than just the relevant information.

Regards,
Martin

  PaulG> Paul Gilbert

[remaining quoted text elided: Prof Brian Ripley's list of systems
tested and the original 12 x 12 example, repeated verbatim from earlier
in the thread]
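Martin's pieces above can be put together as one self-contained snippet (assuming z is the 12 x 12 matrix defined earlier in the thread; the stopifnot() wrapper is an addition for illustration, not part of his original suggestion):

```r
## det(z) * det(solve(z)) should be exactly 1, so the product of all
## eigenvalues of z and of solve(z) should come out as (nearly) 1 + 0i.
EV   <- function(x) eigen(x, symmetric = FALSE, only.values = TRUE)$values
dDet <- function(x) prod(EV(x))

pp <- dDet(z) * dDet(solve(z))
stopifnot(isTRUE(all.equal(pp, 1 + 0i)))  # fails on the miscompiled build
```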
Re: [Rd] eigen in beta
Paul Gilbert wrote:

> Prof Brian Ripley wrote:
>
>> On Wed, 11 Apr 2007, Paul Gilbert wrote:
>>
>>> Hmmm. It is a bit disconcerting that make check passes and I can
>>> get a fairly seriously wrong answer. Perhaps this test could be
>>> added to make check.
>>
>> Well, you have learnt something new about software engineering!
>> 'make check' is supposed to test the operation of R, not compilers.
>
> I'm not sure I would call this engineering. Engineers are supposed to
> worry about the integrity of the whole system. In any case, if it is
> not make check, then what is the system for checking that you have a
> good R build?

Well, there's make check-all... That doesn't check everything either, though. The point is that there is a limit to what we can check: we don't check whether the CPU gets floating point operations wrong in the 5th decimal place on rare occasions either. (The same thing applies to structural engineering: you can't test the chemical composition of every steel rod. At some point you have to shift responsibility to your subcontractors.)

We do have regression checks, though, i.e. we try to ensure that known errors do not reappear, and compiler issues have been worked around occasionally. E.g. (IIRC) the original qbeta() function was written so as to squeeze the very last bit of accuracy out of the solution to a nonlinear equation, but modern optimizing compilers would reorder some instructions, losing a few bits of accuracy and sending the code into an infinite loop in some cases.

-pd
[remaining quoted text elided: the compiler discussion and the original
12 x 12 example, repeated verbatim from earlier in the thread]
Re: [Rd] eigen in beta
Peter Dalgaard wrote:

> Well, there's make check-all...

Ok, this is what I should be running. It reports

  running tests of LAPACK-based functions
  make[3]: Entering directory `/home/mfa/gilp/toolchain/R/src/R-beta/tests'
  running code in 'lapack.R' ...make[3]: *** [lapack.Rout] Error 1
  make[3]: Leaving directory `/home/mfa/gilp/toolchain/R/src/R-beta/tests'
  make[2]: *** [test-Lapack] Error 2
  make[2]: Leaving directory `/home/mfa/gilp/toolchain/R/src/R-beta/tests'
  make[1]: *** [test-all-devel] Error 1
  make[1]: Leaving directory `/home/mfa/gilp/toolchain/R/src/R-beta/tests'
  make: *** [check-all] Error 2

and lapack.Rout.fail reports

  > ## failed for some 64bit-Lapack-gcc combinations:
  > sm <- cbind(1, 3:1, 1:3)
  > eigenok(sm, eigen(sm))
  Error: abs(A %*% V - V %*% diag(lam)) < Eps is not all TRUE
  Execution halted

> That doesn't check everything either, though. The point is that there
> is a limit to what we can check: We don't check whether the CPU gets
> floating point operations wrong in the 5th decimal place on rare
> occasions either. ...

Well, I'm talking about an error of 3 to 4% in the largest eigenvalue, in a problem that I don't think is especially ill-conditioned. There are not many more fundamental calculations in statistics. And all I am asking is that the problem gets reported; I'm not asking for a workaround. I clearly need to fix something other than R on the system.

I guess it is difficult to know at what level problems should be flagged. I don't think many people run make check-all. Some calculation errors seem bad enough that plain "make" should catch them and not suggest there was a successful build. It would do R's reputation a lot of damage if people used and reported the results of such bad calculations.

It might be nice to have an R package (called something like integrityCheck) that runs several numerical checks. This would allow end users to take some responsibility for ensuring their system is built properly. I'm worried about the situation where a sys-admin installs R and does not do any testing.

Paul

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
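A minimal sketch of what such a check might run (the package name and the function below are hypothetical, not existing code):

```r
## Hypothetical sanity check in the spirit of the proposed 'integrityCheck':
## for a random (almost surely nonsingular) matrix m, the product of the
## eigenvalues of m and of solve(m) should have modulus 1.
eigenSanity <- function(n = 12, seed = 42, tol = 1e-8) {
  set.seed(seed)
  m <- matrix(rnorm(n * n), n, n)
  d <- prod(eigen(m, only.values = TRUE)$values) *
       prod(eigen(solve(m), only.values = TRUE)$values)
  abs(Mod(d) - 1) < tol
}
eigenSanity()  # should be TRUE on a healthy build
```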
[Rd] accessing hidden functions in testing code
I'd like to be able to access unexported functions in the testing code for a package with a namespace. I know that one way to do this is to use the ":::" operator, but I'd prefer to avoid cluttering up my testing code with lots of repetitions of "myPackageName:::". One way I found to do this was to attach() the namespace of the package, e.g., using the following:

  pkg <- "myPackageName"
  library(package=pkg, character.only=TRUE)
  ## --- Load the name space to allow testing of private functions ---
  if (is.element(pkg, loadedNamespaces()))
      attach(loadNamespace(pkg), name=paste("namespace", pkg, sep=":"), pos=3)

Are there any pitfalls that I will find if I continue with this approach? Or is there another better way?

-- Tony Plate
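For comparison, a sketch of an alternative that avoids attach() altogether: utils::getFromNamespace() fetches one unexported object at a time ('myPackageName' and 'hiddenHelper' below are placeholder names, not real objects):

```r
## Bind a selected private function into the test script's environment,
## without attaching the whole namespace ('hiddenHelper' is hypothetical).
pkg <- "myPackageName"
hiddenHelper <- getFromNamespace("hiddenHelper", pkg)
## hiddenHelper() can now be called without the pkg::: prefix
```

This keeps the test code explicit about which private functions it relies on, at the cost of one line per function.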
Re: [Rd] eigen in beta
Paul Gilbert wrote:

> Peter Dalgaard wrote:
>
>> Well, there's make check-all...
>
> Ok, this is what I should be running. It reports
>
> [quoted text elided: the test-Lapack failure and the lapack.Rout.fail
> output shown in the previous message]
>
>> That doesn't check everything either, though. The point is that
>> there is a limit to what we can check: We don't check whether the
>> CPU gets floating point operations wrong in the 5th decimal place
>> on rare occasions either. ...
>
> Well, I'm talking about 3 to 4% in the maximum eigenvalue, in a
> problem that I don't think is especially ill-conditioned.

You didn't feel the "whoosh" as the reference to the 1994 Pentium FDIV bug went right past you? The magnitude of the error was never the point; rather, the point was that we need to assume that most things "just work" -- that log() computes logarithms, that floating point divisions are accurate up to round-off, etc., but also that linear algebra libraries do what they are expected to do. We check as much as we can, including comparisons of standard routines with expected output in many cases, but some inconsistencies need a targeted test, and those don't get written without a specific suspicion.

> There are not many more fundamental calculations in statistics. And
> all I am asking is that the problem gets reported, I'm not asking
> for a workaround. I clearly need to fix something other than R on
> the system.

Yup.

> I guess it is difficult to know at what level problems should be
> flagged. I don't think many people run make check-all. Some
> calculation errors seem bad enough that plain "make" should catch
> them and not suggest there was a successful build. It would do R's
> reputation a lot of damage if people used and reported the results
> of such bad calculations.

It is a bit like Brian's famous "fortune" entry about writing and reading documentation. We do provide the tools, but developers would go nuts if they had to run make check-all every time they fixed a typo, so it is the testers' obligation to, well, run the tests. One might hope that your example demonstrates to anyone else reading this thread that relying on others to have done the testing for you can be dangerous, especially if your setup is out of the mainstream. (And as you found out the hard way, the "ultrastable" enterprise editions tend to be just that: only largish shops install them, and on top of that they tend to have somewhat dated toolchains.)

> It might be nice to have an R package (called something like
> integrityCheck) that runs several numerical checks. This would allow
> end users to take some responsibility for ensuring their system is
> built properly. I'm worried about the situation where a sys-admin
> installs R and does not do any testing.

Hmm, a make dependency of "install" on "check-all" is actually possible. Not quite sure how that would be received, though.

-pd
Re: [Rd] 'R CMD check' fails when suggested package is not available
Prof Brian Ripley wrote:

> This is a configurable option 'R_check_force_suggests' documented in
> 'Writing R Extensions'.

Thanks for pointing this out to me. I tried "R-2.5 CMD check --help" but was not very successful with it...

> This package should be using Enhances: Rmpi, it seems.

Yep, this sounds like what we need here (never heard about this field before; it will be the first time a Bioconductor package uses it). Is there any reason why the "Enhances" field is not supported by install.packages?

  > install.packages("XML", dep="Enhances")
  Error in as.vector(available[p1, dependencies]) :
    subscript out of bounds

I know this is in sync with the man page for 'install.packages' (which only mentions Depends, Imports and Suggests), but there are surely people who would like to be able to install a package with _all_ its capabilities (especially those that enhance it) by just doing

  > install.packages(..., dependencies=TRUE)

and not having to look at its DESCRIPTION file in order to figure out what great enhancements are missing and then install them separately ;-)

BTW, the name of this field ("Enhances") and the documentation do not help in understanding the "direction" of the enhancing relationship (who enhances whom?): both the name "Enhances" and the 'Writing R Extensions' manual tend to say that the packages listed in the field are enhanced by the package at hand. That's one point of view. But IMO it's rather the package at hand that is enhanced by the packages listed in the field (at least this is the case with Rmpi).

Cheers,
H.

> On Wed, 4 Apr 2007, Herve Pages wrote:
>
>> Hi there,
>>
>> I was wondering why I get the following error message:
>>
>>   * checking package dependencies ... ERROR
>>   Packages required but not available:
>>     Rmpi
>>
>> when I run 'R CMD check' on a package that _suggests_ Rmpi?
>> Why isn't it OK to not have all the suggested packages installed?
>>
>> Maybe one of the 3 following behaviours would be more appropriate:
>>
>>   a) Having the error say something like:
>>
>>      Package suggested but not available:
>>        Rmpi
>>
>>   b) Make this a warning instead of an error.
>>
>>   c) Don't do anything at all for suggested packages.
>>
>> This issue showed up today while I was checking a new Bioconductor
>> package: the package suggests Rmpi but the vignette and the examples
>> don't use it. If I remove Rmpi from the Suggests field then 'R CMD
>> check' runs all the examples and re-creates the vignette with no
>> problem. Most users will not have Rmpi on their machine, nor will
>> they be interested in getting into the trouble of installing it.
>
> 'Most users' will not be running 'R CMD check', of course.
>
>> The package I was checking suggests Rmpi only because it contains
>> one function that tries to use it if it's installed but will work
>> perfectly fine otherwise. In this case it seems reasonable to have
>> Rmpi in the Suggests field, but this will make 'R CMD check' fail,
>> which is problematic in the context of automated builds :-/
>> If 'R CMD check' can't be a little bit more relaxed about this, then
>> I guess we will need to remove Rmpi from the Suggests field, but
>> then 'R CMD check' will complain that:
>>
>>   * checking for unstated dependencies in R code ... WARNING
>>   'library' or 'require' calls not declared from:
>>     Rmpi
>>
>> which is always better than getting an ERROR.
>>
>> Thanks!
>>
>> H.
[Rd] R CMD build fails with try(stop()) in vignette
A vignette in /inst/doc with

  \documentclass[]{article}
  \begin{document}
  <<>>=
  try(stop('err'))
  @
  \end{document}

produces an error with R CMD build:

  ...
  ** building package indices ...
  * DONE (testPkg)
  * creating vignettes ... ERROR
  Error in try(stop("err")) : err

This is not seen with Sweave alone.

  > sessionInfo()
  R version 2.5.0 beta (2007-04-11 r41127)
  x86_64-unknown-linux-gnu

  locale:
  LC_CTYPE=en_US;LC_NUMERIC=C;LC_TIME=en_US;LC_COLLATE=en_US;LC_MONETARY=en_US;LC_MESSAGES=en_US;LC_PAPER=en_US;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US;LC_IDENTIFICATION=C

  attached base packages:
  [1] "stats"     "graphics"  "grDevices" "utils"     "datasets"  "methods"
  [7] "base"

--
Martin Morgan
Bioconductor / Computational Biology
http://bioconductor.org
Re: [Rd] R CMD build fails with try(stop()) in vignette
It would appear that printing the error message to stderr() is what is causing the build to fail; replace

  try(stop('err'))

with

  cat('Error in try(stop("err")) : err\n', file = stderr())

and I get the same failure.

Best,

luke

On Wed, 11 Apr 2007, Martin Morgan wrote:

> [quoted text elided: the minimal vignette and the R CMD build error,
> as in the previous message]

--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa                      Phone: 319-335-3386
Department of Statistics and            Fax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall                      email: [EMAIL PROTECTED]
Iowa City, IA 52242                     WWW: http://www.stat.uiowa.edu
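One possible way to keep such an example in a vignette without writing to stderr (a sketch; whether this suits the vignette's purpose is a separate question) is to capture the condition instead of letting try() print it:

```r
## tryCatch() returns the error message as an ordinary value, so nothing
## is written to stderr during Sweave evaluation:
msg <- tryCatch(stop("err"), error = conditionMessage)
msg  # "err"
```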
Re: [Rd] Fortran coding standards
--- Prof Brian Ripley <[EMAIL PROTECTED]> wrote:

> On Wed, 11 Apr 2007, Vivek Rao wrote:
>
> Note that currently we expect only Fortran 77 to be used in R and
> (preferably) its packages, as there still are users with a Fortran 77
> compiler and nothing later. One major group are those on Windows, and
> it is hoped that that will move to gcc 4.2.x in 2007.

I don't understand this statement. Windows does not come with C or Fortran 77 compilers preinstalled, either. Windows binaries for the free Fortran 95 compilers gfortran and g95 do exist, and I use them daily.

> The Fortran code in R itself is (entirely, I think) imported from
> elsewhere, e.g. from EISPACK or LAPACK or Netlib. We have little
> interest in changing long-established code, and as the recent thread
> 'eigen in beta' shows, every time we update such code someone thinks
> there is a new bug in R.

That is understandable, but it also suggests that defects in code in contributed packages should be fixed before it is "published" and people start to rely on it. The main benefit of the R project is the provision of working software to end users, who do not care about the style of the underlying C and Fortran code. A secondary benefit is that it provides a source of code for statistical algorithms that programmers can incorporate into their own programs. This audience will care about the quality of the code.
Re: [Rd] Fortran coding standards
--- Hin-Tak Leung <[EMAIL PROTECTED]> wrote:

> (2) g77, gfortran are not the only fortran compilers out there!
> "...it is standard Fortran 90 and is supported by g77", is just not
> good enough - what do you mean by "...also conforms to later standards"?
> Some piece of code is either fortran 77 conformant, or not
> (i.e. *at least one* feature used is not).

Almost all of Fortran 77 is included in the later standards F90, F95, and F2003, but there are a few exceptions, for example non-integer loop variables. The currently supported and developed Fortran compilers implement Fortran 95. G77 is not being maintained. Therefore I think that people programming in Fortran 77 should use the subset (about 99%) of the language that is present in F95. This can be verified by compiling the code with g95 or gfortran with the -std=f95 option. F90 and F95 retain the fixed source form of F77 as an option.

> (3) if it is not broken, don't try to fix it. It is possible to debate
> on stylistic changes for eternity and not getting any real work done.

I think modifying code so that it is more robust and readable constitutes real progress, especially for people who will work directly with the code.
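As a concrete illustration of the "few exceptions" (my example, not from the thread): a REAL DO-loop variable was legal in FORTRAN 77 but is a deleted feature in Fortran 95, so code using it compiles with g77 yet is rejected by `gfortran -std=f95` — exactly the kind of check proposed above.

```fortran
C     Legal FORTRAN 77 that did NOT survive into Fortran 95:
C     a REAL (non-integer) DO-loop variable.  Compiling this with
C     'gfortran -std=f95' reports a deleted-feature error, which is
C     one way to verify that F77 code stays inside the F95 subset.
      PROGRAM REALDO
      REAL X
      DO 10 X = 0.1, 0.5, 0.1
         PRINT *, X
   10 CONTINUE
      END
```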
Re: [Rd] Fortran coding standards
On Wed, 11 Apr 2007, Vivek Rao wrote:

> --- Prof Brian Ripley <[EMAIL PROTECTED]> wrote:
>
>> On Wed, 11 Apr 2007, Vivek Rao wrote:
>>
>> Note that currently we expect only Fortran 77 to be used in R and
>> (preferably) its packages, as there still are users with a Fortran 77
>> compiler and nothing later. One major group are those on Windows, and
>> it is hoped that that will move to gcc 4.2.x in 2007.
>
> I don't understand this statement. Windows does not come with C or
> Fortran 77 compilers preinstalled, either. Windows binaries for the
> free Fortran 95 compilers gfortran and g95 do exist, and I use them
> daily.

If you mean MinGW, they are private builds of unreleased versions. The actual maintainers do not recommend using them until their patches are incorporated (in gcc 4.2.0, to be released Real Soon Now). I don't know of binaries of C, C++ and Fortran from a single source for gcc4 for MinGW, and you need a consistent set to build R and its packages. (As you will see from the R-admin manual, I do use builds from unreleased versions to test that R will build (and have patched it so it does) in the hope that official builds will be available before 2.6.0 is released.)

MinGW builds of gfortran 4.x (or g95) are not compatible with MinGW gcc3 in ways that can lead to incorrect calculations. So there is nothing we can recommend to Windows users of R as yet, and there will not be while the R Windows binaries are built with gcc 3.4.5.

>> The Fortran code in R itself is (entirely, I think) imported from
>> elsewhere, e.g. from EISPACK or LAPACK or Netlib. We have little
>> interest in changing long-established code, and as the recent thread
>> 'eigen in beta' shows, every time we update such code someone thinks
>> there is a new bug in R.
>
> That is understandable, but it also suggests that defects in code in
> contributed packages should be fixed before it is "published" and
> people start to rely on it.
Which is why we asked you to report such *to the maintainer* and not on R lists.

> The main benefit of the R project is the provision of working software
> to end users, who do not care about the style of the underlying C and
> Fortran code. A secondary benefit is that it provides a source of code
> for statistical algorithms that programmers can incorporate in their
> own programs. This audience will care about the quality of the code.

The R project is not responsible for the contributed packages, nor (as I and others have said) for the Fortran code shipped with R. It is not even (strictly) responsible for CRAN.

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,            Tel: +44 1865 272861 (self)
1 South Parks Road,                   +44 1865 272866 (PA)
Oxford OX1 3TG, UK               Fax: +44 1865 272595
Re: [Rd] data.frame backwards compatibility (PR#9601)
What you are reporting as missing would be *forwards* compatibility, and of an unstated representation (dump? save?) of objects of class "data.frame", not the object data.frame() itself. The correct terminology is 'a data frame', not 'the data.frame' (which means something different). No one has ever said that objects dumped/saved in future versions of R would be valid when restored in a current version. How, for example, do you expect a data frame containing a raw vector to be read in a version of R from before the raw type was added? You will find many instances of lack of forwards compatibility: it is essential to progress.

On Fri, 6 Apr 2007, [EMAIL PROTECTED] wrote:

> Full_Name: Victor Moreno
> Version: R2.5.0

That is a future version of R, so no one knows how it will behave.

> OS: windows
> Submission from: (NULL) (68.40.63.169)
>
> This may not be a bug, but seems not yet documented.

1) You are asked in the FAQ not to use this address for things you are not 'sure' are bugs.

2) This change is documented under USER-VISIBLE CHANGES, clearly highlighting it.

> Some data.frames created with development version 2.5.0, when read in
> 2.4.1, show error:

What does 'read in' mean? load()? source() from a dump()?

> Error in dim.data.frame(chip23) : negative length vectors are not allowed

The code you used is not here, so we have no idea what you did to get that error. The FAQ and the posting guide do ask for reproducible code.

> A dump of the data.frame shows as last code:
>
> , row.names = as.integer(c(NA, -7269)), class = "data.frame")
>
> This seems related to the new feature of data.frames without rownames.

Data frames always have row names (sic), by definition, and that dump output does show row names. What is new is that they are allowed to be integers.

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,            Tel: +44 1865 272861 (self)
1 South Parks Road,                   +44 1865 272866 (PA)
Oxford OX1 3TG, UK               Fax: +44 1865 272595
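For readers puzzled by the dump output above, a small R sketch (mine, not from the report) of the compact row-names representation introduced in R 2.5.0; `.row_names_info()` is the documented way to inspect it in R >= 2.5.0.

```r
## The dump fragment row.names = as.integer(c(NA, -7269)) is the
## compact internal form meaning "automatic row names 1..7269":
## nothing per-row is stored, only the count.  Illustration:
df <- data.frame(x = 1:3)
.row_names_info(df, type = 0L)  # internal form, e.g. c(NA, -3L)
attr(df, "row.names")           # expanded on access: 1 2 3
nrow(df)                        # 3, derived from the compact form
```

This is why a dump made under 2.5.0 can fail in 2.4.1: the older dim.data.frame() sees the negative length marker as a literal (negative) vector length rather than as the compact encoding.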