[Rd] make check failure: lapack.Rout.fail
I see that the comment in the lapack test indicates that developers are aware of this issue. Are there any known fixes to this problem? Compiler flags, etc.? An upgrade to a more recent gcc is not an option for me.

This occurred while doing make check-all on R-patched_2007-02-11.tar.gz with the following:

RHEL3 on x86_64
gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-47)
GNU Fortran (GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-47)) 3.2.3 20030502 (Red Hat Linux 3.2.3-47)

code snip:

> ## failed for some 64bit-Lapack-gcc combinations:
> sm <- cbind(1, 3:1, 1:3)
> eigenok(sm, eigen(sm))
Error: abs(A %*% V - V %*% diag(lam)) < Eps is not all TRUE
Execution halted

Thanks in advance,
Whit
Re: [Rd] make check failure: lapack.Rout.fail
On Wed, 14 Feb 2007, Armstrong, Whit wrote:

> I see that the comment in the lapack test indicates that developers are
> aware of this issue.
>
> Are there any known fixes to this problem? compiler flags, etc.
>
> an upgrade to a more recent gcc is not an option for me.

Well, R-devel has an updated LAPACK, so you could try that. But the real problem is the 2003 compiler, from well before x86_64 Linux was stable, and the known fix is to use a less ancient g77.

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,            Tel: +44 1865 272861 (self)
1 South Parks Road,                   +44 1865 272866 (PA)
Oxford OX1 3TG, UK               Fax: +44 1865 272595
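For reference, the check that fails here verifies the eigendecomposition identity A V = V diag(lambda). A minimal R sketch of that check follows; the tolerance Eps below is an assumption for illustration, not the value used in R's own tests/lapack.R:

    ## Recompute, by hand, the identity the test asserts for the matrix
    ## from the report.  Eps is an assumed tolerance.
    sm  <- cbind(1, 3:1, 1:3)
    e   <- eigen(sm)
    V   <- e$vectors
    lam <- e$values
    Eps <- 1000 * .Machine$double.eps
    all(abs(sm %*% V - V %*% diag(lam)) < Eps)   # TRUE on a correctly built R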
Re: [Rd] bug in partial matching of attribute names
Prof Brian Ripley wrote:
> On Tue, 13 Feb 2007, Tony Plate wrote:
>
>> Ok, thanks for the news of a fix in progress.
>
> BTW, your suggested fix is incorrect. Consider having an exact match
> after two partial matches in the list of attributes.

oops. yes.

>> On the topic of the "names" attribute being treated specially, I
>> wonder if the do_attr() function might treat it a little too
>> specially. As far as I can tell, the loop in the first large block of
>> code in do_attr() (attrib.c), which begins
>>
>>     /* try to find a match among the attributes list */
>>     for (alist = ATTRIB(s); alist != R_NilValue; alist = CDR(alist)) {
>>
>> will find a full or partial match for a "names" attribute (at least
>> for ordinary lists and vectors).
>>
>> Then the large block of code after that, beginning:
>>
>>     /* unless a full match has been found, check for a "names" attribute */
>>     if (match != FULL && !strncmp(CHAR(PRINTNAME(R_NamesSymbol)), str, n)) {
>>
>> seems unnecessary because a names attribute has already been checked
>> for. In the case of a partial match on the "names" attribute this code
>> will behave as though there is an ambiguous partial match, and
>> (incorrectly) return Nil. Is this second block of code specific to
>> the "names" attribute possibly a hangover from an earlier day when the
>> first loop didn't detect a "names" attribute? Or am I missing
>> something? Are there some other objects for which the first loop
>> doesn't include a "names" attribute?
>
> Yes: I pointed you at the 'R Internals' manual, but this is also on the
> help page. 1D arrays and pairlists have names stored elsewhere. It
> needs to be changed to be
>
> -    else if (match == PARTIAL) {
> +    else if (match == PARTIAL && strcmp(CHAR(PRINTNAME(tag)), "names")) {

Wouldn't it be equivalent (but clearer & a minute fraction more computationally efficient) to put that test 'strcmp(CHAR(PRINTNAME(tag)), "names")' right at the start of that block, i.e., as an additional condition in

    if (match != FULL && !strncmp(CHAR(PRINTNAME(R_NamesSymbol)), str, n)) {

?

-- Tony Plate

>> -- Tony Plate
>>
>> Prof Brian Ripley wrote:
>>
>>> It happens that I was looking at this yesterday (in connection with
>>> encodings on CHARSXPs) and have a fix in testing across CRAN right now.
>>>
>>> As for "names", as you will know from reading 'R Internals' the names
>>> can be stored in more than one place, which is why it has to be
>>> treated specially.
>>>
>>> On Mon, 12 Feb 2007, Tony Plate wrote:
>>>
>>>> There looks to be a bug in do_attr() (src/main/attrib.c): incorrect
>>>> partial matches of attribute names can be returned when there are an
>>>> odd number of partial matches. E.g.:
>>>>
>>>> > x <- c(a=1,b=2)
>>>> > attr(x, "abcdef") <- 99
>>>> > attr(x, "ab")
>>>> [1] 99
>>>> > attr(x, "abc") <- 100
>>>> > attr(x, "ab") # correctly returns NULL because of ambig partial match
>>>> NULL
>>>> > attr(x, "abcd") <- 101
>>>> > attr(x, "ab") # incorrectly returns non-NULL for ambig partial match
>>>> [1] 101
>>>> > names(attributes(x))
>>>> [1] "names"  "abcdef" "abc"    "abcd"
>>>>
>>>> The problem in do_attr() looks to be that after match is set to
>>>> PARTIAL2, it can be set back to PARTIAL again. I think a simple fix
>>>> is to add a "break" in this block in do_attr():
>>>>
>>>>     else if (match == PARTIAL) {
>>>>         /* this match is partial and we already have a partial match,
>>>>            so the query is ambiguous and we return R_NilValue */
>>>>         match = PARTIAL2;
>>>>         break;   /* < ADD BREAK HERE */
>>>>     } else {
>>>>
>>>> However, if this is indeed a bug, would this be a good opportunity to
>>>> get rid of partial matching on attribute names -- it was broken anyway
>>>> -- so toss it out? :-)  Does anyone depend on partial matching for
>>>> attribute names? My view is that it's one of those things like partial
>>>> matching of list and vector element names that seemed like a good idea
>>>> at first, but turns out to be more trouble than it's worth.
>>>>
>>>> On a related topic, partial matching does not seem to work for the
>>>> "names" attribute (which I would regard as a good thing :-). However,
>>>> I'm puzzled why it doesn't work, because the code in do_attr() seems
>>>> to try hard to make it work. Can anybody explain why? E.g.:
>>>>
>>>> > attr(x, "names")
>>>> [1] "a" "b"
>>>> > attr(x, "nam")
>>>> NULL
>>>>
>>>> > sessionInfo()
>>>> R version 2.4.1 (2006-12-18)
>>>> i386-pc-mingw32
>>>>
>>>> locale:
>>>> LC_COLLATE=English_United States.1252;LC_CTYPE=English_United States.1252;LC_MONETARY=English_United States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252
>>>>
>>>> attached base packages:
>>>> [1] "stats"     "graphics"  "grDevices" "utils"     "datasets"  "methods"
>>>> [7] "base"
>>>>
>>>> -- Tony Plate
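For readers following the logic, the intended matching rule (an exact match always wins, even after partial matches; exactly one partial match is accepted; two or more are ambiguous) can be sketched in plain R. This is an illustration only, not the C code in attrib.c, and the helper name is invented:

    ## Toy re-implementation of the matching rule discussed above.
    match_attr <- function(attrs, query) {
      n_partial <- 0; found <- NULL
      for (nm in names(attrs)) {
        if (identical(nm, query)) return(attrs[[nm]])     # exact match always wins
        if (identical(substr(nm, 1, nchar(query)), query)) {
          n_partial <- n_partial + 1                      # count partial matches;
          found <- attrs[[nm]]                            # two or more = ambiguous
        }
      }
      if (n_partial == 1) found else NULL
    }

    x <- c(a = 1, b = 2)
    attr(x, "abcdef") <- 99; attr(x, "abc") <- 100; attr(x, "abcd") <- 101
    match_attr(attributes(x), "ab")    # NULL: three partial matches, ambiguous
    match_attr(attributes(x), "abc")   # 100: exact match wins despite other partials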
[Rd] environment confusion
I'm a bit beyond my depth with environments and such. The environment of a particular function, which I've set so it should have the things it needs, seems to be getting "lost" at some point during a call sequence. It's hard to come up with a _simple_ reproducible example, although if anyone's sufficiently interested I can post the package somewhere -- with the package installed, it's only a few steps to reproduce the example.

I end up, deep in the process of trying to compute a likelihood profile, with the following situation. I want to evaluate "call":

call
mle2(minuslogl = function (lmu = NULL, ltheta = NULL)
{
    if (!is.null(parameters)) {
        pars <- unlist(as.list(match.call())[-1])
        for (i in seq(along = parameters)) {
            assign(vars[i], mmats[[i]] %*% pars[vpos[[i]]])
        }
    }
    arglist1 <- lapply(arglist1, eval, envir = data,
        enclos = sys.frame(sys.nframe()))
    r <- -sum(do.call(ddistn, arglist1))
    r
}, start = list(lmu = -2.16316747342067, ltheta = 2.30970721353114),
    fixed = list(lmu = -2.18543734742826))

The function appears to have the right stuff in its environment:

Browse[1]> ls(envir=environment(call$minuslogl))
[1] "arglist1" "ddistn"   "mmats"    "vars"     "vpos"

But evaluating the call, even with the environment set to the environment of the internal function, doesn't seem to work:

Browse[1]> eval(call, envir=environment(call$minuslogl))
Error during wrapup: object "arglist1" not found

The full stack looks like this:

> profile(m0f, skiperrs=FALSE)
Error in is.vector(X) : object "arglist1" not found

Enter a frame number, or 0 to exit

 1: profile(m0f, skiperrs = FALSE)
 2: profile(m0f, skiperrs = FALSE)
 3: .local(fitted, ...)
 4: onestep(step)
 5: eval.parent(call, 2)
 6: eval(expr, p)
 7: eval(expr, envir, enclos)
 8: mle2(minuslogl = function (lmu = NULL, ltheta = NULL)
 9: optim(start, objectivefunction, method = method, hessian = TRUE, ...)
10: function (par)
11: fn(par, ...)
12: do.call("minuslogl", args, envir = environment(minuslogl))
13: minuslogl(ltheta = 2.30970721353114, lmu = -2.18543734742826)
14: lapply(arglist1, eval, envir = data, enclos = sys.frame(sys.nframe()))
15: is.vector(X)

Selection:

Anyone have any ideas/directions/leading questions?

cheers
   Ben Bolker
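Not a diagnosis of the package itself, but a minimal sketch of the general failure mode: a closure only finds free variables such as arglist1 through its own environment, so if the function is ever re-created in a way that replaces that environment (for example by deparsing and re-parsing it when a call is rebuilt), the objects appear "not found". All names below are invented for illustration, and the deparse step is just one way the environment can get dropped:

    make_fn <- function() {
      arglist1 <- list(x = 1:3)           # free variable the closure needs
      function() sum(unlist(arglist1))    # closure capturing arglist1
    }

    f <- make_fn()
    f()                                   # 6: arglist1 found via f's environment

    ## Re-creating the function from its deparsed source drops that
    ## environment (the copy now lives in the global environment), so the
    ## free variable is no longer found -- assuming no 'arglist1' exists
    ## in the workspace:
    g <- eval(parse(text = deparse(f)))
    try(g())                              # Error: object 'arglist1' not found

    ## Re-attaching the original environment repairs the lookup:
    environment(g) <- environment(f)
    g()                                   # 6 again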
[Rd] Problem with the 'hist' function
Hi, I am using the following R version:

> R version 2.4.1 (2006-12-18)
> Copyright (C) 2006 The R Foundation for Statistical Computing
> ISBN 3-900051-07-0

I believe I found a bug in the 'hist' function, when 'probability=TRUE'. I looked in the archives and I came across problems with the 'hist' function (e.g., bug PR# 944, posted in 2001); however, a quick search did not find the exact problem I found. A brief description of the issue follows.

> z <- rnorm(10)
> z
 [1]  0.51649608 -0.20676010  0.65951365  0.46733006  0.02084361  0.18323525
 [7] -0.21522566  0.29597667  0.81549448  0.26252625
> hist(z, breaks=seq(-1,1,by=.25))
> hist(z, breaks=seq(-1,1,by=.25), probability=TRUE)

I think the values on the Y axis are messed up, e.g., it should top out at 0.3 (relative frequency) for the bin [0.25, 0.50). How is 'Density' computed?

best regards,
Edo

--
Edo Airoldi, Ph.D.
Department of Computer Science & Lewis-Sigler Institute for Integrative Genomics
Princeton University, NJ 08544
609-258-8326 (lab phone)
609-258-8004 (fax)
Re: [Rd] Problem with the 'hist' function
On 2/14/2007 5:51 PM, Edo Airoldi wrote:

> I think the values on the Y axis are messed up, e.g., it should top
> at 0.3 (relative frequency) for the bin [0.25 0.50). How is 'Density'
> computed?

It's not relative frequency, it's density: relative frequency per unit of x. The upper limit would be 4 with a step size of 0.25 (if all the observations fell in one interval).

Duncan Murdoch
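To spell out the computation: for each bin, hist() reports density = count / (n * bin width), so with a width of 0.25 a bin holding 3 of 10 points has density 0.3 / 0.25 = 1.2, and the densities integrate to 1 rather than summing to 1. A small check (any data will do; the seed and breaks are just for reproducibility):

    set.seed(1)
    z <- rnorm(10)
    h <- hist(z, breaks = seq(-3, 3, by = 0.25), plot = FALSE)

    ## density is relative frequency divided by the bin width ...
    all.equal(h$density, h$counts / (length(z) * diff(h$breaks)))  # TRUE

    ## ... so the densities integrate to 1 over the breaks:
    sum(h$density * diff(h$breaks))                                # 1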
Re: [Rd] xlsReadWrite Pro and embedding objects and files in Excel worksheets
Hi Mark et al.

2007/2/12, Hans-Peter <[EMAIL PROTECTED]>:
> Yes, but I started with the update of the free version and got delayed
> there (not that I didn't know that paying customers should be
> preferred...) - I'll update the pro version right now and you should
> have it by Wednesday, maybe tomorrow. Another person asked to write
> formulas directly; it will be included also.

I have now uploaded the new version 1.1.0, which passes all my tests. It will be publicly linked at the website as soon as the free version is also ready, and after some more testing and beautifying of the help text.

Downloads:
- Program: http://treetron.googlepages.com/xlsReadWritePro_1.1.0.zip
- Test scripts (e.g. formula and link): http://treetron.googlepages.com/xlsReadWritePro_TestData_1.1.0.zip
- Update message (pro): http://treetron.googlepages.com/UpdateMsg_xlsReadWritePro_1.1.0.txt
- Update message (free version): http://treetron.googlepages.com/UpdateMsg_xlsReadWrite_1.3.0.txt
  (draft, I have to finish some details)

Regards,
Hans-Peter

PS: I have bcc'd this email to some people who made suggestions. Unfortunately the update of the free version took longer than planned. I have to be more prudent with giving time estimates...