Re: [Rd] inconsistency or bug in coef() (PR#9358)
On Mon, 13 Nov 2006, [EMAIL PROTECTED] wrote:

> tmp <- data.frame(x=c(1,1),
>                   y=c(1,2))
>
> tmp.lm <- lm(y ~ x, data=tmp)
> summary(tmp.lm)
>
> coef(summary(tmp.lm))
>
> ## I consider this to be a bug.  Since summary(tmp.lm) gives
> ## two rows for the coefficients, I believe the coef() function
> ## should also give two rows.

That claim is false: it is print.summary.lm that is giving two lines, not the result of summary.lm: try

    unclass(summary(tmp.lm))

This is also clear from the Value section of ?summary.lm, whose See Also says

     Function 'coef' will extract the matrix of coefficients with
     standard errors, t-statistics and p-values.

The point is that the print method is making use of both the $coefficients and the $aliased components.  I really do think this is clear from reading the help page: did you actually cross-check before sending a bug report?

>> summary(tmp.lm)
>
> Call:
> lm(formula = y ~ x, data = tmp)
>
> Residuals:
>    1    2
> -0.5  0.5
>
> Coefficients: (1 not defined because of singularities)
>             Estimate Std. Error t value Pr(>|t|)
> (Intercept)      1.5        0.5       3    0.205
> x                 NA         NA      NA       NA
>
> Residual standard error: 0.7071 on 1 degrees of freedom
>
>> coef(summary(tmp.lm))
>             Estimate Std. Error t value  Pr(>|t|)
> (Intercept)      1.5        0.5       3 0.2048328
>
>> version
>                _
> platform       i386-pc-mingw32
> arch           i386
> os             mingw32
> system         i386, mingw32
> status
> major          2
> minor          4.0
> year           2006
> month          10
> day            03
> svn rev        39566
> language       R
> version.string R version 2.4.0 (2006-10-03)
>
> ## this is a related problem
>
> tmp <- data.frame(x=c(1,2),
>                   y=c(1,2))
>
> tmp.lm <- lm(y ~ x, data=tmp)
> summary(tmp.lm)
>
> coef(summary(tmp.lm))
>
> ## Here the summary() gives NA for the values that can't be
> ## calculated and the coef() function gives NaN.  I think both
> ## functions should return the same result.
>
>> summary(tmp.lm)
>
> Call:
> lm(formula = y ~ x, data = tmp)
>
> Residuals:
> ALL 2 residuals are 0: no residual degrees of freedom!
>
> Coefficients:
>             Estimate Std. Error t value Pr(>|t|)
> (Intercept)        0         NA      NA       NA
> x                  1         NA      NA       NA
>
> Residual standard error: NaN on 0 degrees of freedom
> Multiple R-Squared: 1, Adjusted R-squared: NaN
> F-statistic: NaN on 1 and 0 DF, p-value: NA
>
>> coef(summary(tmp.lm))
>             Estimate Std. Error t value Pr(>|t|)
> (Intercept)        0        NaN     NaN      NaN
> x                  1        NaN     NaN      NaN

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel: +44 1865 272861 (self)
1 South Parks Road,                    +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax: +44 1865 272595
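A minimal sketch (not the actual print.summary.lm source) of how the printed two-row table can be rebuilt from the $coefficients and $aliased components named above:

    s  <- summary(tmp.lm)                # tmp.lm from the report above
    cf <- s$coefficients                 # rows only for estimable terms
    full <- matrix(NA, nrow = length(s$aliased), ncol = ncol(cf),
                   dimnames = list(names(s$aliased), colnames(cf)))
    full[!s$aliased, ] <- cf             # aliased terms stay as NA rows
    full                                 # two rows, as the printed summary shows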
[Rd] bug in lrect [lattice]?
In lattice version 0.14-11 (2006/10/23), there appears to be a small bug in lrect [lattice]: border is set to NULL accidentally.  It is easily fixed by adding one line to lrect -- see example and hack below ...  I wasn't sure if this should be submitted as an R bug since it is part of a package, but a core/recommended package ...

  Ben Bolker

x = runif(100)
y = runif(100)

my.lrect <- function (xleft, ybottom, xright, ytop, x = (xleft + xright)/2,
    y = (ybottom + ytop)/2, width = xright - xleft, height = ytop - ybottom,
    col = "transparent", border = "black", lty = 1, lwd = 1, alpha = 1,
    just = "center", hjust = NULL, vjust = NULL, ...)
{
    if (missing(border))   ## this line fixes it
        border <- if (all(is.na(border))) "transparent"
        else if (is.logical(border)) {
            if (border) "black" else "transparent"
        }
    grid:::grid.rect(x = x, y = y, width = width, height = height,
        default.units = "native", just = just, hjust = hjust, vjust = vjust,
        gp = grid:::gpar(fill = col, col = border, lty = lty, lwd = lwd,
            alpha = alpha, ...))
}

ranrect <- function(s, fun = lrect) {
    coords <- runif(2, s/2, 1 - s/2)
    fun(coords[1] - s/2, coords[2] - s/2, coords[1] + s/2, coords[2] + s/2,
        border = "red", lwd = 2)
}

library(lattice)

## with original lrect: border is black rather than red
xyplot(y ~ x, panel = function(...) {
    panel.xyplot(...)
    ranrect(s = 0.12)
})

## new lrect: works
xyplot(y ~ x, panel = function(...) {
    panel.xyplot(...)
    ranrect(s = 0.12, fun = my.lrect)
})

> version
               _
platform       i686-pc-linux-gnu
arch           i686
os             linux-gnu
system         i686, linux-gnu
status
major          2
minor          4.0
year           2006
month          10
day            03
svn rev        39566
language       R
version.string R version 2.4.0 (2006-10-03)
Re: [Rd] inconsistency or bug in coef() (PR#9358)
I am fascinated.  I can accept the argument that suppressing aliased values is a valid behavior of the program.  The defense of the inconsistency between the two displays surprises me.

Yes, Brian, I checked.  That's why I offered the option of an inconsistency.  I do believe strongly that the coef section of summary and the coef function should give identical results.  I still think the inconsistency is a bug.  I think the author of print.summary.lm did the right thing by showing the requested coef for the x variable and giving it a missing value.

Rich
[Rd] bug in acf (PR#9360)
Full_Name: Ian McLeod
Version: 2.3.1
OS: Windows
Submission from: (NULL) (129.100.76.136)

There is a simple bug in acf as shown below:

> z <- 1
> acf(z, lag.max=1, plot=FALSE)
Error in acf(z, lag.max = 1, plot = FALSE) :
        'lag.max' must be at least 1

This is certainly a bug.  There are two problems:

(i) The error message is wrong, since lag.max is set to 1.  Perhaps, if the function acf cannot be used in this situation, a different error message would be more appropriate.  I understand why this might be done, but I don't think it is the best approach.

(ii) Please look at the function GetB which is attached.  This is part of a computation for a fast algorithm for exact mle of the mean.  Usually phi here are the coefficients from a high-order AR, but when I tried for AR(1) I got the error message.  So the workaround is given.  Notice that I use:

p*as.vector(acf(phi, lag.max=p, type="covariance", demean=FALSE, plot=FALSE)$acf)

so what I expect to get when p=length(phi)=1 is just phi^2.  This is what happens in Mathematica with ListCorrelate[{phi},{phi}].  When you have type="correlation" and demean=TRUE then one gets 0/0, which should be defined as 1 in this situation.

Probably if the R authors just want to use acf for data analysis they may simply choose to require length(x)>1 in acf(x,...), although I don't see the harm in my suggestion either.

Ian McLeod
Re: [Rd] bug in acf (PR#9360)
On 11/13/2006 10:30 AM, [EMAIL PROTECTED] wrote:
> Full_Name: Ian McLeod
> Version: 2.3.1
> OS: Windows
> Submission from: (NULL) (129.100.76.136)
>
>> There is a simple bug in acf as shown below:
>>
>> z <- 1
>> acf(z, lag.max=1, plot=FALSE)
>> Error in acf(z, lag.max = 1, plot = FALSE) :
>>        'lag.max' must be at least 1
>
> This is certainly a bug.

I'd say it's a documentation bug, rather than a code bug.

> There are two problems:
>
> (i) The error message is wrong, since lag.max is set to 1.  Perhaps, if the
> function acf cannot be used in this situation, a different error message
> would be more appropriate.  I understand why this might be done, but I
> don't think it is the best approach.

What happens is that lag.max is reduced to length(z)-1, which is zero in your case.  This change should be mentioned in the documentation.

> (ii) Please look at the function GetB which is attached.  This is part of a
> computation for a fast algorithm for exact mle of the mean.  Usually phi
> here are the coefficients from a high-order AR, but when I tried for AR(1)
> I got the error message.  So the workaround is given.  Notice that I use:
>
> p*as.vector(acf(phi, lag.max=p, type="covariance", demean=FALSE, plot=FALSE)$acf)
>
> so what I expect to get when p=length(phi)=1 is just phi^2.  This is what
> happens in Mathematica with ListCorrelate[{phi},{phi}].  When you have
> type="correlation" and demean=TRUE then one gets 0/0, which should be
> defined as 1 in this situation.

I don't think that's a reasonable expectation.  You've got an empty sum in the formula for the lag 1 autocovariance:

    sum_{i=1}^{0} phi_i phi_{i+1}

R is assuming that's not what you meant and is reporting it as an error.  If it gave you any value, it should be zero, not phi^2.

Duncan Murdoch

> Probably if the R authors just want to use acf for data analysis they may
> simply choose to require length(x)>1 in acf(x,...), although I don't see
> the harm in my suggestion either.
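A minimal sketch (assumed behaviour as described above, not the actual acf() source) of the capping that produces the misleading error message:

    z <- 1
    lag.max <- 1
    lag.max <- min(lag.max, length(z) - 1)   # reduced to 0 for a length-one series
    if (lag.max < 1)                         # so the user-supplied value of 1
        stop("'lag.max' must be at least 1") # is no longer what is being tested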
[Rd] readBin(what="character", n=overcount)->extra "" in result (PR#9361)
Full_Name: Bill Dunlap
Version: 2.4.0
OS: Windows XP
Submission from: (NULL) (208.252.71.182)

When I use readBin() to read an unknown number of null-terminated strings from a file by supplying an overcount as the n= argument, it appends an empty string to the result.

> tf <- tempfile()
> strings <- c("One","Two","Three")
> writeBin(strings, con=tf)
> readBin(con=tf, what="character", n=100)   # expect "One","Two","Three" only
[1] "One"   "Two"   "Three" ""
> readBin(con=tf, what="integer", size=1, signed=FALSE, n=100)
 [1]  79 110 101   0  84 119 111   0  84 104 114 101 101   0
> unlink(tf)
> version
               _
platform       i386-pc-mingw32
arch           i386
os             mingw32
system         i386, mingw32
status
major          2
minor          4.0
year           2006
month          10
day            03
svn rev        39566
language       R
version.string R version 2.4.0 (2006-10-03)
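A hedged workaround sketch for affected versions: drop the spurious trailing empty string after an over-counted read.  Note that nzchar() would also drop any legitimately empty strings in the data, so this is only safe when the file is known not to contain them.

    tf <- tempfile()
    writeBin(c("One", "Two", "Three"), con = tf)
    res <- readBin(con = tf, what = "character", n = 100)
    res <- res[nzchar(res)]   # keep only non-empty strings
    res                       # "One" "Two" "Three"
    unlink(tf)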
[Rd] wishlist: xlim in lines.polynomial (PR#9362)
Full_Name: Tamas K Papp
Version: 2.4.0
OS: linux
Submission from: (NULL) (140.180.166.160)

I was using the lines.polynomial method for plotting piecewise polynomials (parts of splines).  I needed a feature to limit the range of plotting using a parameter given to the function (as opposed to par("usr")).  I think that the following changes would be a nice addition:

lines.polynomial <- function (x, len = 100, xlim = par("usr")[1:2],
                              ylim = par("usr")[3:4], ...)
{
    p <- x
    x <- seq(xlim[1], xlim[2], len = len)
    y <- predict(p, x)
    y[y <= ylim[1] | y >= ylim[2]] <- NA
    lines(x, y, ...)
}

Package: polynom
Version: 1.2-3
Date: 2006-09-09
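A short usage sketch of the proposed xlim argument (the polynomial and the limits here are hypothetical; it assumes the patched method above has been sourced and the polynom package is installed):

    library(polynom)
    p <- poly.calc(c(-1, 0, 1))                 # cubic with zeros at -1, 0, 1
    plot(NA, xlim = c(-2, 2), ylim = c(-2, 2), xlab = "x", ylab = "y")
    lines.polynomial(p, xlim = c(-0.5, 1.5))    # drawn only over [-0.5, 1.5]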
Re: [Rd] bug in lrect [lattice]?
On 11/13/06, Ben Bolker <[EMAIL PROTECTED]> wrote:
>
> In lattice version 0.14-11 (2006/10/23), there appears
> to be a small bug in lrect [lattice]: border is set to NULL
> accidentally.

Yes, this bug was probably introduced while trying to fix PR#9307.  But that was three whole weeks ago.  Run update.packages() and try again :-)

-Deepayan
Re: [Rd] bug in acf (PR#9360)
> I don't think that's a reasonable expectation.  You've got an empty sum
> in the formula for the lag 1 autocovariance:
>
>     sum_{i=1}^{0} phi_i phi_{i+1}
>
> R is assuming that's not what you meant and is reporting it as an error.
> If it gave you any value, it should be zero, not phi^2.

I agree that the empty sum, which is the lag 1 autocovariance, should be zero, but this is the SECOND term in the $acf output.  For the first term:

1) if demean=FALSE, it is the variance, which is phi^2 as I suggested;
2) if demean=TRUE, it is variance/variance = 0/0, which I said should best be 1.
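A small R sketch of the values the poster is asking for with a length-one series (hypothetical phi; this is the desired behaviour, not what R 2.4.0 returns):

    phi  <- 0.7
    acv0 <- sum(phi * phi) / length(phi)   # lag 0, demean = FALSE: phi^2 = 0.49
    acv1 <- 0                              # lag 1: empty sum, hence 0
    c(acv0, acv1)
    ## with type = "correlation" and demean = TRUE the lag-0 term is 0/0,
    ## which the poster argues should be defined as 1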
Re: [Rd] wishlist: xlim in lines.polynomial (PR#9362)
'polynom' is a contributed package, so using R-bugs is inappropriate.  Please do read the R FAQ and note what it says about contributed packages.

Since in particular it is an R port of an S(-PLUS) library section, author != maintainer, and probably both should agree to this.

On Mon, 13 Nov 2006, [EMAIL PROTECTED] wrote:

> Full_Name: Tamas K Papp
> Version: 2.4.0
> OS: linux
> Submission from: (NULL) (140.180.166.160)
>
> I was using the lines.polynomial method for plotting piecewise polynomials
> (parts of splines).  I needed a feature to limit the range of plotting
> using a parameter given to the function (as opposed to par("usr")).  [...]
>
> Package: polynom
> Version: 1.2-3
> Date: 2006-09-09

--
Brian D. Ripley, [EMAIL PROTECTED]
Re: [Rd] bug in acf (PR#9360) S-Plus Addendum
Here is the result from S-Plus V.8.

> acf(1, lag.max=1, type="covariance", plot=F)
Call: acf(x = 1, lag.max = 1, type = "covariance", plot = F)

Autocovariances matrix:
  lag X2
1   0  0
2   1  0

Their function does not support the demean option, so the variance is zero, but they set 0/0 to 0 instead of 1.  The empty sum is zero (that is a standard mathematical convention, which S+ follows).

I still think *one* would be better for the correlation at lag *zero*, since the autocorrelation function at lag *zero* is defined as this in every other case.  As I tried to explain before, for some algorithms involving the computation of correlations and convolutions it is more convenient to have this first term (lag zero) always set to *one*.  Otherwise it has to be treated as a special case, as in my GetB function which I showed you.

I guess it is not that important, but I think it should be treated as more than a documentation bug!!  Knuth once said that after TeX reached Version 3.14 all bugs would just be declared "special features".  That would be another way to handle it too.  For "canned data analysis", as opposed to "programming", it really doesn't matter at all how you handle this problem.

Ian McLeod
Re: [Rd] readBin(what="character", n=overcount)->extra "" in result (PR#9361)
On Mon, 13 Nov 2006 [EMAIL PROTECTED] wrote:

> When I use readBin() to read an unknown number
> of null-terminated strings from a file by supplying
> an overcount as the n= argument, it appends an empty
> string to the result.
>
> > tf <- tempfile()
> > strings <- c("One","Two","Three")
> > writeBin(strings, con=tf)
> > readBin(con=tf, what="character", n=100)  # expect "One","Two","Three" only
> [1] "One"   "Two"   "Three" ""

I think the fix is to src/main/connections.c, where m (the number of items read) is initialized to 1 instead of 0.  It is later used to shorten the vector of read objects:

    if(m < n) {
        PROTECT(ans = lengthgets(ans, m));
        UNPROTECT(1);
    }

*** connections.c-orig  2006-09-13 19:05:06.0 -0700
--- connections.c       2006-11-13 11:46:35.0 -0800
***************
*** 2740,2746 ****
      if(!strcmp(what, "character")) {
          SEXP onechar;
          PROTECT(ans = allocVector(STRSXP, n));
!         for(i = 0, m = i+1; i < n; i++) {
              onechar = isRaw ? rawOneString(bytes, nbytes, &np)
                  : readOneString(con);
              if(onechar != R_NilValue) {
--- 2740,2746 ----
      if(!strcmp(what, "character")) {
          SEXP onechar;
          PROTECT(ans = allocVector(STRSXP, n));
!         for(i = 0, m = 0; i < n; i++) {
              onechar = isRaw ? rawOneString(bytes, nbytes, &np)
                  : readOneString(con);
              if(onechar != R_NilValue) {
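A regression-check sketch for the fix above: with m starting at 0, an over-counted read should yield exactly the three strings written (this is the expected post-patch behaviour, not output from a patched build):

    tf <- tempfile()
    writeBin(c("One", "Two", "Three"), con = tf)
    stopifnot(identical(readBin(con = tf, what = "character", n = 100),
                        c("One", "Two", "Three")))
    unlink(tf)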
Re: [Rd] readBin(what="character", n=overcount)->extra "" in result (PR#9363)
On Mon, 13 Nov 2006 [EMAIL PROTECTED] wrote: > When I use readBin() to read an unknown number > of null-terminated strings from a file by supplying > an overcount as the n= argument, it appends an empty > string to the result. > > > tf<-tempfile() > > strings<-c("One","Two","Three") > > writeBin(strings, con=tf) > > readBin(con=tf,what="character",n=100) # expect "One","Two","Three" only > [1] "One" "Two" "Three" "" I think the fix is to src/library/connections.c, where m (the number of items read) is initialized to 1 instead of 0. It is later used to shorten the vector of read objects: if(m < n) { PROTECT(ans = lengthgets(ans, m)); UNPROTECT(1); } *** connections.c-orig 2006-09-13 19:05:06.0 -0700 --- connections.c 2006-11-13 11:46:35.0 -0800 *** *** 2740,2746 if(!strcmp(what, "character")) { SEXP onechar; PROTECT(ans = allocVector(STRSXP, n)); ! for(i = 0, m = i+1; i < n; i++) { onechar = isRaw ? rawOneString(bytes, nbytes, &np) : readOneString(con); if(onechar != R_NilValue) { --- 2740,2746 if(!strcmp(what, "character")) { SEXP onechar; PROTECT(ans = allocVector(STRSXP, n)); ! for(i = 0, m = 0; i < n; i++) { onechar = isRaw ? rawOneString(bytes, nbytes, &np) : readOneString(con); if(onechar != R_NilValue) { __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] bug in acf (PR#9360)
On 11/13/2006 2:24 PM, A.I. McLeod wrote:
>> I don't think that's a reasonable expectation.  You've got an empty sum
>> in the formula for the lag 1 autocovariance:
>>
>>     sum_{i=1}^{0} phi_i phi_{i+1}
>>
>> R is assuming that's not what you meant and is reporting it as an error.
>> If it gave you any value, it should be zero, not phi^2.
>
> I agree that the empty sum, which is the lag 1 autocovariance, should be
> zero, but this is the SECOND term in the $acf output.
> For the first term:
> 1) if demean=FALSE, it is the variance, which is phi^2 as I suggested;
> 2) if demean=TRUE, it is variance/variance = 0/0, which I said should best
>    be 1.

Okay, I see what you mean now.  Yes, I agree acf should return lag 0 autocorrelations and autocovariances even for a series of length 1.  I'll take a look at the code.

Duncan Murdoch
Re: [Rd] readBin(what="character", n=overcount)->extra "" in result (PR#9361)
That's arrived after I had committed an identical fix.  It's a relic of a time when m had a different meaning.

Thanks!

Brian

On Mon, 13 Nov 2006, Bill Dunlap wrote:

> I think the fix is to src/main/connections.c, where
> m (the number of items read) is initialized to 1 instead
> of 0.  [...]

--
Brian D. Ripley, [EMAIL PROTECTED]
Re: [Rd] bug in acf (PR#9360)
On 11/13/2006 2:24 PM, A.I. McLeod wrote:
> I agree that the empty sum, which is the lag 1 autocovariance, should be
> zero, but this is the SECOND term in the $acf output.
> For the first term:
> 1) if demean=FALSE, it is the variance, which is phi^2 as I suggested;
> 2) if demean=TRUE, it is variance/variance = 0/0, which I said should best
>    be 1.

I've put this change into R-devel and R-patched now.

Duncan Murdoch
Re: [Rd] [R] Problem with file size
[Moved to r-devel for further discussion, bcc:ed r-help for the record]

Hi Benilton,

it is possible that this has to do with so-called unevaluated promises.  I had a similar problem a few months ago.  I emailed R-devel about it, "[Rd] save() saves extra stuff if object is not evaluated" (Thu May 25 09:19:28 CEST 2006) [https://stat.ethz.ch/pipermail/r-devel/2006-May/037859.html], and Luke Tierney gave a very good explanation of what was going on.  Since then save() has gained an extra argument 'eval.promises=TRUE', cf. the R v2.4.0 NEWS [http://cran.at.r-project.org/]:

  o  save() by default evaluates promise objects.  The old behaviour
     (to save the promise and its evaluation environment) can be
     obtained by setting the new argument 'eval.promises' to FALSE.
     (Note that this does not apply to promises embedded in objects,
     only to top-level objects.)

You are already running R v2.4.0, so this should apply to your session already.  However, note the last sentence in parentheses; it does not handle promises in embedded objects.  You could try to evaluate your environments/promises manually using, say, is.null(env) (as I did in my example in my May message) and see if it makes a difference.

Hope this helps

Henrik

On 11/14/06, Benilton Carvalho <[EMAIL PROTECTED]> wrote:
> Hi everyone,
>
> I have 2 environments (2 different R sessions) as described below:
>
> Session 1:
>
> Name of the environment: "CrlmmInfo"
> Objects in the environment:
>   index1: logical index - length 238304
>   index2: logical index - length 238304
>   priors: list of 4 - (matrix 6x6, 2 vectors of length 6, vector of
>           length 2) - all num
>   params: list of 4:
>     centers [238304 x 3 x 2]: num
>     scales  [238304 x 3 x 2]: num
>     N       [238304 x 3]: num
>     f0      [scalar]: num
>
> If I save this environment to a file, I get a file of 23MB.  Great.
>
> Session 2:
> Analogous to "Session 1", but replace 238304 by 262264.
>
> If I save the environment on Session 2, I get a file of 8.4GB.
>
> I applied object.size on each of the objects in each environment, and
> this is what I got:
>
> For Session 1:
>   index1: 16204864
>   index2: 16204864
>   priors: 3336
>   params: 74353584
>
> For Session 2:
>   index1: 1049096
>   index2: 1049096
>   priors: 3336
>   params: 81829104
>
> Is this increase from 23MB to 8.4GB expected to happen?
>
> Benilton
>
> sessionInfo() on both sessions is identical:
>
> > sessionInfo()
> R version 2.4.0 (2006-10-03)
> x86_64-unknown-linux-gnu
>
> locale:
> LC_CTYPE=en_US.iso885915;LC_NUMERIC=C;LC_TIME=en_US.iso885915;
> LC_COLLATE=en_US.iso885915;LC_MONETARY=en_US.iso885915;
> LC_MESSAGES=en_US.iso885915;LC_PAPER=en_US.iso885915;LC_NAME=C;
> LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.iso885915;
> LC_IDENTIFICATION=C
>
> attached base packages:
> [1] "methods"   "stats"     "graphics"  "grDevices" "utils"     "datasets"
> [7] "base"
>
> > version
>                _
> platform       x86_64-unknown-linux-gnu
> arch           x86_64
> os             linux-gnu
> system         x86_64, linux-gnu
> status
> major          2
> minor          4.0
> year           2006
> month          10
> day            03
> svn rev        39566
> language       R
> version.string R version 2.4.0 (2006-10-03)
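A hedged sketch of the manual workaround Henrik suggests (CrlmmInfo and the file name are taken from the report and are assumptions about the poster's session; whether this helps depends on where the promises actually live):

    env <- CrlmmInfo                          # assumed: the poster's environment
    for (nm in ls(envir = env, all.names = TRUE))
        is.null(get(nm, envir = env))         # get() forces a pending promise
    save(env, file = "CrlmmInfo.rda")         # hypothetical file name

Forcing each promise before save() means its (potentially huge) evaluation environment no longer needs to be serialized along with the object.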
[Rd] R.exp file for building packages
I am trying to port some C code from S-Plus (7.0.6) to R (2.4.0) under Windows XP SP2.  I use Visual C++ 6.0 to build my library for S-Plus, so I'd like to stick with that setup, if possible.  According to the README.packages file, I need the file R.exp (containing functions exported from R.dll, I'm guessing) to build an R package with VC++.

From what I've read in the archives of this list, R.exp should be installed in $RHOME/src/gnuwin32 if I chose an appropriate option back when I installed R from the binary.  I did a full install of R, but R.exp is nowhere to be found.

Is this file no longer included in the pre-compiled install for Windows?  Is there some other way I can recreate the R.exp file?  I am currently trying to hack something using

    dumpbin /exports "C:\program files\R\R-2.4.0\bin\R.dll" > exports

and editing "exports" into an R.exp file ... but I'm not too clear on what the file's supposed to look like.

Thanks,
Chris Green

Christopher G. Green (cggreen AT stat.washington.edu)
Graduate Student
Department of Statistics, Box 354322, Seattle, WA, 98195-4322, U.S.A.
http://www.stat.washington.edu/cggreen/
Re: [Rd] R.exp file for building packages
On Mon, 13 Nov 2006, Christopher G. Green (W) wrote:

> I am trying to port some C code from S-Plus (7.0.6) to R (2.4.0) under
> Windows XP SP2.  I use Visual C++ 6.0 to build my library for S-Plus, so
> I'd like to stick with that setup, if possible.  According to the
> README.packages file, I need the file R.exp (containing functions exported
> from R.dll, I'm guessing) to build an R package with VC++.
>
> From what I've read in the archives of this list, R.exp should be installed
> in $RHOME/src/gnuwin32 if I chose an appropriate option back when I
> installed R from the binary.  I did a full install of R, but R.exp is
> nowhere to be found.

You are reading about old versions of R: R.exp has not been part of R for about a year.

> Is this file no longer included in the pre-compiled install for Windows?
> Is there some other way I can recreate the R.exp file?  [...]

Please do read more carefully README.packages, which says

  Using Visual C++

  You may if you prefer use Visual C++ to make the DLLs (unless they use
  Fortran source!).  First build the import library Rdll.lib by

      make R.exp
      lib /def:R.exp /out:Rdll.lib

You generate the file using the first of these lines: it starts

  LIBRARY R.dll
  EXPORTS
   ATTRIB
   AllDevicesKilled
   BODY
   Brent_fmin

--
Brian D. Ripley, [EMAIL PROTECTED]
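Once Rdll.lib exists, the remaining step is compiling and linking the package sources against it.  A hedged sketch with hypothetical file names (README.packages gives the authoritative flags, which may differ from these):

    cl /MT /c mycode.c
    link /dll /out:mycode.dll mycode.obj Rdll.lib

The resulting mycode.dll can then be loaded into R with dyn.load().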