Re: [Rd] compress defaults for save() and save.image()
> On Thu, 30 Mar 2006 18:23:11 +0100 (BST), Prof Brian Ripley (PBR) wrote:
> I have changed the default in save() to compress = !ascii. This seems
> quite safe, as almost always save() is called explicitly and people will
> appreciate that it might take a little time to save large objects (and
> depending on your system, compression could even be faster).

Very good idea.

> Should we also change the default in save.image()? That is almost always
> used implicitly, via q() or a menu. There are arguments that it is a
> more serious change so should not be done at the end of the release cycle,
> and also that large .RData files are something people would want to avoid.
> BTW, the defaults can be changed via options() (see ?save): has anyone
> ever found that useful?

Yes, I and many people I know have

    options(save.defaults = list(compress = TRUE))

in our settings, and I'd like to keep the option version so that I am
independent of whatever the function defaults are.

> Liaw, Andy wrote:
> - I always use save(..., compress=TRUE), without exception.
> - The only time I'd use save.image() is when I need to break a remote
>   connection on short notice.
> - I have not used options() to set the default simply because I have not
>   figured out how exactly to do it. From what I remember, the doc simply
>   says it can be done, but does not explicitly say how.

Umm, there is an explicit example in help(save); I have added a pointer
from the main text of the help page.

Best,
Fritz

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
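For readers who, like Andy, have not found the recipe: a minimal sketch of the options() route discussed above, following the documented "save.defaults" option from ?save (placing it in a startup file such as ~/.Rprofile is the usual practice):

```r
# Make explicit save() calls compress by default; this is picked up by
# save() unless the caller overrides compress= directly.
options(save.defaults = list(compress = TRUE))

# Confirm the setting took effect
getOption("save.defaults")$compress
```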
[Rd] Bug in barplot (PR#8738)
Hi,

There seems to be a bug in barplot(). It occurs when specifying the ylim
argument. It's best shown by example:

    dat <- matrix(c(1,1,1,2,2,2,3,3,3), ncol=3)
    barplot(dat, beside=TRUE)
    X11()
    barplot(dat, beside=TRUE, ylim=c(1,3))

As can be seen, drawing of the graph seems to disregard the margins of
the plot. I'm using R 2.2.1 on Windows.

Thanks,
Mick
Re: [Rd] Bug in barplot (PR#8738)
It is not a bug: please read the help file:

    xpd: logical. Should bars be allowed to go outside region?

and note that the default is TRUE.

On Fri, 31 Mar 2006, [EMAIL PROTECTED] wrote:

> There seems to be a bug in barplot(). It occurs when specifying the
> ylim argument. It's best shown by example:
>
>     dat <- matrix(c(1,1,1,2,2,2,3,3,3), ncol=3)
>     barplot(dat, beside=TRUE)
>     X11()
>     barplot(dat, beside=TRUE, ylim=c(1,3))
>
> As can be seen, drawing of the graph seems to disregard the margins of
> the plot.

As documented.

> I'm using R 2.2.1 on Windows

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel: +44 1865 272861 (self)
1 South Parks Road,                    +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax: +44 1865 272595
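For anyone hitting the same surprise, a sketch of the documented behaviour: passing xpd = FALSE clips the bars to the plot region instead of letting them spill into the margins (data and ylim taken from the report above; the throwaway pdf device is only so the example also runs headless):

```r
dat <- matrix(c(1,1,1,2,2,2,3,3,3), ncol = 3)

# With xpd = FALSE the bars are clipped at ylim's lower bound rather
# than being drawn down through the figure margins.
pdf(file = tempfile(fileext = ".pdf"))
barplot(dat, beside = TRUE, ylim = c(1, 3), xpd = FALSE)
dev.off()
```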
Re: [Rd] Fortran and C entry point problem.
You need to tell us what platform you're on, what compilers you're
using, etc. The details vary.

Duncan Murdoch

On 3/30/2006 10:56 PM, Steve Su wrote:

> Dear All,
>
> I have seen a number of emails on this topic but I have not seen a
> general solution to date. I have Fortran and C source codes and they
> have been compiled successfully using:
>
>     R CMD build mypackage
>
> and
>
>     R CMD install mypackage
>
> without error messages.
>
> I then open R, test out two functions, and get:
>
>     > pgl(0.2,1,2,3,4)
>     Error in .C("gl_fmkl_distfunc", lambdas[1], lambdas[2], lambdas[3],
>     lambdas[4], :
>             C entry point "gl_fmkl_distfunc" not in DLL for package
>             "mypackage"
>     > runif.sobol(10,5)
>     Error in .Fortran("sobol", as.double(qn), as.integer(n),
>     as.integer(dimension), :
>             Fortran entry point "sobol_" not in DLL for package "mypackage"
>
> Alternatively, running R CMD check mypackage will also pick up these
> errors when it tries to run the examples.
>
> I have used Fortran and C codes already available on CRAN, so I am unsure
> what is the missing link to make these work.
>
> Can anyone provide an explanation and a procedure to overcome this
> problem?
>
> Thanks.
>
> Steve.
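One common cause of "entry point not in DLL" (a guess only, since the platform details are missing from the question) is that the package never loads its compiled code at all. A sketch of the idiom of that era, assuming the shared library carries the package's name; the file and function names here are hypothetical:

```r
# zzz.R -- hypothetical load hook for the package. library.dynam() loads
# mypackage.so / mypackage.dll from the installed package's libs/
# directory, after which .C() and .Fortran() can resolve its symbols.
.First.lib <- function(libname, pkgname) {
    library.dynam("mypackage", pkgname, libname)
}
```

In current R the same effect is normally achieved with a `useDynLib(mypackage)` directive in the package NAMESPACE instead of a load hook.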
Re: [Rd] [R] R garbage collection
The allocations described as user-controlled are not part of the
GC-managed heap. The ones described as transient happen to be, but that
is not user-visible; alternate implementations might ensure that they
are freed as soon as the appropriate context is left. If you are
interested in managing the lifetime of heap-allocated objects, then you
can register roots for the GC with R_PreserveObject.

luke

On Fri, 31 Mar 2006, Prof Brian Ripley wrote:

> On Thu, 30 Mar 2006, Jeff Henrikson wrote:
>
>> r-help,
>
> [Moved to R-devel.]
>
>> The R manual lists two types of memory: transient and user-controlled.
>> If I have transient blocks reachable from the globals only by traversal
>> through user-controlled blocks, will they be correctly preserved?
>
> I don't understand your terminology, especially 'traversal'. It is not
> normal to have either type of block reachable through R objects, and if
> you are using something like external pointers, the answer would be no.
>
>> Secondly, what are the ways to mark user-controlled blocks as "roots"
>> for the garbage collector, so that transient blocks they reference stay
>> uncollected? So far I can only deduce that as long as the answer to my
>> first question is yes, I can bind an arbitrary symbol to them in the
>> global environment. Is this the best way?
>
> I think you are referring to blocks allocated by R_alloc. The manual says
>
>     This memory is taken from the heap, and released at the end of the
>     .C, .Call or .External call. Users can also manage it, by noting the
>     current position with a call to vmaxget and clearing memory
>     allocated subsequently by a call to vmaxset. This is only
>     recommended for experts.
>
> If you want to allocate storage as part of an R object, this is not the
> best way to do it (allocVector etc are). It is a side-effect of the
> current implementation that memory allocated by R_alloc which is made
> part of an object will be protected for the lifetime of that object, but
> this is not documented and should not be relied on. (I am thinking of,
> for example, a block made into a CHARSXP 'by hand'; but the documented
> route is mkChar, which makes a copy.)
>
>> Jeff Henrikson
>>
>> PLEASE do read the posting guide!
>> http://www.R-project.org/posting-guide.html
>
> Please do, including the question 'which list': this clearly belongs on
> R-devel.

--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa                    Phone: 319-335-3386
Department of Statistics and          Fax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall                    email: [EMAIL PROTECTED]
Iowa City, IA 52242                   WWW: http://www.stat.uiowa.edu
[Rd] Run t-test on multiple time series
I have two sets of time series that I imported from Excel using RODBC
and placed in "securities" and "factors". What I need to do is generate
t-scores for each security/factor pair. I tried the following:

    t1 <- t.test(securities[,3:42], factors[,2:41], var.equal=TRUE)

However, this gives me just a single t-score. Note: the numeric values
that I need are in columns 3 to 42 for securities and 2 to 41 for
factors.

I'm pretty new to R, so I was wondering if I could use something similar
to the following, where some function calculates against all values
(all meaning each security or each factor):

    > x <- c(1, 2, 3, 4, 5)
    > y <- x + 1
    > y
    [1] 2 3 4 5 6

Or do I need to iterate through the list of securities and factors and
generate a t-score for each pair one at a time?

Thanks,
- Bruce
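The vectorised idiom the poster is reaching for does exist: mapply() walks two sets of columns in parallel, so no explicit loop is needed. A sketch on made-up data (the real call would pass securities[3:42] and factors[2:41]; var.equal = TRUE follows the question):

```r
# Hypothetical stand-ins for the poster's data: data frames whose columns
# are the individual security and factor series, paired by position.
set.seed(1)
securities <- as.data.frame(matrix(rnorm(200), ncol = 4))
factors    <- as.data.frame(matrix(rnorm(200), ncol = 4))

# One equal-variance t-statistic per security/factor column pair;
# mapply() applies the function to (column i of securities, column i of
# factors) for each i, returning a named numeric vector.
tscores <- mapply(function(s, f) t.test(s, f, var.equal = TRUE)$statistic,
                  securities, factors)
tscores
```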
Re: [Rd] Writing character vectors with embedded nulls to a connection
The following approach

    sobject <- charToRaw(serialize(object, NULL))
    len <- length(sobject)
    writeBin(sobject, outcon)

would appear to work. As from 2.3.0 you will then be able to do

    unserialize(readBin(incon, "raw", n=len))

On Fri, 31 Mar 2006, Prof Brian Ripley wrote:

> I think you should be using a raw type to hold such data in R. It is
> not intentional that readChar handles embedded nuls (and in fact it
> might not in an MBCS).
>
> As ?serialize says
>
>     For 'serialize', 'NULL' unless 'connection=NULL', when the result
>     is stored in the first element of a character vector (but is not a
>     normal character string unless 'ascii = TRUE' and should not be
>     processed except by 'unserialize').
>
> so you have been told this is not intended to work as you tried.
>
> serialize predates the raw type, or it would have made use of it. In
> these days of MBCS character strings it is increasingly unsafe to use
> them to hold anything other than valid character data.
>
> On Thu, 30 Mar 2006, Jeffrey Horner wrote:
>
>> Is this possible? I've tried both writeChar() and writeBin() to no
>> avail.
>>
>> My goal is to serialize(ascii=FALSE) an object to a connection but
>> determine the size of the serialized object beforehand:
>>
>>     sobject <- serialize(object, NULL, ascii=FALSE)
>>     len <- nchar(sobject)
>>     # run some code here to notify listener on other end of connection
>>     # how many bytes I'm getting ready to send
>>     writeChar(sobject, con)
>>
>> The other option is to serialize twice:
>>
>>     len <- nchar(serialize(object, NULL, ascii=FALSE))
>>     # run some code here to notify listener on other end of connection
>>     # how many bytes I'm getting ready to send
>>     serialize(object, con, ascii=FALSE)
>>
>> Object stores, like memcached (http://danga.com/memcached/), need to
>> know object sizes before storing. RDBMSs which support large objects
>> (CLOBs or BLOBs) don't necessarily need to know object sizes
>> beforehand, but they do have max column size limits which must be
>> honored.
>>
>> BTW, readChar() can read strings with embedded nulls; I figured
>> writeChar() should be able to write them.

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel: +44 1865 272861 (self)
1 South Parks Road,                    +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax: +44 1865 272595
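A round-trip sketch of the length-then-payload scheme above, using a raw connection as a stand-in for the real socket. Note that in R >= 2.4 serialize(obj, NULL) returns a raw vector directly, so the charToRaw() step from the thread is no longer needed:

```r
obj <- list(a = 1:3, b = letters)   # arbitrary example object

sobj <- serialize(obj, NULL)   # raw vector; size known before sending
len  <- length(sobj)           # ... announce 'len' to the listener here

# Write the payload to a raw buffer (the stand-in for outcon)
out <- rawConnection(raw(0), "w")
writeBin(sobj, out)
buf <- rawConnectionValue(out)
close(out)

# Read exactly 'len' bytes back (the stand-in for incon) and unserialize
inc  <- rawConnection(buf, "r")
copy <- unserialize(readBin(inc, "raw", n = len))
close(inc)

identical(copy, obj)
```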
[Rd] package?
When I use package? the author field gets reproduced twice: once as the
raw \author{ } string and a second time formatted.

Also, would it be possible to make package? find the overview without
the package being attached, or at least give a more informative error
message?

Paul Gilbert
[Rd] Segfault with too many menu items on Rgui
Hi all,

In the CHANGES file for R-2.3.0alpha, there is the following statement:

    winMenuAdd() now has no limits on the number of menus or items, and
    names are now limited to 500 (not 50) bytes.

However, I can reproducibly get a segfault using this (admittedly silly)
example:

    for (i in 1:5) winMenuAdd(paste("Test", letters[i], sep=""))
    for (i in 1:5)
        for (j in 1:24)
            winMenuAddItem(paste("Test", letters[i], sep=""),
                           as.character(j),
                           paste(rep(letters[j], 4), collapse=""))

This is probably almost never a problem, but many Bioconductor packages
have vignettes that are added to a 'Vignettes' menu item. If you load
enough of these packages you will get a segfault.

    > version
                   _
    platform       i386-pc-mingw32
    arch           i386
    os             mingw32
    system         i386, mingw32
    status         alpha
    major          2
    minor          3.0
    year           2006
    month          03
    day            29
    svn rev        37607
    language       R
    version.string Version 2.3.0 alpha (2006-03-29 r37607)

Best,
Jim

James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623
[Rd] Sys.sleep() burns up CPU on Solaris 8
I noticed that R was burning up 100% of a CPU when a call to Sys.sleep()
was made. Upon investigation, I discovered that R_checkActivityEx() in
src/unix/sys-std.c was putting the entire timeout (in microseconds) into
the struct timeval tv_usec member, leaving tv_sec set to 0.

I don't know about other Unix variants, but Solaris requires that the
timeout value be normalized (i.e. 0 <= tv_usec < 1000000). Because a
value of 1000000 or more was being placed in tv_usec, select() was
returning an error (EINVAL) immediately, causing R_checkActivityEx() to
return immediately. This caused do_syssleep() to loop continuously until
the desired time had elapsed. A rather sleepless sleep ;-).

The following patch against R-2.2.1 fixes the problem. Note that the
test of the return value of R_SelectEx() was also incorrect; although
harmless, it was treating the error return (-1) as a successful return.

stephen pope
[EMAIL PROTECTED]

Here's the patch.

    [EMAIL PROTECTED]:/vobs/products/R> gdiff -ub src/unix/sys-std.c@@/main/3 src/unix/sys-std.c
    --- src/unix/sys-std.c@@/main/3  Thu Jan 12 11:39:55 2006
    +++ src/unix/sys-std.c           Fri Mar 31 23:12:16 2006
    @@ -294,13 +294,13 @@
             else onintr();
         }
    -    tv.tv_sec = 0;
    -    tv.tv_usec = usec;
    +    tv.tv_sec = usec / 1000000;
    +    tv.tv_usec = usec % 1000000;
         maxfd = setSelectMask(R_InputHandlers, &readMask);
         if (ignore_stdin)
             FD_CLR(fileno(stdin), &readMask);
         if (R_SelectEx(maxfd+1, &readMask, NULL, NULL,
    -               (usec >= 0) ? &tv : NULL, intr))
    +               (usec >= 0) ? &tv : NULL, intr) > 0)
             return(&readMask);
         else
             return(NULL);
Re: [Rd] Sys.sleep() burns up CPU on Solaris 8
That looks like a Solaris quirk. Normalization of the values is not
required by the POSIX standard, nor by the other implementations we have
tested. The Solaris 7 man page does mention the restriction, though.

We will change it for 2.3.0; thank you.

On Fri, 31 Mar 2006, Stephen C. Pope wrote:

> I noticed that R was burning up 100% of a CPU when a call to
> Sys.sleep() was made. Upon investigation, I discovered that
> R_checkActivityEx() in src/unix/sys-std.c was putting the entire
> timeout (in microseconds) into the struct timeval tv_usec member,
> leaving tv_sec set to 0.
>
> I don't know about other Unix variants, but Solaris requires that the
> timeout value be normalized (i.e. 0 <= tv_usec < 1000000). Because a
> value of 1000000 or more was being placed in tv_usec, select() was
> returning an error (EINVAL) immediately, causing R_checkActivityEx() to
> return immediately. This caused do_syssleep() to loop continuously
> until the desired time had elapsed. A rather sleepless sleep ;-).
>
> The following patch against R-2.2.1 fixes the problem. Note that the
> test of the return value of R_SelectEx() was also incorrect; although
> harmless, it was treating the error return (-1) as a successful return.
>
> stephen pope
> [EMAIL PROTECTED]
>
> Here's the patch.
>
>     [EMAIL PROTECTED]:/vobs/products/R> gdiff -ub src/unix/sys-std.c@@/main/3 src/unix/sys-std.c
>     --- src/unix/sys-std.c@@/main/3  Thu Jan 12 11:39:55 2006
>     +++ src/unix/sys-std.c           Fri Mar 31 23:12:16 2006
>     @@ -294,13 +294,13 @@
>             else onintr();
>         }
>     -    tv.tv_sec = 0;
>     -    tv.tv_usec = usec;
>     +    tv.tv_sec = usec / 1000000;
>     +    tv.tv_usec = usec % 1000000;
>         maxfd = setSelectMask(R_InputHandlers, &readMask);
>         if (ignore_stdin)
>             FD_CLR(fileno(stdin), &readMask);
>         if (R_SelectEx(maxfd+1, &readMask, NULL, NULL,
>     -               (usec >= 0) ? &tv : NULL, intr))
>     +               (usec >= 0) ? &tv : NULL, intr) > 0)
>             return(&readMask);
>         else
>             return(NULL);

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel: +44 1865 272861 (self)
1 South Parks Road,                    +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax: +44 1865 272595