[Rd] protect/unprotect howto in C code
Hi,

I'm currently trying to debug an 'error in unprotect: stack imbalance' problem, and I have two basic questions on the use of PROTECT and UNPROTECT which I could not figure out:

- Which objects have to be protected? Namely, if the code is something like:

      SEXP fun, e;
      /* get the expression e ... */
      fun = eval(e, R_GlobalEnv);
      /* or like this?: PROTECT(fun = eval(e, R_GlobalEnv)); */
      PROTECT(fun = VECTOR_ELT(fun, 1));
      /* do more things with fun ... */

  does one need to protect the result of a call to 'eval' immediately? And how about R_tryEval? While searching for code examples in the sources, I found both protected and (fewer) unprotected evals.

- Can someone give a hint (or point to some documents) on a way to simplify debugging such problems, in addition to using gdb, please? I thought about temporarily defining macros such as

      #define DEBUG_Protect(x) PROTECT(x); fprintf(stderr, "Protecting in %s, l: %d\n", __FILE__, __LINE__)
      #define UNDEBUG_Protect(x) fprintf(stderr, "Unprotecting %d in %s, l: %d\n", x, __FILE__, __LINE__); UNPROTECT(x)

  and then temporarily replacing all calls in the package source. But there must be a better way...

Thank you very much (and my apologies if this sounds odd to more experienced C programmers ;) )

Michael
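A slightly safer shape for such wrapper macros, purely a sketch of the same idea with made-up names, could be the following; the do { } while (0) wrapping makes each macro expand to a single statement so it also behaves inside unbraced if/else:

      #include <Rinternals.h>
      #include <stdio.h>

      /* Log every PROTECT/UNPROTECT together with its source location. */
      #define DEBUG_PROTECT(x) \
          do { \
              PROTECT(x); \
              fprintf(stderr, "PROTECT      at %s, l: %d\n", __FILE__, __LINE__); \
          } while (0)

      #define DEBUG_UNPROTECT(n) \
          do { \
              fprintf(stderr, "UNPROTECT %d at %s, l: %d\n", (n), __FILE__, __LINE__); \
              UNPROTECT(n); \
          } while (0)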
Re: [Rd] str() with attr(*, "names") is extremely slow for long vectors
> "MartinM" == Martin Maechler <maechler at stat.math.ethz.ch>
>     Sat, May 13 2006 15:16:19 +0200 writes:

    MartinM> But have you looked at R 2.3.0-patched at all?
    MartinM>
    MartinM> I did acknowledge that str() had become
    MartinM> unacceptably slow, and had implemented a simple patch
    MartinM> almost "immediately".

> Yes, I did. Here are the timings (WinXP 1.8 GHz):
>
> R 2.3.0 Patched (2006-05-11 r38037)
>
> 1.  44.09  0.09  44.45  NA  NA
> 2.  34.96  0.08  35.66  NA  NA
> 3.  34.52  0.07  34.81  NA  NA

When I made the test I used an incomplete version of R-patched (the new version of the utils package was missing). With the complete version of R-patched the timings are now the same as with R 2.2.0.

Gerhard

DI Gerhard Thallinger                        E-mail: [EMAIL PROTECTED]
Institute for Genomics and Bioinformatics    Web:    http://genome.tugraz.at
Graz University of Technology                Tel:    +43 316 873 5343
Petersgasse 14/V                             Fax:    +43 316 873 5340
8010 Graz, Austria                           Map:    http://genome.tugraz.at/Loc.html
[Rd] R 2.3.1 scheduled for June 1
We plan to release R version 2.3.1 on June 1, in order to clean up a couple of embarrassments and platform-specific build issues in 2.3.0. Beta releases will be available starting this Friday.

-- 
   O__         Peter Dalgaard              Øster Farimagsgade 5, Entr.B
  c/ /'_ ---   Dept. of Biostatistics      PO Box 2099, 1014 Cph. K
 (*) \(*) --   University of Copenhagen    Denmark      Ph:  (+45) 35327918
~~         -   ([EMAIL PROTECTED])                       FAX: (+45) 35327907
Re: [Rd] protect/unprotect howto in C code
On Wed, 17 May 2006, Michael Dondrup wrote:

> Hi,
>
> I'm currently trying to debug a 'error in unprotect: stack imbalance' problem
> and I am curious about two basic questions on the use of PROTECT and
> UNPROTECT, which I could not figure out:
>
> - which objects have to be protected, namely, if the code is something like:
>
>       SEXP fun, e;
>       /* get the expression e ... */
>       fun = eval(e, R_GlobalEnv);
>       /* or like this?: PROTECT(fun = eval(e, R_GlobalEnv)); */
>       PROTECT(fun = VECTOR_ELT(fun, 1));
>       /* do more things with fun ... */
>
> does one need to protect the result of a call to 'eval' immediately? And how
> about R_tryEval?
> While searching for code examples in the sources, I found both protected and
> (fewer) unprotected evals.

The first rule is that any newly created R object needs to be protected before the garbage collector runs, and unprotected before exiting the function and after the last time the garbage collector runs.

The second rule is that protection applies to the contents of a variable (the R object), not to the variable itself.

The third rule is that protecting an object protects all its elements.

In the example above

    fun = eval(e, R_GlobalEnv);

may create a new object (it might just return a pointer to an existing function) and so probably needs to be protected.

On the other hand

    fun = VECTOR_ELT(fun, 1);

does not then need protecting. Since fun is protected, its second element is also protected.

So

    PROTECT(fun = eval(e, R_GlobalEnv));
    fun = VECTOR_ELT(fun, 1);
    /* do more stuff with fun */
    UNPROTECT(1);

If you don't know exactly which functions might return a new object or trigger the garbage collector, it is probably safe to assume that anything might [this is the advice in 'Writing R Extensions']. Unless you are getting close to the limits of the pointer protection stack (eg in recursive algorithms), you might be safer writing code like

    PROTECT(fun = eval(e, R_GlobalEnv));
    PROTECT(fun = VECTOR_ELT(fun, 1));
    /* do more stuff with fun */
    UNPROTECT(2);

but I think it is useful to know that the vector accessors and mutators do not allocate memory.

A stack imbalance is often due to different numbers of PROTECTs on different code paths. These are slightly annoying and become more frequent if you use more PROTECTs. On the other hand, R does detect them for you. If you don't use enough PROTECTs you get bugs that are very hard to track down [the best bet is probably valgrind + gctorture() to provoke them into showing themselves early, but that's only available on Linux].

-thomas
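Putting those rules together in one place, a minimal self-contained sketch of the pattern could look like the following (the entry point name, the element index and the length-1 result are made up purely for illustration; this would be called from R via .Call()):

    #include <Rinternals.h>

    /* Evaluate a call, take one element of the result, allocate a new
       vector, and keep the PROTECT count balanced on every path. */
    SEXP my_call(SEXP e)
    {
        SEXP fun, ans;

        PROTECT(fun = eval(e, R_GlobalEnv));   /* eval() may allocate */
        fun = VECTOR_ELT(fun, 1);              /* no allocation here; still
                                                  protected via its parent */

        PROTECT(ans = allocVector(REALSXP, 1));
        REAL(ans)[0] = length(fun);            /* "do more stuff with fun" */

        UNPROTECT(2);                          /* matches the two PROTECTs */
        return ans;
    }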
Re: [Rd] optim "CG" bug w/patch proposal (PR#8786)
> "Duncan" == Duncan Murdoch <[EMAIL PROTECTED]>
>     on Tue, 16 May 2006 08:34:06 -0400 writes:

    Duncan> On 5/16/2006 4:56 AM, [EMAIL PROTECTED]
    Duncan> wrote:
    >> Probably I included too much at once in my bug report. I
    >> can live with an unfulfilled wishlist and thank you for
    >> thinking about it. The "badly-behaved" function is just
    >> an example to demonstrate the bug I reported. I think it
    >> is a bug if optim returns (without any warning) an
    >> unmatching pair of par and value: f(par) != value. And it
    >> is easily fixed.
    >>
    >> Andreas

    Duncan> I agree with you that on return f(par) should be
    Duncan> value. I agree with Brian that changes to the
    Duncan> underlying strategy need much more thought.

I agree (to both). However, isn't Andreas' patch just fixing the problem and not changing the underlying strategy at all? [No, I did not study the code in very much detail ...]

Martin Maechler

    >> Prof Brian Ripley wrote:
    >>
    >>> [Sorry for the belated reply: this came in just as I was leaving for a
    >>> trip.]
    >>>
    >>> I've checked the original source, and the C code in optim does
    >>> accurately reflect the published algorithm.
    >>>
    >>> Since your example is a discontinuous function, I don't see why you
    >>> expect CG to work on it. John Nash reports on his extensive
    >>> experience that method 3 is the worst, and I don't think we should let
    >>> a single 2D example of a badly-behaved function override that.
    >>>
    >>> Note that no other optim method copes with the discontinuity here: had
    >>> you reported that, it would have been clear that the problem was with
    >>> the example.
    >>>
    >>> On Fri, 21 Apr 2006, [EMAIL PROTECTED] wrote:

    Dear R team,

    when using optim with method "CG" I got the wrong $value for the
    reported $par.

    Example:

        f <- function(p) {
            if (!all(p > -.7)) return(2)
            if (!all(p < .7)) return(2)
            sin((p[1])^2) * sin(p[2])
        }
        optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=1))
        $par 19280.68 -10622.32
        $value -0.2346207        # should be 2!

        optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=2))
        $par 3834.021 -2718.958
        $value -0.0009983175     # should be 2!

    Fix:

        --- optim.c     (Revision 37878)
        +++ optim.c     (Arbeitskopie)
        @@ -970,7 +970,8 @@
                     if (!accpoint) {
                         steplength *= stepredn;
                         if (trace) Rprintf("*");
        -            }
        +            } else
        +                *Fmin = f;
                 }
             } while (!(count == n || accpoint));
             if (count < n) {

    After fix:

        optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=1))
        $par 0.6993467 -0.4900145
        $value -0.2211150
        optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=2))
        $par 3834.021 -2718.958
        $value 2

    Wishlist:
    >>>
    >> [wishlist deleted]
Re: [Rd] protect/unprotect howto in C code
Thank you very much, Thomas!

Thanks to the explanation, I think I could almost track down that bug. May I, just for clarification, ask a further bunch of questions (sorry). From what you say, did I get it right:

- 'error in unprotect: stack imbalance' is only a warning; it will not cause termination, unless R is running as an embedded process (I'm working with the RSPerl package in perl here)?

- Forgetting to unprotect a value is harmless, and will only provoke these warnings?

- If the protect/unprotect is unbalanced within a function call, R will give the warning/error already at the exit of this specific function?

- If that is the case, what if I want to return a pointer to a value from a function? Do I have to unprotect it anyway, before?

btw: I'm working on FreeBSD; I found an experimental port of valgrind, too.

Thank you very much again!

Michael

On Wednesday 17 May 2006 16:55 Thomas Lumley wrote:
> On Wed, 17 May 2006, Michael Dondrup wrote:
> > Hi,
> >
> > I'm currently trying to debug a 'error in unprotect: stack imbalance'
> > problem and I am curious about two basic questions on the use of PROTECT
> > and UNPROTECT, which I could not figure out:
> >
> > - which objects have to be protected, namely, if the code is something
> > like:
> >
> >       SEXP fun, e;
> >       /* get the expression e ... */
> >       fun = eval(e, R_GlobalEnv);
> >       /* or like this?: PROTECT(fun = eval(e, R_GlobalEnv)); */
> >       PROTECT(fun = VECTOR_ELT(fun, 1));
> >       /* do more things with fun ... */
> >
> > does one need to protect the result of a call to 'eval' immediately? And
> > how about R_tryEval?
> > While searching for code examples in the sources, I found both protected
> > and (fewer) unprotected evals.
>
> The first rule is that any newly created R object needs to be protected
> before the garbage collector runs, and unprotected before exiting the
> function and after the last time the garbage collector runs.
>
> The second rule is that protection applies to the contents of a variable
> (the R object), not to the variable itself.
>
> The third rule is that protecting an object protects all its elements.
>
> In the example above
>     fun = eval(e, R_GlobalEnv);
> may create a new object (it might just return a pointer to an existing
> function) and so probably needs to be protected.
>
> On the other hand
>     fun = VECTOR_ELT(fun, 1);
> does not then need protecting. Since fun is protected, its second element
> is also protected.
>
> So
>     PROTECT(fun = eval(e, R_GlobalEnv));
>     fun = VECTOR_ELT(fun, 1);
>     /* do more stuff with fun */
>     UNPROTECT(1);
>
> If you don't know exactly which functions might return a new object or
> trigger the garbage collector, it is probably safe to assume that anything
> might [this is the advice in 'Writing R Extensions']. Unless you are
> getting close to the limits of the pointer protection stack (eg in
> recursive algorithms), you might be safer writing code like
>     PROTECT(fun = eval(e, R_GlobalEnv));
>     PROTECT(fun = VECTOR_ELT(fun, 1));
>     /* do more stuff with fun */
>     UNPROTECT(2);
> but I think it is useful to know that the vector accessors and mutators do
> not allocate memory.
>
> A stack imbalance is often due to different numbers of PROTECTs on
> different code paths. These are slightly annoying and become more frequent
> if you use more PROTECTs. On the other hand, R does detect them for you.
>
> If you don't use enough PROTECTs you get bugs that are very hard to track
> down [the best bet is probably valgrind + gctorture() to provoke them into
> showing themselves early, but that's only available on Linux].
>
> -thomas
Re: [Rd] R CMD SHLIB
On Wed, 17 May 2006, Martin Maechler wrote:

> > "TL" == Thomas Lumley <[EMAIL PROTECTED]>
> >     on Tue, 16 May 2006 10:15:11 -0700 (PDT) writes:
>
>     TL> On Tue, 16 May 2006, Prof Brian Ripley wrote:
>     >> It is possible to do things like
>     >>
>     >>     env PKG_LIB="-L/opt/foo/lib -lbar" R CMD SHLIB *.c
>     >>
>     >> to add libraries to the creation of a shared object, but
>     >> I have from time to time wondered if we should allow
>     >>
>     >>     R CMD SHLIB *.c -L/opt/foo/lib -lbar
>     >>
>     >> not least as users seem to expect it to work. It looks
>     >> simple to do (at least under Unix) if we pass -L* -l* *.a
>     >> directly to the link command.
>     >>
>     >> Would this be worthwhile?
>
>     TL> Yes.
>
>     TL> My only reservation is that users may then expect all
>     TL> compiler/linker flags to work, not just -L/-l
>
> I had exactly the same thought.
>
> Maybe Brian's proposal can be extended into
>
>     "all switches that are not recognized by 'R CMD SHLIB' are
>      passed to compiler / linker"
>
> hmm, or maybe not, since the question quickly becomes *which* are
> passed to the compiler and which to the linker (and which to both?) ...

I'd rather have SHLIB complain if it sees a -flag that SHLIB doesn't recognize. Otherwise we get portability problems. E.g., when using the Microsoft C compiler and linker, a SHLIB that knows about the -l and -L flags can translate

    -lfoo -L/dir/subdir

into LDFLAGS that link.exe knows about:

    foo.lib /libpath:\dir\subdir

If you need other linker flags, could they be in compiler/platform-specific Makevars-? I think other ones are not very common.

Bill Dunlap
Insightful Corporation
bill at insightful dot com
360-428-8146

"All statements in this message represent the opinions of the author and do not necessarily reflect Insightful Corporation policy or position."
Re: [Rd] optim "CG" bug w/patch proposal (PR#8786)
On Wed, 17 May 2006, [EMAIL PROTECTED] wrote:

> >> "Duncan" == Duncan Murdoch <[EMAIL PROTECTED]>
> >>     on Tue, 16 May 2006 08:34:06 -0400 writes:
>
>     Duncan> On 5/16/2006 4:56 AM, [EMAIL PROTECTED]
>     Duncan> wrote:
>     >>> Probably I included too much at once in my bug report. I
>     >>> can live with an unfulfilled wishlist and thank you for
>     >>> thinking about it. The "badly-behaved" function is just
>     >>> an example to demonstrate the bug I reported. I think it
>     >>> is a bug if optim returns (without any warning) an
>     >>> unmatching pair of par and value: f(par) != value. And it
>     >>> is easily fixed.
>
>     >>> Andreas
>
>     Duncan> I agree with you that on return f(par) should be
>     Duncan> value. I agree with Brian that changes to the
>     Duncan> underlying strategy need much more thought.
>
> I agree (to both).
> However, isn't Andreas' patch just fixing the problem
> and not changing the underlying strategy at all?
> [No, I did not study the code in very much detail ...]

The (minor) issue is that x is updated but not f(x). I think the intended strategy was to update neither, so Andreas' patch was a change of strategy. In particular, a question is whether this should be marked as a convergence failure. But people really need to read the reference before commenting, and I at least need to find the time to do so in more detail.

> Martin Maechler
>
> >>> Prof Brian Ripley wrote:
> >>>
> >>>> [Sorry for the belated reply: this came in just as I was leaving for a
> >>>> trip.]
> >>>>
> >>>> I've checked the original source, and the C code in optim does
> >>>> accurately reflect the published algorithm.
> >>>>
> >>>> Since your example is a discontinuous function, I don't see why you
> >>>> expect CG to work on it. John Nash reports on his extensive
> >>>> experience that method 3 is the worst, and I don't think we should let
> >>>> a single 2D example of a badly-behaved function override that.
> >>>>
> >>>> Note that no other optim method copes with the discontinuity here: had
> >>>> you reported that, it would have been clear that the problem was with
> >>>> the example.
> >>>>
> >>>> On Fri, 21 Apr 2006, [EMAIL PROTECTED] wrote:
> >>>>
> > Dear R team,
> >
> > when using optim with method "CG" I got the wrong $value for the
> > reported $par.
> >
> > Example:
> >     f <- function(p) {
> >         if (!all(p > -.7)) return(2)
> >         if (!all(p < .7)) return(2)
> >         sin((p[1])^2) * sin(p[2])
> >     }
> >     optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=1))
> >     $par 19280.68 -10622.32
> >     $value -0.2346207        # should be 2!
> >
> >     optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=2))
> >     $par 3834.021 -2718.958
> >     $value -0.0009983175     # should be 2!
> >
> > Fix:
> >     --- optim.c     (Revision 37878)
> >     +++ optim.c     (Arbeitskopie)
> >     @@ -970,7 +970,8 @@
> >                  if (!accpoint) {
> >                      steplength *= stepredn;
> >                      if (trace) Rprintf("*");
> >     -            }
> >     +            } else
> >     +                *Fmin = f;
> >              }
> >          } while (!(count == n || accpoint));
> >          if (count < n) {
> >
> > After fix:
> >     optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=1))
> >     $par 0.6993467 -0.4900145
> >     $value -0.2211150
> >     optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=2))
> >     $par 3834.021 -2718.958
> >     $value 2
> >
> > Wishlist:
> >>>>
> >>> [wishlist deleted]

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
Re: [Rd] protect/unprotect howto in C code
On Wed, 17 May 2006, Michael Dondrup wrote:

> Thank you very much, Thomas!
>
> Thanks to the explanation, I think I could almost track down that bug. May I,
> just for clarification, ask a further bunch of questions (sorry). From what
> you say, did I get it right:
>
> - 'error in unprotect: stack imbalance' is only a warning, it will not cause
> termination, unless R is running as an embedded process (I'm working with
> the RSPerl package in perl here)?

Correct.

> - Forgetting to unprotect a value is harmless, and will only provoke these
> warnings?

Well, it causes a memory leak, and in the unlikely event that you have finalizers set on the objects, the finalizers won't run. Otherwise, yes.

> - If the protect/unprotect is unbalanced within a function call, R will give
> the warning/error already at the exit of this specific function?

Not quite. The warning comes on return from .Call(). If the function you .Call calls other C functions, you will still only get the warning on return to R.

> - If that is the case, what if I want to return a pointer to a value from a
> function? Do I have to unprotect it anyway, before?

Rule 1 applies to the code that calls your function, too. If you return (a pointer to) an object that, from the point of view of the calling function, is newly created, the calling function has to PROTECT it. In particular, the return value of .Call will be protected if you store it in a variable.

-thomas
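To make the last point concrete, here is a minimal sketch (the entry point name and its contents are made up for illustration) of a .Call() routine that returns a freshly allocated object; the PROTECT/UNPROTECT pair inside the function is balanced, and protecting the returned value is then the caller's job:

    #include <Rinternals.h>

    /* Allocate an integer vector 1..n and return it.  UNPROTECT before
       returning keeps the stack balanced; nothing allocates between the
       UNPROTECT and the return, so the value cannot be collected here. */
    SEXP make_result(SEXP n)
    {
        int len = asInteger(n), i;
        SEXP ans;

        PROTECT(ans = allocVector(INTSXP, len));
        for (i = 0; i < len; i++)
            INTEGER(ans)[i] = i + 1;

        UNPROTECT(1);
        return ans;
    }

On the R side, assigning the result of the corresponding .Call() to a variable is what keeps it protected afterwards, as described above.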
Re: [Rd] R CMD SHLIB
On Wed, 17 May 2006, Bill Dunlap wrote:

> On Wed, 17 May 2006, Martin Maechler wrote:
>
> > > "TL" == Thomas Lumley <[EMAIL PROTECTED]>
> > >     on Tue, 16 May 2006 10:15:11 -0700 (PDT) writes:
> >
> >     TL> On Tue, 16 May 2006, Prof Brian Ripley wrote:
> >     >> It is possible to do things like
> >     >>
> >     >>     env PKG_LIB="-L/opt/foo/lib -lbar" R CMD SHLIB *.c
> >     >>
> >     >> to add libraries to the creation of a shared object, but
> >     >> I have from time to time wondered if we should allow
> >     >>
> >     >>     R CMD SHLIB *.c -L/opt/foo/lib -lbar
> >     >>
> >     >> not least as users seem to expect it to work. It looks
> >     >> simple to do (at least under Unix) if we pass -L* -l* *.a
> >     >> directly to the link command.
> >     >>
> >     >> Would this be worthwhile?
> >
> >     TL> Yes.
> >
> >     TL> My only reservation is that users may then expect all
> >     TL> compiler/linker flags to work, not just -L/-l
> >
> > I had exactly the same thought.
> >
> > Maybe Brian's proposal can be extended into
> >
> >     "all switches that are not recognized by 'R CMD SHLIB' are
> >      passed to compiler / linker"
> >
> > hmm, or maybe not, since the question quickly becomes *which* are
> > passed to the compiler and which to the linker (and which to both?) ...
>
> I'd rather have SHLIB complain if it sees a -flag
> that SHLIB doesn't recognize. Otherwise we get
> portability problems. E.g., when using the Microsoft
> C compiler and linker, a SHLIB that knows about the
> -l and -L flags can translate
>     -lfoo -L/dir/subdir
> into LDFLAGS that link.exe knows about:
>     foo.lib /libpath:\dir\subdir
>
> If you need other linker flags, could they be in compiler/platform-
> specific Makevars-? I think other ones are not very common.

In R, we are not dealing with such compilers/linkers (fortunately). Given that there are plenty of ways to hang oneself here, I do not want to be unduly restrictive as to what one can pass. (All the autoconf etc mechanisms assume flags like -L and -l, and also that some reordering is done by the frontends.)

> Bill Dunlap
> Insightful Corporation
> bill at insightful dot com
> 360-428-8146
>
> "All statements in this message represent the opinions of the author and do
> not necessarily reflect Insightful Corporation policy or position."

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
Re: [Rd] optim "CG" bug w/patch proposal (PR#8786)
On 5/17/2006 11:07 AM, Martin Maechler wrote:

> >> "Duncan" == Duncan Murdoch <[EMAIL PROTECTED]>
> >>     on Tue, 16 May 2006 08:34:06 -0400 writes:
>
>     Duncan> On 5/16/2006 4:56 AM, [EMAIL PROTECTED]
>     Duncan> wrote:
>     >> Probably I included too much at once in my bug report. I
>     >> can live with an unfulfilled wishlist and thank you for
>     >> thinking about it. The "badly-behaved" function is just
>     >> an example to demonstrate the bug I reported. I think it
>     >> is a bug if optim returns (without any warning) an
>     >> unmatching pair of par and value: f(par) != value. And it
>     >> is easily fixed.
>
>     >> Andreas
>
>     Duncan> I agree with you that on return f(par) should be
>     Duncan> value. I agree with Brian that changes to the
>     Duncan> underlying strategy need much more thought.
>
> I agree (to both).
> However, isn't Andreas' patch just fixing the problem
> and not changing the underlying strategy at all?
> [No, I did not study the code in very much detail ...]

Brian and I only quoted part of his message. The patch we quoted isn't bad, but I'm not sure it's the best: in particular, with the patch optim() returns a function value that is larger than f at the starting value (see below). I think this means it would be better to change optim$par rather than changing optim$value to achieve consistency, but a quick look at optim.c made me think it would take more time than I had to do this without messing up something else.

I don't think I'll have a chance to look at this before 2.3.1, so if nobody else takes it on, I'd prefer to leave this as an unresolved bug report for now.

> Martin Maechler
>
>     >> Prof Brian Ripley wrote:
>     >>
>     >>> [Sorry for the belated reply: this came in just as I was leaving for a
>     >>> trip.]
>     >>>
>     >>> I've checked the original source, and the C code in optim does
>     >>> accurately reflect the published algorithm.
>     >>>
>     >>> Since your example is a discontinuous function, I don't see why you
>     >>> expect CG to work on it. John Nash reports on his extensive
>     >>> experience that method 3 is the worst, and I don't think we should let
>     >>> a single 2D example of a badly-behaved function override that.
>     >>>
>     >>> Note that no other optim method copes with the discontinuity here: had
>     >>> you reported that, it would have been clear that the problem was with
>     >>> the example.
>     >>>
>     >>> On Fri, 21 Apr 2006, [EMAIL PROTECTED] wrote:
>     >>>
> Dear R team,
>
> when using optim with method "CG" I got the wrong $value for the
> reported $par.
>
> Example:
>     f <- function(p) {
>         if (!all(p > -.7)) return(2)
>         if (!all(p < .7)) return(2)
>         sin((p[1])^2) * sin(p[2])
>     }
>     optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=1))
>     $par 19280.68 -10622.32
>     $value -0.2346207        # should be 2!
>     optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=2))
>     $par 3834.021 -2718.958
>     $value -0.0009983175     # should be 2!

I think this is f(0.1, -0.1), so really $par should be 0.1, -0.1 in this case. In the one above, it appears to have made a little progress before it went off track, but -0.234 is better than 2, so it should be returned if it's really an f(p) value.

Duncan Murdoch

>     Fix:
>     --- optim.c     (Revision 37878)
>     +++ optim.c     (Arbeitskopie)
>     @@ -970,7 +970,8 @@
>                  if (!accpoint) {
>                      steplength *= stepredn;
>                      if (trace) Rprintf("*");
>     -            }
>     +            } else
>     +                *Fmin = f;
>              }
>          } while (!(count == n || accpoint));
>          if (count < n) {
>
>     After fix:
>     optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=1))
>     $par 0.6993467 -0.4900145
>     $value -0.2211150
>     optim(c(0.1, -0.1), f, method="CG", control=list(trace=0, type=2))
>     $par 3834.021 -2718.958
>     $value 2
>
>     Wishlist:
>     >>>
>     >> [wishlist deleted]
[Rd] Non-ASCII chars in R code
The report on R-help about problems loading package irr (in a UTF-8 locale, it seemed) prompted me to look a little deeper. There are quite a few packages with Latin-1 chars in their .R files, and a couple in UTF-8.

Apart from non-ASCII chars in comments, this is a problem as the code concerned cannot be represented in some locales R runs in (for example Japanese on Windows). It happens that irr is so small that lazy-loading is not used, but when lazy-loading or a saved image is used, the locale in use when the package is installed determines how the code is parsed (and may not be the same as when the package is used; indeed it is not uncommon on Linux/Unix systems for different users to use different locales). This means that using non-ASCII chars is not portable, and I've added code to R CMD check in R-devel to warn about such usage.

In the examples I have investigated the usages have been

- messages in a non-English language, typically French.
- startup messages with people's names.
- use of characters that I can only guess are intended to be in the WinAnsi encoding, e.g. a copyright symbol.

The only reason I have not made this an error is that people might want to produce packages for a known locale, e.g. a student class, but perhaps it should be an error for packages submitted to CRAN. I do not believe there is much we can do about this: messages which are not entirely in ASCII cannot be displayed on many R platforms, and it seems incorrect to allow French messages and not Japanese ones.

The packages currently throwing warnings are

    FactoMineR FunCluster JointGLM LoopAnalyst Sciviews ade4 adehabitat
    ape climatol crossdes deal grasper irr lsa mvrpart pastecs sn
    surveillance truncgof

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
[Rd] prcomp: problem with zeros? (PR#8870)
Full_Name: Juha Heljoranta
Version: R 2.1.1 (2005-06-20)
OS: Gentoo Linux
Submission from: (NULL) (88.112.29.250)

prcomp has a bug which causes the following error

    Error in svd(x, nu = 0) : infinite or missing values in 'x'

on a valid data set (no Infs, no missing values). The error is most likely caused by the zeros in the data.

My code and temporary workaround:

    m = matrix(...
    ...
    prcomp(m, center = TRUE, scale = TRUE)
    Error in svd(x, nu = 0) : infinite or missing values in 'x'

    m = matrix(...
    ...
    # ugly work around
    m = m + 1e-120
    # too small values will not work
    # m = m + 1e-150
    prcomp(m, center = TRUE, scale = TRUE)
    # success

The matrix in question is ~1024x13000 containing double values, thus totaling ~103M of raw data. I can put it online if needed.
Re: [Rd] prcomp: problem with zeros? (PR#8870)
On Wed, 17 May 2006, [EMAIL PROTECTED] wrote:

> Full_Name: Juha Heljoranta
> Version: R 2.1.1 (2005-06-20)

Not a current version of R.

> OS: Gentoo Linux
> Submission from: (NULL) (88.112.29.250)
>
> prcomp has a bug which causes the following error
>
>     Error in svd(x, nu = 0) : infinite or missing values in 'x'
>
> on a valid data set (no Infs, no missing values). The error is most likely
> caused by the zeros in the data.

Why do you say that? Without a reproducible example, we cannot judge what is going on. If you called prcomp with scale=TRUE on a matrix that has a completely zero (or constant) column, then this is a reasonable error message.

> My code and temporary workaround:
>
>     m = matrix(...
>     ...
>     prcomp(m, center = TRUE, scale = TRUE)
>     Error in svd(x, nu = 0) : infinite or missing values in 'x'
>
>     m = matrix(...
>     ...
>     # ugly work around
>     m = m + 1e-120
>     # too small values will not work
>     # m = m + 1e-150
>     prcomp(m, center = TRUE, scale = TRUE)
>     # success
>
> The matrix in question is ~1024x13000 containing double values, thus totaling
> ~103M of raw data. I can put it online if needed.

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
Re: [Rd] prcomp: problem with zeros? (PR#8870)
On 17 May 2006, at 22:02, [EMAIL PROTECTED] wrote:

> On Wed, 17 May 2006, [EMAIL PROTECTED] wrote:
> >
> > prcomp has a bug which causes the following error
> >
> >     Error in svd(x, nu = 0) : infinite or missing values in 'x'
> >
> > on a valid data set (no Infs, no missing values). The error is most
> > likely caused by the zeros in the data.
>
> Why do you say that? Without a reproducible example, we cannot judge
> what is going on. If you called prcomp with scale=TRUE on a matrix that
> has a completely zero (or constant) column, then this is a reasonable
> error message.

Constant columns (which is a likely reason here) indeed become NaN after scale(), but the error message was:

    Error in svd(x, nu = 0) : infinite or missing values in 'x'

and calling this 'reasonable' is stretching the limits of reason. However, in general this is "easy" to solve: scale() before the analysis and replace NaN with 0 (prcomp handles zeros). For instance,

    x <- scale(x)
    x[is.nan(x)] <- 0
    prcomp(x)

(and a friendly prcomp() would do this internally.)

cheers, jari oksanen
-- 
Jari Oksanen, Oulu, Finland
[Rd] Documentation for taper in spec.taper (PR#8871)
Full_Name: Michael Stein
Version: Version 2.1.1
OS: linux
Submission from: (NULL) (128.135.149.112)

The documentation for spec.taper says

    p: The total proportion to be tapered, either a scalar or a
       vector of the length of the number of series.

    Details:

    The cosine-bell taper is applied to the first and last 'p[i]/2'
    observations of time series 'x[, i]'.

However, the program actually applies the taper to the first and last p[i] observations, so 2 * p is the total proportion to be tapered.

The documentation for spec.pgram says

    taper: proportion of data to taper. A split cosine bell taper is
           applied to this proportion of the data at the beginning and
           end of the series.

The second statement is correct, but this means that the proportion of the data that is tapered is 2 * taper, not taper. This documentation could easily lead to users getting twice as much tapering as they think they are getting.
[Rd] Convention difference in tseries.maxdrawdown (PR#8872)
Full_Name: Brian K. Boonstra
Version: 2.2.1
OS: WinXP, OSX
Submission from: (NULL) (63.172.178.137)

The maxdrawdown function in tseries defines the maximum drawdown in terms of absolute dollars (or whatever units the input is in). Industry convention is to do this in percentage terms. I have written the code below as maximumdrawdown(), which retains backward compatibility with the current version. It has the flaw that it does not check for zero or negative values.

    maximumdrawdown <- function (x)
    {
        if (NCOL(x) > 1)
            stop("x is not a vector or univariate time series")
        if (any(is.na(x)))
            stop("NAs in x")
        cminx <- x/cummax(x)
        mdd <- min(cminx)
        to <- which(mdd == cminx)
        from <- double(NROW(to))
        for (i in 1:NROW(to)) {
            from[i] <- max(which(cminx[1:to[i]] == 1))
        }
        return(list(maximumdrawdown = 1 - mdd,
                    maxdrawdown = (1 - mdd) * x[from],
                    from = from, to = to))
    }
Re: [Rd] Convention difference in tseries.maxdrawdown (PR#8872)
Regarding the upwardly compatible comment, the dollar drawdown that corresponds to the maximum fractional drawdown is not necessarily the maximum dollar drawdown. For example, in this situation the maximum fractional drawdown is from 100 to 75 but the maximum dollar drawdown is from 200 to 160.

    > x <- c(1, 100, 75, 200, 160)
    > maximumdrawdown(x)   # function defined in post
    $maximumdrawdown
    [1] 0.25
    $maxdrawdown
    [1] 25
    $from
    [1] 2
    $to
    [1] 3

    > maxdrawdown(x)   # function from tseries
    $maxdrawdown
    [1] 40
    $from
    [1] 4
    $to
    [1] 5

On 5/17/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Full_Name: Brian K. Boonstra
> Version: 2.2.1
> OS: WinXP, OSX
> Submission from: (NULL) (63.172.178.137)
>
> The maxdrawdown function in tseries defines the maximum drawdown in terms of
> absolute dollars (or whatever units the input is in). Industry convention is
> to do this in percentage terms. I have written the code below as
> maximumdrawdown(), which retains backward compatibility with the current
> version. It has the flaw that it does not check for zero or negative values.
>
>     maximumdrawdown <- function (x)
>     {
>         if (NCOL(x) > 1)
>             stop("x is not a vector or univariate time series")
>         if (any(is.na(x)))
>             stop("NAs in x")
>         cminx <- x/cummax(x)
>         mdd <- min(cminx)
>         to <- which(mdd == cminx)
>         from <- double(NROW(to))
>         for (i in 1:NROW(to)) {
>             from[i] <- max(which(cminx[1:to[i]] == 1))
>         }
>         return(list(maximumdrawdown = 1 - mdd,
>                     maxdrawdown = (1 - mdd) * x[from],
>                     from = from, to = to))
>     }
[Rd] 'R CMD config' doesn't work on Windows
Hi everybody,

I'd like to report this problem I have on Windows with R 2.2.1, R 2.3.0, R-2.3.0 patched (r38086) and R 2.4.0 devel (r37925):

    D:\hpages>R\bin\R CMD config CC
    Can't open perl script "D:\hpages\R-2.3.1/bin/config": No such file or directory

Best,
H.

-- 
Hervé Pagès
E-mail: [EMAIL PROTECTED]
Phone:  (206) 667-5791
Fax:    (206) 667-1319
[Rd] 'R CMD ' doesn't work on Windows
Hi,

Something else I'd like to report. If a segmentation fault occurs during the "creating vignettes" step, then 'R CMD build' ignores the problem and ends up building the source package anyway:

    /loc/biocbuild/1.9d/R/bin/R CMD build RMAGEML
    * checking for file 'RMAGEML/DESCRIPTION' ... OK
    * preparing 'RMAGEML':
    ...
    * DONE (RMAGEML)
    * creating vignettes ...sh: line 1:  8070 Segmentation fault
      '/loc/biocbuild/1.9d/R/bin/R' --vanilla --no-save --quiet /tmp/Rout656233925 2>&1
     OK
    * cleaning src
    * removing junk files
    * checking for LF line-endings in source files
    * checking for empty or unneeded directories
    * building 'RMAGEML_2.7.0.tar.gz'

'R CMD check' behaves the same way during the "checking package vignettes" step.

I have observed this problem with R 2.3.0 and R 2.4.0 devel (r37925).

Best,
H.

-- 
Hervé Pagès
E-mail: [EMAIL PROTECTED]
Phone:  (206) 667-5791
Fax:    (206) 667-1319
[Rd] 'R CMD build' ignoring segfaults occuring during the vignettes creation
Sorry for the erroneous subject of my previous post. Here it goes again.

Hi,

Something else I'd like to report. If a segmentation fault occurs during the "creating vignettes" step, then 'R CMD build' ignores the problem and ends up building the source package anyway:

    /loc/biocbuild/1.9d/R/bin/R CMD build RMAGEML
    * checking for file 'RMAGEML/DESCRIPTION' ... OK
    * preparing 'RMAGEML':
    ...
    * DONE (RMAGEML)
    * creating vignettes ...sh: line 1:  8070 Segmentation fault
      '/loc/biocbuild/1.9d/R/bin/R' --vanilla --no-save --quiet /tmp/Rout656233925 2>&1
     OK
    * cleaning src
    * removing junk files
    * checking for LF line-endings in source files
    * checking for empty or unneeded directories
    * building 'RMAGEML_2.7.0.tar.gz'

'R CMD check' behaves the same way during the "checking package vignettes" step.

I have observed this problem with R 2.3.0 and R 2.4.0 devel (r37925).

Best,
H.

-- 
Hervé Pagès
E-mail: [EMAIL PROTECTED]
Phone:  (206) 667-5791
Fax:    (206) 667-1319
Re: [Rd] 'R CMD config' doesn't work on Windows
Herve Pages <[EMAIL PROTECTED]> writes:

> I'd like to report this problem I have on Windows with R 2.2.1, R 2.3.0,
> R-2.3.0 patched (r38086) and R 2.4.0 devel (r37925):
>
>     D:\hpages>R\bin\R CMD config CC
>     Can't open perl script "D:\hpages\R-2.3.1/bin/config": No such file
>     or directory

A bit of background...

For Bioconductor's automated builds, we are trying to provide developers with more information about the build platforms upon which their packages are tested. For example, see here:

    http://www.bioconductor.org/checkResults/1.9/gopher5-NodeInfo.html

So R CMD config provides a lot of useful details, but it doesn't seem to work on Windows for us. It would be very nice to be able to provide the same level of detail about our Windows build system.

Cheers,
+ seth
[Rd] install.packages bug (PR#8873)
Hello,

I've been using R for about 3 years now and I'm pretty sure this is a bug. I'm using R 2.2.0.

The way R is set up to get packages from CRAN using install.packages is really convenient --- if you are installing to your system's main package directory. However, I observe the following problem:

I want package X but it requires package Y. Further, I have neither package right now. And, I want to install both of them to my homedir (say at ~/mylib) rather than the main R package directory. What I would want to do then is:

    > install.packages('X', lib='~/mylib', dependencies=TRUE)

However, this doesn't work. It does notice that X depends on Y and so it downloads Y first, but it downloads Y to the wrong directory!! It should download both to ~/mylib in my opinion!!!

Sincerely,
Toby Dylan Hocking
http://www.ocf.berkeley.edu/~tdhock
[Rd] ?hist and $density explanation
Hi, people.

Within ?hist (using R 2.3.0), one reads:

    density: values f^(x[i]), as estimated density values. If
             'all(diff(breaks) == 1)', they are the relative frequencies
             'counts/n' and in general satisfy
             sum[i; f^(x[i]) (b[i+1]-b[i])] = 1, where b[i] = 'breaks[i]'.

I trip on this explanation each time I read it. Some R guardians will be tempted to say that since R itself does not trip, I am necessarily the problem :-). But yet, notwithstanding and nevertheless, maybe these few lines of documentation could be improved.

The "f^(x[i])" bit is somewhat cryptic and not explained. It suggests that there are as many densities as possible "i" values, and since "i" indexes "x", it indirectly suggests that length(density) == length(x), which cannot be right. The "sum[i; ...]" has to be taken up to the number of cells, not the number of "x" values. Because "x[i]" is a bit meaningless in the above context, it had better be avoided.

The "^" may mean that "x[i]" is an index of "f", some kind of TeX device for shifting the notation. It may also mean "hat", to suggest the density is an approximation. But an approximation of what? Of course, I understand the untold model by which "density" estimates the density of some continuous distribution out of which the "x" values were sampled, before the "hist()" function was called. But "x" is not necessarily a sample of a continuum, it may well be the population, and the densities in the histogram may well be exact, and not an approximation. So it might be simpler to drop the "^" as well.

The concept of relative frequency is explained in the case of equal-width cells only, and not otherwise. This concept is not reused elsewhere in "?hist", so it is not that useful; we could use "d" instead of "f". Finally, writing "breaks[i+1]-breaks[i]" is simpler and clearer than introducing an intermediate "b[i]" device. Let's drop it.

Let me suggest a simpler rewriting of these few lines, using humbler notation while being more precise. Let's start with something like:

    density: For each cell i, density[i] is the proportion of all x[]
             which get sorted into that cell, divided by the cell
             width. So, the value of 'sum(density * diff(breaks))'
             is 1.

and improve on it.

-- 
François Pinard   http://pinard.progiciels-bpi.ca
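For concreteness, the suggested wording can also be written out as formulas (this is only a restatement of the proposal, assuming all values of x fall within range(breaks), with n = length(x)):

    density[i] = counts[i] / (n * (breaks[i+1] - breaks[i]))

so that

    sum over i of density[i] * (breaks[i+1] - breaks[i]) = 1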
[Rd] Noncentral dt() with tiny 'x' values (PR#8874)
Full_Name: Mike Meredith
Version: 2.3.0
OS: WinXP SP2
Submission from: (NULL) (210.195.228.29)

Using dt() with a non-centrality parameter and near-zero values for 'x' results in erratic output. Try this:

    tst <- c(1e-12, 1e-13, 1e-14, 1e-15, 1e-16, 1e-17, 0)
    dt(tst, 16, 1)

I get:

    0.2381019 0.2385462 0.2296557 0.1851817 0.6288373 3.8163916 (!!) 0.2382217

The 0.238 values are okay, the others nonsense, and they cause confusing spikes on plots of dt() vs 'x' if 'x' happens to include tiny values. (Other values of df and ncp also malfunction, but not all give results out by an order of magnitude!)

I'm using the work-around dt(round(x, 10), ...), but dt() should really take care of this itself.

Regards, Mike.
Re: [Rd] 'R CMD config' doesn't work on Windows
On 5/17/2006 5:51 PM, Seth Falcon wrote:

> Herve Pages <[EMAIL PROTECTED]> writes:
> > I'd like to report this problem I have on Windows with R 2.2.1, R 2.3.0,
> > R-2.3.0 patched (r38086) and R 2.4.0 devel (r37925):
> >
> >     D:\hpages>R\bin\R CMD config CC
> >     Can't open perl script "D:\hpages\R-2.3.1/bin/config": No such file
> >     or directory
>
> A bit of background...
>
> For Bioconductor's automated builds, we are trying to provide
> developers with more information about the build platforms upon which
> their packages are tested. For example, see here:
>
>     http://www.bioconductor.org/checkResults/1.9/gopher5-NodeInfo.html
>
> So R CMD config provides a lot of useful details, but it doesn't
> seem to work on Windows for us. It would be very nice to be able to
> provide the same level of detail about our Windows build system.

What is "config"? On Windows, it's assumed to be a Perl script stored in RHOME/bin and, as the message says, there's no such thing. If it's a .exe or .bat file, you need to specify the extension for R CMD to find it. This is described in the Intro to R manual, in "invoking R from the command line".

Duncan Murdoch
Re: [Rd] install.packages bug (PR#8873)
On 5/17/2006 5:45 PM, [EMAIL PROTECTED] wrote:

> Hello,
>
> I've been using R for about 3 years now and I'm pretty sure this is a bug.
> I'm using R 2.2.0.

The latest release is 2.3.0 and 2.3.1 has been announced. Please try R-patched (which will become 2.3.1 in a couple of weeks), and see if you still have this problem.

Duncan Murdoch

> The way R is set up to get packages from CRAN using install.packages is
> really convenient --- if you are installing to your system's main package
> directory. However, I observe the following problem:
>
> I want package X but it requires package Y. Further, I have neither
> package right now. And, I want to install both of them to my homedir (say
> at ~/mylib) rather than the main R package directory. What I would want to
> do then is:
>
>     > install.packages('X', lib='~/mylib', dependencies=TRUE)
>
> However, this doesn't work. It does notice that X depends on Y and so it
> downloads Y first, but it downloads Y to the wrong directory!! It should
> download both to ~/mylib in my opinion!!!
>
> Sincerely,
> Toby Dylan Hocking
> http://www.ocf.berkeley.edu/~tdhock
Re: [Rd] 'R CMD config' doesn't work on Windows
Duncan Murdoch <[EMAIL PROTECTED]> writes:

> What is "config"?

    Usage: R CMD config [options] [VAR]

    Get the value of a basic R configure variable VAR which must be
    among those listed in the 'Variables' section below, or the header
    and library flags necessary for linking against R.

> On Windows, it's assumed to be a Perl script stored in RHOME/bin and
> as the message says, there's no such thing.

I guess this is a feature request then :-(

Any Perl experts keen on translating the shell script that does R CMD config? I assume that is more or less what would be required.
Re: [Rd] prcomp: problem with zeros? (PR#8870)
Prof Brian Ripley wrote:

> >     Error in svd(x, nu = 0) : infinite or missing values in 'x'
>
> Why do you say that? Without a reproducible example, we cannot judge
> what is going on. If you called prcomp with scale=TRUE on a matrix that
> has a completely zero (or constant) column, then this is a reasonable
> error message.

My bad, the matrix actually has a zero column. Thank you for your time, and sorry for any inconvenience this may have caused.

Regards,
Juha Heljoranta
Re: [Rd] 'R CMD config' doesn't work on Windows
On Wed, 17 May 2006, Seth Falcon wrote:

> Duncan Murdoch <[EMAIL PROTECTED]> writes:
> > What is "config"?
>
>     Usage: R CMD config [options] [VAR]
>
>     Get the value of a basic R configure variable VAR which must be
>     among those listed in the 'Variables' section below, or the header
>     and library flags necessary for linking against R.
>
> > On Windows, it's assumed to be a Perl script stored in RHOME/bin and
> > as the message says, there's no such thing.
>
> I guess this is a feature request then :-(
> Any Perl experts keen on translating the shell script that does R CMD
> config? I assume that is more or less what would be required.

You also need to collect the information. Much of it is only relevant to building R on a Unix-alike: as the comment says

    ##
    ## The variables are basically the precious configure variables (with
    ## the R_* and MAIN_* ones removed), plus FLIBS and BLAS_LIBS.

and they are extracted from ${R_HOME}/etc${R_ARCH}/Makeconf, which does not exist on Windows.

I think it is up to those who feel this is needed to contribute and maintain the code.

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
Re: [Rd] Documentation for taper in spec.taper (PR#8871)
Thanks, reworded in R-patched (and later).

On Wed, 17 May 2006, [EMAIL PROTECTED] wrote:

> Full_Name: Michael Stein
> Version: Version 2.1.1
> OS: linux
> Submission from: (NULL) (128.135.149.112)
>
> The documentation for spec.taper says
>
>     p: The total proportion to be tapered, either a scalar or a
>        vector of the length of the number of series.
>
>     Details:
>
>     The cosine-bell taper is applied to the first and last 'p[i]/2'
>     observations of time series 'x[, i]'.
>
> However, the program actually applies the taper to the first and last p[i]
> observations, so 2 * p is the total proportion to be tapered.
>
> The documentation for spec.pgram says
>
>     taper: proportion of data to taper. A split cosine bell taper is
>            applied to this proportion of the data at the beginning and
>            end of the series.
>
> The second statement is correct, but this means that the proportion of the
> data that is tapered is 2 * taper, not taper. This documentation could
> easily lead to users getting twice as much tapering as they think they are
> getting.

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595