[Rd] Compile R to WebAssembly / Emscripten?

2019-02-20 Thread Todd Wilder
Has anyone attempted to compile R (probably without any OS bindings) to
WebAssembly / Emscripten? If so, how far did you get? (It would be crazy
awesome if you could get all the way to a ggplot bitmap output.) If not, is
this a waste of time, or is there some daylight to doing this?



Re: [Rd] Compile R to WebAssembly / Emscripten?

2019-02-20 Thread Gábor Csárdi
This was some time ago:
https://stat.ethz.ch/pipermail/r-devel/2013-May/066724.html

So probably not hopeless, but I would think it is a lot of work.

Gabor



Re: [Rd] code for sum function

2019-02-20 Thread Berend Hasselman
> 
> On 2019-02-19 2:08 p.m., William Dunlap via R-devel wrote:
>> The algorithm does make a difference.  You can use Kahan's summation
>> algorithm (https://en.wikipedia.org/wiki/Kahan_summation_algorithm) to
>> reduce the error compared to the naive summation algorithm.  E.g., in R
>> code:
>> 
>> naiveSum <-
>> function(x) {
>>   s <- 0.0
>>   for(xi in x) s <- s + xi
>>   s
>> }
>> kahanSum <- function (x)
>> {
>>   s <- 0.0
>>   c <- 0.0 # running compensation for lost low-order bits
>>   for(xi in x) {
>>  y <- xi - c
>>  t <- s + y # low-order bits of y may be lost here
>>  c <- (t - s) - y
>>  s <- t
>>   }
>>   s
>> }
>> 
>>> rSum <- vapply(c(1:20,10^(2:7)), function(n) sum(rep(1/7,n)), 0)
>>> rNaiveSum <- vapply(c(1:20,10^(2:7)), function(n) naiveSum(rep(1/7,n)), 0)
>>> rKahanSum <- vapply(c(1:20,10^(2:7)), function(n) kahanSum(rep(1/7,n)), 0)
>>> 
>>> table(rSum == rNaiveSum)
>> 
>> FALSE  TRUE
>>    21     5
>>> table(rSum == rKahanSum)
>> 
>> FALSE  TRUE
>>     3    23


If you use the vector c(1, 10^100, 1, -10^100) as input, then sum, naiveSum
and kahanSum all give an incorrect answer: they return 0 instead of 2.
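
A quick check, using the naiveSum and kahanSum definitions quoted above:

x <- c(1, 10^100, 1, -10^100)
sum(x)       # 0: the 1s are swallowed by the huge intermediate sum
naiveSum(x)  # 0
kahanSum(x)  # 0: the compensation term is lost in the same way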

From the wikipedia page we can try the pseudocode given for the modification
by Neumaier. My R version (with a small correction to avoid cancellation) is

neumaierSum <- function(x)
{
  s <- 0.0
  z <- 0.0 # running compensation for lost low-order bits
  for (xi in x) {
    t <- s + xi
    if (abs(s) >= abs(xi)) {
      b <- (s - t) + xi # intermediate step needed in R, otherwise cancellation
      z <- z + b        # if the sum is bigger, low-order digits of xi are lost
    } else {
      b <- (xi - t) + s # intermediate step needed in R, otherwise cancellation
      z <- z + b        # else low-order digits of the sum are lost
    }
    s <- t
  }
  s + z # the correction is applied only once, at the very end
}

testx <- c(1, 10^100, 1, -10^100)
neumaierSum(testx)

gives 2 as the answer.

Berend Hasselman



Re: [Rd] make.unique rbind examples

2019-02-20 Thread Jonathan Carroll
Entirely by coincidence I just now discovered that this issue was raised in
2013 [1] with similar suggestions for improvements. My search did not
initially uncover this message, so apologies for the repost, which is in
effect a nearly six-year-later "bump".

[1] https://stat.ethz.ch/pipermail/r-devel/2013-May/066727.html

Regards,

- Jonathan.

On Mon, 11 Feb. 2019, 9:19 pm Jonathan Carroll wrote:

> The final two examples in ?make.unique do not appear to be relevant to
> that function, namely
>
> rbind(data.frame(x = 1), data.frame(x = 2), data.frame(x = 3))
> rbind(rbind(data.frame(x = 1), data.frame(x = 2)), data.frame(x = 3))
>
> both producing
>
>   x
> 1 1
> 2 2
> 3 3
>
> (identically) on R 3.4.3 and 3.5.1. Following a brief discussion on
> Twitter, Rich FitzJohn [1] identified that under R 1.8.0 (circa 2003,
> around the time these examples were added) the rownames for the output
> of these lines was c("1", "11", "12"). This suggests that perhaps the
> example was added to demonstrate behaviour which is no longer
> supported.
>
> A more relevant example might be
>
> data.frame(x = 1, x = 2, x = 3)
>
> producing
>
>   x x.1 x.2
> 1 1   2   3
>
> or, perhaps more in keeping with the original intention,
>
> rbind(data.frame(x = 1, row.names = "a"), data.frame(x = 2, row.names
> = "a"), data.frame(x = 3, row.names = "a"))
>
> producing output with rownames c("a", "a1", "a2").
>
> Regards,
>
> - Jonathan.
>
> [1] https://twitter.com/rgfitzjohn/status/1094885131532275712
>
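
For reference, a sketch of the full output of that last example (row names
uniquified in make.unique style):

rbind(data.frame(x = 1, row.names = "a"),
      data.frame(x = 2, row.names = "a"),
      data.frame(x = 3, row.names = "a"))
#    x
# a  1
# a1 2
# a2 3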



Re: [Rd] patch for gregexpr(perl=TRUE)

2019-02-20 Thread Tomas Kalibera
Thanks; this is now in R-devel (r76138). I confirm it speeds up gregexpr()
with PCRE in Bill Dunlap's example from
https://stat.ethz.ch/pipermail/r-devel/2017-January/073577.html
(the RegExprPCRE column).

The performance problem of StrSplitPCRE does not seem to be due to strlen().

Best
Tomas

On 2/19/19 9:37 PM, Toby Hocking wrote:

Hi all,

Several people have noticed that gregexpr is very slow for large subject
strings when perl=TRUE is specified.
-
https://stackoverflow.com/questions/31216299/r-faster-gregexpr-for-very-large-strings
-
http://r.789695.n4.nabble.com/strsplit-perl-TRUE-gregexpr-perl-TRUE-very-slow-for-long-strings-td4727902.html
- https://stat.ethz.ch/pipermail/r-help/2008-October/178451.html
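
A minimal way to observe the slowdown (a sketch, not a benchmark; timings are
machine-dependent and apply to R before the patch):

for (N in c(10000, 20000, 40000)) {
  subject <- paste(rep("A", N), collapse = "")
  print(system.time(gregexpr("A", subject, perl = TRUE)))
}
# before the patch, doubling N roughly quadruples the elapsed time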

I figured out the issue, which is fixed by changing one line of code in
src/main/grep.c: there is a strlen call currently inside the while loop over
matches, and the patch moves it before the loop.
https://github.com/tdhock/namedCapture-article/blob/master/linear-time-gregexpr-perl.patch

I made some figures that show the quadratic time complexity before applying
the patch, and the linear time complexity after applying the patch
https://github.com/tdhock/namedCapture-article#19-feb-2019

I would have posted a bug report on bugs.r-project.org but I do not have an
account. So can an R-devel person please either (1) accept this patch, or
(2) give me an account so I can post the patch on the bug tracker?

Finally I would like to mention that Bill Dunlap noticed a similar problem
(time complexity which is quadratic in subject size) for strsplit with
perl=TRUE. My patch does NOT fix that, but I suspect that a similar fix
could be accomplished (because I see that strlen is being called in a while
loop in do_strsplit as well).
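
A similar sketch for probing the strsplit case Bill Dunlap reported (again,
timings are machine-dependent):

subject <- paste(rep("A", 20000), collapse = "")
system.time(strsplit(subject, "A", perl = TRUE))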

Thanks
Toby Dylan Hocking



Re: [Rd] Documentation for sd (stats) + suggestion

2019-02-20 Thread PatrickT
Oh thanks, missed that. I expected the explanation to be near the top under
"Description." I may have scanned for the word "sample", which doesn't
appear. I could have searched harder. Apologies for the noise.

On Tue, Feb 19, 2019 at 5:59 PM S Ellison  wrote:

> > As far as I can tell, the manual help page for ``sd``
> >
> > ?sd
> >
> > does not explicitly mention that the formula for the standard deviation
> is
> > the so-called "Bessel-corrected" formula (divide by n-1 rather than n).
>
> See Details, where it says
> "Details:
>
>  Like 'var' this uses denominator n - 1.
> "
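
A one-line check of that statement, with arbitrarily chosen values:

x <- c(1, 2, 3, 4)
all.equal(sd(x)^2, sum((x - mean(x))^2) / (length(x) - 1))  # TRUE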



Re: [Rd] Documentation for sd (stats) + suggestion

2019-02-20 Thread PatrickT
Indeed. Thanks for your suggestions.

To elaborate briefly: the ``quantile`` function offers nine types of methods;
``sd`` offers only one. The ``mad`` function offers ways to tweak the bias
correction; ``sd`` doesn't.

Are there good reasons against adding features to ``sd``? After all, it must
be one of the most popular statistics out there.

Moreover, the default ``sd``, which divides by n - 1, is not well founded the
way the variance is: dividing by n - 1 makes the variance unbiased, but its
square root remains a biased estimator of the standard deviation for small
samples...

While it's easy to roll your own function, I don't think we can expect
beginners to write something like:

sdp <- function(x) sqrt(sum((x - mean(x))^2) / length(x))
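
A quick comparison of the two, assuming the sdp definition above:

x <- c(1, 2, 3, 4)
sd(x)   # divides the sum of squared deviations by n - 1
sdp(x)  # divides by n, the "population" version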


On Wed, Feb 20, 2019 at 9:00 AM Dario Strbenac 
wrote:

> Good day,
>
> It is implemented by the CRAN package multicon. The function is named
> popsd. But it does seem like something R should provide without creating a
> package dependency.
>
> --
> Dario Strbenac
> University of Sydney
> Camperdown NSW 2050
> Australia



[Rd] Bug: time complexity of substring is quadratic as string size and number of substrings increases

2019-02-20 Thread Toby Hocking
Hi all, (and especially hi to Tomas Kalibera who accepted my patch sent
yesterday)

I believe that I have found another bug, this time in the substring
function. The use case that I am concerned with is when there is a single
(character scalar) text/subject, and many substrings to extract. For example

substring("", 1:4, 1:4)

or more generally,

N <- 1000
substring(paste(rep("A", N), collapse = ""), 1:N, 1:N)

The problem I observe is that the time complexity is quadratic in N, as
shown on this figure
https://github.com/tdhock/namedCapture-article/blob/master/figure-substring-bug.png
source:
https://github.com/tdhock/namedCapture-article/blob/master/figure-substring-bug.R

I expected the time complexity to be linear in N.
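
A timing sketch that shows the scaling (machine-dependent, of course):

for (N in c(10000, 20000, 40000)) {
  subject <- paste(rep("A", N), collapse = "")
  print(system.time(substring(subject, 1:N, 1:N)))
}
# doubling N roughly quadruples the elapsed time, i.e. O(N^2) rather than O(N)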

The example above may seem contrived/trivial, but it is indeed relevant to
a number of packages (rex, rematch2, namedCapture) which provide functions
that use gregexpr and then substring to extract the text in the captured
sub-patterns. The figure
https://github.com/tdhock/namedCapture-article/blob/master/figure-trackDb-pkgs.png
shows the issue: these packages have quadratic time complexity, whereas
other packages (and the gregexpr function with perl=TRUE after applying the
patch discussed yesterday) have linear time complexity. I believe the
problem is the substring function. Source for this figure:
https://github.com/tdhock/namedCapture-article/blob/master/figure-trackDb-pkgs.R

I suspect that a fix can be accomplished by optimizing the implementation
of substring, for the special case when the text/subject is a single
element (character scalar). Right now I notice that the substring R code
uses rep_len so that the text/subject which is passed to the C code is a
character vector with the same length as the number of substrings to
extract. Maybe the C code is calling strlen for each of these (identical)
text/subject elements?

Anyway, it would be useful to have some feedback to make sure this is
indeed a bug before I post on bugzilla. (btw thanks Martin for signing me
up for an account)

Toby



Re: [Rd] Bug: time complexity of substring is quadratic as string size and number of substrings increases

2019-02-20 Thread Toby Hocking
Update: I have observed that stringi::stri_sub has linear time complexity,
and it computes the same thing as base::substring. Figure:
https://github.com/tdhock/namedCapture-article/blob/master/figure-substring-bug.png
source:
https://github.com/tdhock/namedCapture-article/blob/master/figure-substring-bug.R

To me this is a clear indication of a bug in substring, but again it would
be nice to have some feedback/confirmation before posting on bugzilla.

Also this suggests a fix -- just need to copy whatever stringi::stri_sub is
doing.
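
A quick equivalence check (stringi is a CRAN package; this just illustrates
the claim above):

library(stringi)
subject <- paste(rep("A", 10), collapse = "")
identical(substring(subject, 1:10, 1:10),
          stri_sub(subject, 1:10, 1:10))  # TRUE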






Re: [Rd] code for sum function

2019-02-20 Thread William Dunlap via R-devel
Someone said it used a possibly platform-dependent
higher-than-double-precision type.

By the way, in my example involving rep(1/3, n) I neglected to include the
most precise way to calculate the sum: n %/% 3 + (n %% 3)/3.
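
For instance, for n = 10:

n <- 10
n %/% 3 + (n %% 3)/3  # the integer part is exact; only (n %% 3)/3 is rounded
sum(rep(1/3, n))      # accumulates n rounded additions instead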

Bill Dunlap
TIBCO Software
wdunlap tibco.com


On Wed, Feb 20, 2019 at 2:45 PM Rampal Etienne 
wrote:

> Dear Will,
>
> This is exactly what I find.
> My point is thus that the sum function in R is not a naive sum nor a
> Kahansum (in all cases), but what algorithm is it using then?
>
> Cheers, Rampal
>
>
> On Tue, Feb 19, 2019, 19:08 William Dunlap wrote:
>> On Tue, Feb 19, 2019 at 10:36 AM Paul Gilbert 
>> wrote:
>>
>>> (I didn't see anyone else answer this, so ...)
>>>
>>> You can probably find the R code in src/main/ but I'm not sure. You are
>>> talking about a very simple calculation, so it seems unlikely that the
>>> algorithm is the cause of the difference. I have done much more
>>> complicated things and usually get machine precision comparisons. There
>>> are four possibilities I can think of that could cause (small)
>>> differences.
>>>
>>> 0/ Your code is wrong, but that seems unlikely for such a simple
>>> calculation.
>>>
>>> 1/ You are summing a very large number of numbers, in which case the sum
>>> can become very large compared to the numbers being added; then things can
>>> get a bit funny.
>>>
>>> 2/ You are using single precision in fortran rather than double. Double
>>> is needed for all floating point numbers you use!
>>>
>>> 3/ You have not zeroed the double precision numbers in fortran. (Some
>>> compilers do not do this automatically and you have to specify it.) Then
>>> if you accidentally put singles, like a constant 0.0 rather than a
>>> constant 0.0D+0, into a double you will have small junk in the lower
>>> precision part.
>>>
>>> (I am assuming you are talking about a sum of reals, not integer or
>>> complex.)
>>>
>>> HTH,
>>> Paul Gilbert
>>>
>>> On 2/14/19 2:08 PM, Rampal Etienne wrote:
>>> > Hello,
>>> >
>>> > I am trying to write FORTRAN code to do the same as some R code I
>>> have.
>>> > I get (small) differences when using the sum function in R. I know
>>> there
>>> > are numerical routines to improve precision, but I have not been able
>>> to
>>> > figure out what algorithm R is using. Does anyone know this? Or where
>>> > can I find the code for the sum function?
>>> >
>>> > Regards,
>>> >
>>> > Rampal Etienne
>>> >



[Rd] model.matrix.default() silently ignores bad contrasts.arg

2019-02-20 Thread Ben Bolker
An lme4 user pointed out that passing contrasts as a string or symbol to
[g]lmer (which would work if we were using `contrasts<-` to set contrasts on
a factor variable) is *silently ignored*. This goes back to model.matrix(),
and seems bad: this is a very easy mistake to make, because of the multitude
of ways to specify contrasts for factors in R, e.g. options(contrasts=...),
setting contrasts on the specific factors, or passing contrasts as a list to
the model function ...
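
For comparison, here is a sketch of the `contrasts<-` route, where a string
*does* work:

d <- data.frame(f = factor(c("a", "b", "c")))
contrasts(d$f) <- "contr.sum"  # accepted here ...
model.matrix(~ f, data = d)    # ... and reflected in the design matrix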

The relevant code is here:

https://github.com/wch/r-source/blob/trunk/src/library/stats/R/models.R#L578-L603

The following code shows the problem: a plain-vanilla model.matrix()
call with no contrasts argument, followed by two wrong contrasts
arguments, followed by a correct contrasts argument.

data(cbpp, package="lme4")
mf1 <- model.matrix(~period, data=cbpp)
mf2 <- model.matrix(~period, contrasts.arg="contr.sum", data=cbpp)
all.equal(mf1,mf2) ## TRUE
mf3 <- model.matrix(~period, contrasts.arg=contr.sum, data=cbpp)
all.equal(mf1,mf3)  ## TRUE
mf4 <- model.matrix(~period, contrasts.arg=list(period=contr.sum),
data=cbpp)
isTRUE(all.equal(mf1,mf4))  ## FALSE


  I've attached a potential patch for this, which is IMO the mildest
possible change (if contrasts.arg is non-NULL and not a list, it produces
a warning).  I haven't been able to test it because of some mysterious
issues I'm having with re-making R properly ...

  Thoughts?  Should I submit this as a bug report/patch?

  cheers
   Ben Bolker


Index: src/library/stats/R/models.R
===================================================================
--- src/library/stats/R/models.R    (revision 76140)
+++ src/library/stats/R/models.R    (working copy)
@@ -589,20 +589,23 @@
             contrasts(data[[nn]]) <- contr.funs[1 + isOF[nn]]
         ## it might be safer to have numerical contrasts:
         ##    get(contr.funs[1 + isOF[nn]])(nlevels(data[[nn]]))
-        if (!is.null(contrasts.arg) && is.list(contrasts.arg)) {
-            if (is.null(namC <- names(contrasts.arg)))
-                stop("invalid 'contrasts.arg' argument")
-            for (nn in namC) {
-                if (is.na(ni <- match(nn, namD)))
-                    warning(gettextf("variable '%s' is absent, its contrast will be ignored", nn),
-                            domain = NA)
-                else {
-                    ca <- contrasts.arg[[nn]]
-                    if(is.matrix(ca)) contrasts(data[[ni]], ncol(ca)) <- ca
-                    else contrasts(data[[ni]]) <- contrasts.arg[[nn]]
-                }
-            }
-        }
+        if (!is.null(contrasts.arg)) {
+            if (!is.list(contrasts.arg)) {
+                warning("non-list contrasts argument ignored")
+            } else {  ## contrasts.arg is a list
+                if (is.null(namC <- names(contrasts.arg)))
+                    stop("'contrasts.arg' argument must be named")
+                for (nn in namC) {
+                    if (is.na(ni <- match(nn, namD)))
+                        warning(gettextf("variable '%s' is absent, its contrast will be ignored", nn),
+                                domain = NA)
+                    else {
+                        ca <- contrasts.arg[[nn]]
+                        if(is.matrix(ca)) contrasts(data[[ni]], ncol(ca)) <- ca
+                        else contrasts(data[[ni]]) <- contrasts.arg[[nn]]
+                    }}
+            } ## contrasts.arg is a list
+        } ## non-null contrasts.arg
     } else { #  no rhs terms ('~1', or '~0'): internal model.matrix needs some variable
 	isF <- FALSE
 	data[["x"]] <- raw(nrow(data))


Re: [Rd] code for sum function

2019-02-20 Thread Tomas Kalibera
Dear Rampal,

you can download the R source code as a tarball or from Subversion,
please see
https://cran.r-project.org/doc/manuals/R-admin.html#Obtaining-R
https://cran.r-project.org/doc/manuals/R-admin.html#Using-Subversion-and-rsync

There is also web access to Subversion, so specifically sum() is
available in
https://svn.r-project.org/R/trunk/src/main/summary.c

The definition of LDOUBLE is here
https://svn.r-project.org/R/trunk/src/include/Defn.h

The index of R manuals is here
https://cran.r-project.org/manuals.html

The online documentation inside R (?sum) says:

"Loss of accuracy can occur when summing values of different signs:
this can even occur for sufficiently long integer inputs if the
partial sums would cause integer overflow. Where possible
extended-precision accumulators are used, typically well supported
with C99 and newer, but possibly platform-dependent."

Best,
Tomas


On 2/20/19 11:55 PM, Rampal Etienne wrote:
> Dear Tomas,
>
> Where do I find these files? Do they contain the code for the sum 
> function?
>
> What do you mean exactly with your point on long doubles? Where can I 
> find documentation on this?
>
> Cheers, Rampal
>
> On Mon, Feb 18, 2019, 15:38 Tomas Kalibera   wrote:
>
> See do_summary() in summary.c, rsum() for doubles. R uses long double
> type as accumulator on systems where available.
>
> Best,
> Tomas
>

