Re: [Rd] unstable corner of parameter space for qbeta?

2020-03-26 Thread Martin Maechler
> Ben Bolker 
> on Wed, 25 Mar 2020 21:09:16 -0400 writes:

> I've discovered an infelicity (I guess) in qbeta(): it's not a bug,
> since there's a clear warning about lack of convergence of the numerical
> algorithm ("full precision may not have been achieved").  I can work
> around this, but I'm curious why it happens and whether there's a better
> workaround -- it doesn't seem to be in a particularly extreme corner of
parameter space. It happens, e.g., for these parameters:

> phi <- 1.1
> i <- 0.01
> t <- 0.001
> shape1 = i/phi  ##  0.009090909
> shape2 = (1-i)/phi  ## 0.9
> qbeta(t,shape1,shape2)  ##  5.562685e-309
> ##  brute-force uniroot() version, see below
> Qbeta0(t,shape1,shape2)  ## 0.9262824

> The qbeta code is pretty scary to read: the warning "full precision
> may not have been achieved" is triggered here:

> 
https://github.com/wch/r-source/blob/f8d4d7d48051860cc695b99db9be9cf439aee743/src/nmath/qbeta.c#L530

> Any thoughts?

Well,  qbeta() is mostly based on inverting pbeta()  and pbeta()
has *several* "dangerous" corners in its parameter spaces
{in some cases, it makes sense to look at the 4 different cases
 log.p = TRUE/FALSE  //  lower.tail = TRUE/FALSE  separately ..}
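A quick way to exercise those four cases against each other (a sketch added for illustration, not code from the R sources; the shape values are arbitrary and the identities hold exactly only in infinite precision):

```r
## The four (lower.tail, log.p) variants of pbeta() are linked by these
## identities; comparing them can expose the corner where one variant
## loses accuracy relative to another.
x <- 0.3; a <- 0.009090909; b <- 0.9
p <- pbeta(x, a, b)
all.equal(pbeta(x, a, b, lower.tail = FALSE), 1 - p)      # upper tail
all.equal(pbeta(x, a, b, log.p = TRUE), log(p))           # log scale
all.equal(pbeta(x, a, b, lower.tail = FALSE, log.p = TRUE),
          log1p(-p))                                      # log upper tail
```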

pbeta() itself is based on the most complex numerical code in
all of base R, i.e., src/nmath/toms708.c, and that algorithm
(TOMS 708) was already sophisticated when it was published;
it has been improved and tweaked several times since becoming
part of R, notably for the log.p=TRUE case, which had not been
the focus of the publication and its algorithm.
[[ NB: part of this you can read when reading  help(pbeta)  to the end ! ]]

I've spent many "man weeks", or even "man months", on pbeta() and
qbeta() already, and have dreamed of getting a good student to do a
master's thesis about the problem and the potential solutions I've
looked into in the meantime.

My current gut feeling is that in some cases, new approximations
are necessary (i.e. tweaking of current approximations is not
going to help sufficiently).

Also note (in the R sources)  tests/p-qbeta-strict-tst.R,
a whole file of "regression tests" about  pbeta() and qbeta()
{where part of the true values have been computed with my CRAN
package Rmpfr (for high-precision computation) via the
Rmpfr::pbetaI() function, which gives arbitrarily precise pbeta()
values, but only when  (a,b) are integers -- that's the "I" in pbetaI()}.
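For concreteness, a minimal sketch of that use (it assumes the CRAN package 'Rmpfr' is installed; the shape and quantile values are arbitrary):

```r
library(Rmpfr)   # high-precision arithmetic; provides pbetaI()

## pbetaI() evaluates pbeta() via a finite sum, exactly to 'precBits' bits,
## which is only possible when both shape parameters are integers.
a <- 2L; b <- 7L; x <- 0.01
p.ref <- pbetaI(x, a, b, precBits = 256)  # high-precision reference value
p.dbl <- pbeta(x, a, b)                   # ordinary double-precision result
as.numeric(p.ref) - p.dbl                 # error of the double-precision code
```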

Yes, it's intriguing ... and I'll look into your special
findings a bit later today.


  > Should I report this on the bug list?

Yes, please.  Not all problems of pbeta() / qbeta() are yet part
of R's Bugzilla database, and maybe this will help to draw
more good applied mathematicians to look into it.



Martin Maechler
ETH Zurich and R Core team
(I'd call myself the "dpq-hacker" within R core -- related to
 my CRAN package 'DPQ')


> A more general illustration:
> http://www.math.mcmaster.ca/bolker/misc/qbeta.png

> ===
> fun <- function(phi,i=0.01,t=0.001, f=qbeta) {
> f(t,shape1=i/phi,shape2=(1-i)/phi, lower.tail=FALSE)
> }
> ## brute-force beta quantile function
> Qbeta0 <- function(t,shape1,shape2,lower.tail=FALSE) {
> fn <- function(x) {pbeta(x,shape1,shape2,lower.tail=lower.tail)-t}
> uniroot(fn,interval=c(0,1))$root
> }
> Qbeta <- Vectorize(Qbeta0,c("t","shape1","shape2"))
> curve(fun,from=1,to=4)
> curve(fun(x,f=Qbeta),add=TRUE,col=2)


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] unstable corner of parameter space for qbeta?

2020-03-26 Thread peter dalgaard
It's a pretty extreme case; try e.g. curve(pbeta(x, shape1, shape2), n=10001), 
and (probably -- I can't be bothered to work out the relation between beta 
shapes and F df parameters this morning...) outside what is normally 
encountered in statistical analyses. Notice, though, that you have lower=FALSE 
in your Qbeta0, so you should have it in qbeta as well:

> qbeta(t,shape1,shape2, lower=FALSE)  
[1] 0.949
Warning message:
In qbeta(t, shape1, shape2, lower = FALSE) :
  full precision may not have been achieved in 'qbeta'

which of course is still wrong. (I don't think there is a problem in the other 
tail; Qbeta0(t,, lower=TRUE) returns zero.)

You can see the effect also with

curve(qbeta(x, shape1, shape2), n=10001, from=.99, to=1)

which kind of suggests one of those regime-switching bugs, where different 
methods are used for various parts of the domain, and the cross-over is not 
done quite right. 
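That hand-over region can be located programmatically; a rough sketch (the parameter values are Ben's, and the "biggest jump" heuristic is mine):

```r
## Scan the upper tail of qbeta() for an unusually large jump between
## successive quantiles -- a signature of a switch between internal methods.
phi <- 1.1; i <- 0.01
shape1 <- i/phi; shape2 <- (1 - i)/phi
p <- seq(0.99, 1, length.out = 10001)
q <- suppressWarnings(qbeta(p, shape1, shape2))
j <- which.max(diff(q))          # index of the biggest jump
c(p = p[j], q.before = q[j], q.after = q[j + 1])
```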

At any rate, qbeta is one of R's very basic workhorses, so we do want it to 
work right; by all means file a report.

-pd

> On 26 Mar 2020, at 02:09 , Ben Bolker  wrote:
> 
> 
>  I've discovered an infelicity (I guess) in qbeta(): it's not a bug,
> since there's a clear warning about lack of convergence of the numerical
> algorithm ("full precision may not have been achieved").  I can work
> around this, but I'm curious why it happens and whether there's a better
> workaround -- it doesn't seem to be in a particularly extreme corner of
> parameter space. It happens, e.g., for these parameters:
> 
> phi <- 1.1
> i <- 0.01
> t <- 0.001
> shape1 = i/phi  ##  0.009090909
> shape2 = (1-i)/phi  ## 0.9
> qbeta(t,shape1,shape2)  ##  5.562685e-309
> ##  brute-force uniroot() version, see below
> Qbeta0(t,shape1,shape2)  ## 0.9262824
> 
>  The qbeta code is pretty scary to read: the warning "full precision
> may not have been achieved" is triggered here:
> 
> https://github.com/wch/r-source/blob/f8d4d7d48051860cc695b99db9be9cf439aee743/src/nmath/qbeta.c#L530
> 
>  Any thoughts?  Should I report this on the bug list?
> 
> 
> A more general illustration:
> http://www.math.mcmaster.ca/bolker/misc/qbeta.png
> 
> ===
> fun <- function(phi,i=0.01,t=0.001, f=qbeta) {
>  f(t,shape1=i/phi,shape2=(1-i)/phi, lower.tail=FALSE)
> }
> ## brute-force beta quantile function
> Qbeta0 <- function(t,shape1,shape2,lower.tail=FALSE) {
>  fn <- function(x) {pbeta(x,shape1,shape2,lower.tail=lower.tail)-t}
>  uniroot(fn,interval=c(0,1))$root
> }
> Qbeta <- Vectorize(Qbeta0,c("t","shape1","shape2"))
> curve(fun,from=1,to=4)
> curve(fun(x,f=Qbeta),add=TRUE,col=2)
> 

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: pd@cbs.dk  Priv: pda...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] unstable corner of parameter space for qbeta?

2020-03-26 Thread J C Nash
Given that a number of us are housebound, it might be a good time to try to
improve the approximation. It's not an area where I have much expertise, but in
looking at the qbeta.c code I see a lot of root-finding, where I do have some
background. However, I'm very reluctant to work alone on this, and will ask
interested others to email off-list. If there are others, I'll report back.

Ben: Do you have an idea of the parameter region where the approximation is poor?
I think that it would be smart to focus on that to start with.

Martin: On a separate precision matter, did you get my query early in the
year about double-length accumulation of inner products of vectors in
Rmpfr? R-help more or less implied that Rmpfr does NOT use extra length.
I've been using David Smith's FM Fortran, where DOT_PRODUCT does use
double length, but it would be nice to have that in R. My attempts to find
"easy" workarounds have not been successful, but I'll admit that other
things took precedence.

Best,

John Nash



On 2020-03-26 4:02 a.m., Martin Maechler wrote:
>> Ben Bolker 
>> on Wed, 25 Mar 2020 21:09:16 -0400 writes:
> 
> > I've discovered an infelicity (I guess) in qbeta(): it's not a bug,
> > since there's a clear warning about lack of convergence of the numerical
> > algorithm ("full precision may not have been achieved").  I can work
> > around this, but I'm curious why it happens and whether there's a better
> > workaround -- it doesn't seem to be in a particularly extreme corner of
> > parameter space. It happens, e.g., for  these parameters:
> 
> > phi <- 1.1
> > i <- 0.01
> > t <- 0.001
> > shape1 = i/phi  ##  0.009090909
> > shape2 = (1-i)/phi  ## 0.9
> > qbeta(t,shape1,shape2)  ##  5.562685e-309
> > ##  brute-force uniroot() version, see below
> > Qbeta0(t,shape1,shape2)  ## 0.9262824
> 
> > The qbeta code is pretty scary to read: the warning "full precision
> > may not have been achieved" is triggered here:
> 
> > 
> https://github.com/wch/r-source/blob/f8d4d7d48051860cc695b99db9be9cf439aee743/src/nmath/qbeta.c#L530
> 
> > Any thoughts?
> 
> Well,  qbeta() is mostly based on inverting pbeta()  and pbeta()
> has *several* "dangerous" corners in its parameter spaces
> {in some cases, it makes sense to look at the 4 different cases
>  log.p = TRUE/FALSE  //  lower.tail = TRUE/FALSE  separately ..}
> 
> pbeta() itself is based on the most complex numerical code in
> all of base R, i.e., src/nmath/toms708.c  and that algorithm
> (TOMS 708) had been sophisticated already when it was published,
> and it has been improved and tweaked several times since being
> part of R, notably for the log.p=TRUE case which had not been in
> the focus of the publication and its algorithm.
> [[ NB: part of this you can read when reading  help(pbeta)  to the end ! ]]
> 
> I've spent many "man weeks", or even "man months", on pbeta() and
> qbeta() already, and have dreamed of getting a good student to do a
> master's thesis about the problem and the potential solutions I've
> looked into in the meantime.
> 
> My current gut feeling is that in some cases, new approximations
> are necessary (i.e. tweaking of current approximations is not
> going to help sufficiently).
> 
> Also note (in the R sources)  tests/p-qbeta-strict-tst.R
> a whole file of "regression tests" about  pbeta() and qbeta()
> {where part of the true values have been computed with my CRAN
> package Rmpfr (for high precision computation) with the
> Rmpfr::pbetaI() function which gives arbitrarily precise pbeta()
> values but only when  (a,b) are integers -- that's the "I" in pbetaI().
> 
> Yes, it's intriguing ... and I'll look into your special
> findings a bit later today.
> 
> 
>   > Should I report this on the bug list?
> 
> Yes, please.  Not all problems of pbeta() / qbeta() are yet part
> of R's Bugzilla database, and maybe this will help to draw
> more good applied mathematicians to look into it.
> 
> 
> 
> Martin Maechler
> ETH Zurich and R Core team
> (I'd call myself the "dpq-hacker" within R core -- related to
>  my CRAN package 'DPQ')
> 
> 
> > A more general illustration:
> > http://www.math.mcmaster.ca/bolker/misc/qbeta.png
> 
> > ===
> > fun <- function(phi,i=0.01,t=0.001, f=qbeta) {
> > f(t,shape1=i/phi,shape2=(1-i)/phi, lower.tail=FALSE)
> > }
> > ## brute-force beta quantile function
> > Qbeta0 <- function(t,shape1,shape2,lower.tail=FALSE) {
> > fn <- function(x) {pbeta(x,shape1,shape2,lower.tail=lower.tail)-t}
> > uniroot(fn,interval=c(0,1))$root
> > }
> > Qbeta <- Vectorize(Qbeta0,c("t","shape1","shape2"))
> > curve(fun,from=1,to=4)
> > curve(fun(x,f=Qbeta),add=TRUE,col=2)
> 
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

Re: [Rd] unstable corner of parameter space for qbeta?

2020-03-26 Thread Martin Maechler
> J C Nash 
> on Thu, 26 Mar 2020 09:29:53 -0400 writes:

> Given that a number of us are housebound, it might be a good time to try to
> improve the approximation. It's not an area where I have much expertise, but in
> looking at the qbeta.c code I see a lot of root-finding, where I do have some
> background. However, I'm very reluctant to work alone on this, and will ask
> interested others to email off-list. If there are others, I'll report back.

Hi John.
Yes, qbeta() {in its "main branches"}  does zero finding, but
zero finding of   pbeta(...) - p*   and I tried to explain in my
last e-mail that the real problem is that already pbeta() is not
accurate enough in some unstable corners ...
The order fixing should typically be
1) fix pbeta()
2) look at qbeta() which now may not even need a fix because its
   problems may have been entirely a consequence of pbeta()'s inaccuracies.
   And if there are cases where the qbeta() problems are not
   only pbeta's "fault", it is still true that the fixes that
   would still be needed crucially depend on the detailed
   working of the function whose zero(s) are sought, i.e.,  pbeta()
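That division of blame can be probed with a round trip (a sketch of mine using Ben's parameters; it only measures how well qbeta() solved the equation it was handed, taking pbeta() as given):

```r
## If pbeta(qbeta(p, ...), ...) reproduces p, then qbeta() inverted the
## pbeta() it was given; any remaining error in the quantile itself must
## then come from pbeta() being off in that corner.
phi <- 1.1; i <- 0.01; p <- 0.001
shape1 <- i/phi; shape2 <- (1 - i)/phi
q  <- suppressWarnings(qbeta(p, shape1, shape2, lower.tail = FALSE))
p2 <- pbeta(q, shape1, shape2, lower.tail = FALSE)
p2 - p   # residual of the inversion, on the probability scale
```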

> Ben: Do you have an idea of parameter region where approximation is poor?
> I think that it would be smart to focus on that to start with.



Rmpfr  matrix-/vector - products:

> Martin: On a separate precision matter, did you get my query early in
> year about double length accumulation of inner products of vectors in
> Rmpfr? R-help more or less implied that Rmpfr does NOT use extra length.
> I've been using David Smith's FM Fortran where the DOT_PRODUCT does use
> double length, but it would be nice to have that in R. My attempts to
> find "easy" workarounds have not been successful, but I'll admit that
> other things took precedence.

Well, the current development version of 'Rmpfr' on R-forge now
contains facilities to enlarge the precision of the computations
by a factor 'fPrec', with default 'fPrec = 1';
notably,  x %*% y   (where `%*%` cannot have more than two
arguments) now has a counterpart  matmult(x, y, ...)
which allows more arguments, namely 'fPrec', or directly 'precBits';
and of course there are  crossprod() and tcrossprod(), which one
should use when applicable, and they have also gained the 'fPrec'
and 'precBits' arguments.

{The %*% etc. precision increase still does not work optimally
 efficiency-wise, as it simply increases the precision of all
 computations by just increasing the precision of x and y (the inputs).}

The whole matrix and matrix-vector arithmetic is still
comparably slow in Rmpfr .. mostly because I valued human time
(mine!) much higher than computer time in its implementation.
That's one reason I would never want to double the precision
everywhere, as it decreases speed even more, and oftentimes
unnecessarily: doubling the accuracy is basically a "worst-case
scenario" precaution.

Martin

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] unstable corner of parameter space for qbeta?

2020-03-26 Thread J C Nash
Despite the need to focus on pbeta, I'm still willing to put in some effort.
But I find it really helps to have 2-3 others involved, since the questions back
and forth keep matters moving forward. Volunteers?

Thanks to Martin for detailed comments.

JN


On 2020-03-26 10:34 a.m., Martin Maechler wrote:
>> J C Nash 
>> on Thu, 26 Mar 2020 09:29:53 -0400 writes:
> 
> > Given that a number of us are housebound, it might be a good time to try to
> > improve the approximation. It's not an area where I have much expertise, but in
> > looking at the qbeta.c code I see a lot of root-finding, where I do have some
> > background. However, I'm very reluctant to work alone on this, and will ask
> > interested others to email off-list. If there are others, I'll report back.
> 
> Hi John.
> Yes, qbeta() {in its "main branches"}  does zero finding, but
> zero finding of   pbeta(...) - p*   and I tried to explain in my
> last e-mail that the real problem is that already pbeta() is not
> accurate enough in some unstable corners ...
> The order of fixing should typically be
> 1) fix pbeta()
> 2) look at qbeta() which now may not even need a fix because its
>problems may have been entirely a consequence of pbeta()'s inaccuracies.
>And if there are cases where the qbeta() problems are not
>only pbeta's "fault", it is still true that the fixes that
>would still be needed crucially depend on the detailed
>working of the function whose zero(s) are sought, i.e.,  pbeta()
> 
> > Ben: Do you have an idea of parameter region where approximation is 
> poor?
> > I think that it would be smart to focus on that to start with.
> 
> 
> 
> Rmpfr  matrix-/vector - products:
> 
> > Martin: On a separate precision matter, did you get my query early in 
> year about double
> > length accumulation of inner products of vectors in Rmpfr? R-help more 
> or
> > less implied that Rmpfr does NOT use extra length. I've been using David
> > Smith's FM Fortran where the DOT_PRODUCT does use double length, but it
> > would be nice to have that in R. My attempts to find "easy" workarounds 
> have
> > not been successful, but I'll admit that other things took precedence.
> 
> Well, the current development version of 'Rmpfr' on R-forge now
> contains facilities to enlarge the precision of the computations
> by a factor 'fPrec' with default 'fPrec = 1';
> notably, instead of  x %*% y   (where the `%*%` cannot have more
> than two arguments) does have a counterpart  matmult(x,y, )
> which allows more arguments, namely 'fPrec', or directly 'precBits';
> and of course there are  crossprod() and tcrossprod() one should
> use when applicable and they also got the  'fPrec' and
> 'precBits' arguments.
> 
> {The %*% etc precision increase still does not work optimally
>  efficiency wise, as it simply increases the precision of all
>  computations by just increasing the precision of x and y (the inputs)}.
> 
> The whole  Matrix and Matrix-vector arithmetic is still
> comparably slow in Rmpfr .. mostly because I valued human time
> (mine!) much higher than computer time in its implementation.
> That's one reason I would never want to double the precision
> everywhere as it decreases speed even more, and often times
> unnecessarily: doubling the accuracy is basically "worst-case
> scenario" precaution
> 
> Martin
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] unstable corner of parameter space for qbeta?

2020-03-26 Thread Ravi Varadhan
This is also strange:


qbeta <- function (p, shape1, shape2, ncp = 0, lower.tail = TRUE, log.p = FALSE)
{
    if (missing(ncp))
        .Call(C_qbeta, p, shape1, shape2, lower.tail, log.p)
    else .Call(C_qnbeta, p, shape1, shape2, ncp, lower.tail, log.p)
}




Since the default value of the non-centrality parameter is 0, the logic above 
looks wrong: when ncp=0 is supplied explicitly, C_qnbeta is called rather than C_qbeta.

Am I missing something?
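What resolves the puzzle is R's missing() semantics: missing(ncp) is FALSE whenever the caller writes ncp at all, even ncp = 0. A tiny sketch (a hypothetical function of mine, just to show the mechanism):

```r
## missing() reports whether the argument was supplied in the call,
## not whether it equals its default value.
f <- function(p, ncp = 0) if (missing(ncp)) "central" else "noncentral"
f(0.5)           # "central"    -- ncp not supplied, default used
f(0.5, ncp = 0)  # "noncentral" -- ncp supplied, even though it is 0
```

So qbeta(p, a, b) reaches C_qbeta, while an explicit qbeta(p, a, b, ncp = 0) reaches C_qnbeta.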



Ravi




From: R-devel  on behalf of J C Nash 

Sent: Thursday, March 26, 2020 10:40:05 AM
To: Martin Maechler
Cc: r-devel@r-project.org
Subject: Re: [Rd] unstable corner of parameter space for qbeta?


Despite the need to focus on pbeta, I'm still willing to put in some effort.
But I find it really helps to have 2-3 others involved, since the questions back
and forth keep matters moving forward. Volunteers?

Thanks to Martin for detailed comments.

JN


On 2020-03-26 10:34 a.m., Martin Maechler wrote:
>> J C Nash
>> on Thu, 26 Mar 2020 09:29:53 -0400 writes:
>
> > Given that a number of us are housebound, it might be a good time to try to
> > improve the approximation. It's not an area where I have much expertise, but in
> > looking at the qbeta.c code I see a lot of root-finding, where I do have some
> > background. However, I'm very reluctant to work alone on this, and will ask
> > interested others to email off-list. If there are others, I'll report back.
>
> Hi John.
> Yes, qbeta() {in its "main branches"}  does zero finding, but
> zero finding of   pbeta(...) - p*   and I tried to explain in my
> last e-mail that the real problem is that already pbeta() is not
> accurate enough in some unstable corners ...
> The order of fixing should typically be
> 1) fix pbeta()
> 2) look at qbeta() which now may not even need a fix because its
>problems may have been entirely a consequence of pbeta()'s inaccuracies.
>And if there are cases where the qbeta() problems are not
>only pbeta's "fault", it is still true that the fixes that
>would still be needed crucially depend on the detailed
>working of the function whose zero(s) are sought, i.e.,  pbeta()
>
> > Ben: Do you have an idea of parameter region where approximation is 
> poor?
> > I think that it would be smart to focus on that to start with.
>
> 
>
> Rmpfr  matrix-/vector - products:
>
> > Martin: On a separate precision matter, did you get my query early in 
> year about double
> > length accumulation of inner products of vectors in Rmpfr? R-help more 
> or
> > less implied that Rmpfr does NOT use extra length. I've been using David
> > Smith's FM Fortran where the DOT_PRODUCT does use double length, but it
> > would be nice to have that in R. My attempts to find "easy" workarounds 
> have
> > not been successful, but I'll admit that other things took precedence.
>
> Well, the current development version of 'Rmpfr' on R-forge now
> contains facilities to enlarge the precision of the computations
> by a factor 'fPrec' with default 'fPrec = 1';
> notably, instead of  x %*% y   (where the `%*%` cannot have more
> than two arguments) does have a counterpart  matmult(x,y, )
> which allows more arguments, namely 'fPrec', or directly 'precBits';
> and of course there are  crossprod() and tcrossprod() one should
> use when applicable and they also got the  'fPrec' and
> 'precBits' arguments.
>
> {The %*% etc precision increase still does not work optimally
>  efficiency wise, as it simply increases the precision of all
>  computations by just increasing the precision of x and y (the inputs)}.
>
> The whole  Matrix and Matrix-vector arithmetic is still
> comparably slow in Rmpfr .. mostly because I valued human time
> (mine!) much higher than computer time in its implementation.
> That's one reason I would never want to double the precision
> everywhere as it decreases speed even more, and often times
> unnecessarily: doubling the accuracy is basically "worst-case
> scenario" precaution
>
> Martin
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Rebuilding and re-checking of downstream dependencies on CRAN Mac build machines

2020-03-26 Thread Winston Chang
I have two questions about the CRAN machines that build binary
packages for Mac. When a new version of a package is released,
  (A) Do the downstream dependencies get re-checked?
  (B) Do the downstream dependencies get re-built?

I have heard (but do not know for sure) that the answer to (A) is no,
the downstream dependencies do not get rechecked.

From publicly available information on the CRAN web server, it looks
like the answer to (B) is also no, the downstream dependencies do not
get rebuilt. Looking at
https://www.r-project.org/nosvn/R.check/r-release-osx-x86_64/, I see
the following dates for these binary packages:

- Rcpp_1.0.4.tgz: 2020-03-18
- httpuv_1.5.2.tgz: 2019-09-12
- dplyr_0.8.5.tgz: 2020-03-08

Rcpp was released recently, and httpuv and dplyr (which are downstream
dependencies of Rcpp) have older dates, which indicates that these
binary packages were not rebuilt when Rcpp was released.

In my particular case, I'm interested in the httpuv package (which I
maintain). I and several others have not been able to get the CRAN
version of httpuv to compile using the CRAN version of Rcpp on Mac.
(It seems to compile fine on other platforms.) I have heard from
maintainers of other Rcpp-dependent packages that they also can't get
their packages to compile on Mac, using both the default Mac compiler
toolchain and the CRAN-recommended toolchain, which uses clang 7.

For more technical details about the cause of breakage, see:
https://github.com/RcppCore/Rcpp/issues/1060
https://github.com/rstudio/httpuv/issues/260

If the CRAN Mac build machine is indeed able to build httpuv against
the current version of Rcpp, it would be really helpful to have more
information about the system configuration. If it is not able to
rebuild httpuv and other packages against Rcpp, then this is a
problem. Among other things, it prevents people from building their
packages from source using CRAN versions of packages, and it also
means that none of these packages can release a new version, because
the binaries can't be built on Mac.

-Winston

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] unstable corner of parameter space for qbeta?

2020-03-26 Thread Martin Maechler
> Ravi Varadhan 
> on Thu, 26 Mar 2020 18:33:43 + writes:

> This is also strange:
> qbeta <- function (p, shape1, shape2, ncp = 0, lower.tail = TRUE, log.p = FALSE)
> {
>     if (missing(ncp))
>         .Call(C_qbeta, p, shape1, shape2, lower.tail, log.p)
>     else .Call(C_qnbeta, p, shape1, shape2, ncp, lower.tail, log.p)
> }




> Since the default value is 0 for non-centrality, it seems like the logic
> above is wrong. When ncp=0, C_qnbeta would be called rather than C_qbeta.

> Am I missing something?

Yes.  This has been on purpose (forever), and I think the help
page mentions that -- though probably a bit subtly.

The way it is now, one can use both algorithms to compute what
in principle should be the same thing.

> Ravi



> 
> From: R-devel  on behalf of J C Nash 

> Sent: Thursday, March 26, 2020 10:40:05 AM
> To: Martin Maechler
> Cc: r-devel@r-project.org
> Subject: Re: [Rd] unstable corner of parameter space for qbeta?


> Despite the need to focus on pbeta, I'm still willing to put in some effort.
> But I find it really helps to have 2-3 others involved, since the questions back
> and forth keep matters moving forward. Volunteers?

> Thanks to Martin for detailed comments.

> JN


> On 2020-03-26 10:34 a.m., Martin Maechler wrote:
>>> J C Nash
>>> on Thu, 26 Mar 2020 09:29:53 -0400 writes:
>> 
>> > Given that a number of us are housebound, it might be a good time to try to
>> > improve the approximation. It's not an area where I have much expertise, but in
>> > looking at the qbeta.c code I see a lot of root-finding, where I do have some
>> > background. However, I'm very reluctant to work alone on this, and will ask
>> > interested others to email off-list. If there are others, I'll report back.
>> 
>> Hi John.
>> Yes, qbeta() {in its "main branches"}  does zero finding, but
>> zero finding of   pbeta(...) - p*   and I tried to explain in my
>> last e-mail that the real problem is that already pbeta() is not
>> accurate enough in some unstable corners ...
>> The order of fixing should typically be
>> 1) fix pbeta()
>> 2) look at qbeta() which now may not even need a fix because its
>> problems may have been entirely a consequence of pbeta()'s inaccuracies.
>> And if there are cases where the qbeta() problems are not
>> only pbeta's "fault", it is still true that the fixes that
>> would still be needed crucially depend on the detailed
>> working of the function whose zero(s) are sought, i.e.,  pbeta()
>> 
>> > Ben: Do you have an idea of parameter region where approximation is 
poor?
>> > I think that it would be smart to focus on that to start with.
>> 
>> 
>> 
>> Rmpfr  matrix-/vector - products:
>> 
>> > Martin: On a separate precision matter, did you get my query early in 
year about double
>> > length accumulation of inner products of vectors in Rmpfr? R-help more 
or
>> > less implied that Rmpfr does NOT use extra length. I've been using 
David
>> > Smith's FM Fortran where the DOT_PRODUCT does use double length, but it
>> > would be nice to have that in R. My attempts to find "easy" 
workarounds have
>> > not been successful, but I'll admit that other things took precedence.
>> 
>> Well, the current development version of 'Rmpfr' on R-forge now
>> contains facilities to enlarge the precision of the computations
>> by a factor 'fPrec' with default 'fPrec = 1';
>> notably, instead of  x %*% y   (where the `%*%` cannot have more
>> than two arguments) does have a counterpart  matmult(x,y, )
>> which allows more arguments, namely 'fPrec', or directly 'precBits';
>> and of course there are  crossprod() and tcrossprod() one should
>> use when applicable and they also got the  'fPrec' and
>> 'precBits' arguments.
>> 
>> {The %*% etc precision increase still does not work optimally
>> efficiency wise, as it simply increases the precision of all
>> computations by just increasing the precision of x and y (the inputs)}.
>> 
>> The whole  Matrix and Matrix-vector arithmetic is still
>> comparably slow in Rmpfr .. mostly because I valued human time
>> (mine!) much higher than computer time in its implementation.
>> That's one reason I would never want to double the precision
>> everywhere as it decreases speed even more, and often times
>> unnecessarily: doubling the accuracy is basically "worst-case
>> scenario" precaution
>> 
>> Martin
>> 


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel

Re: [Rd] Rebuilding and re-checking of downstream dependencies on CRAN Mac build machines

2020-03-26 Thread Simon Urbanek
Winston,

the Mac CRAN build builds a package only if either is true:
1) the package has not passed checks
2) there is a new version of the package since last successful build+check

The old build machine doesn't have the capacity to do full re-builds (it would 
take over 24h -- currently the nightly build of packages takes 16-22 hours), but 
we're currently building a new setup for R 4.0.0 on new hardware, and as part 
of it we are planning to set up a "mac-builder" similar to what is currently 
available for Windows.

That said, I have run httpuv by hand on the CRAN build machine (against Rcpp 
1.0.4) and I saw no issues. I have seen the discussion on Rcpp, but so far no 
one has actually posted details of what is breaking (nor do your links include 
any actual details on this). I'd love to help, but the lack of a useful report 
makes this impossible. If you have any actual leads, please post them. The CRAN 
machine uses the tools that are available on CRAN: 
https://cran.r-project.org/bin/macosx/tools/ (clang-7 and gfortran-6.1 for 3.6.x)

Cheers,
Simon


> On 27/03/2020, at 7:38 AM, Winston Chang  wrote:
> 
> I have two questions about the CRAN machines that build binary
> packages for Mac. When a new version of a package is released,
>  (A) Do the downstream dependencies get re-checked?
>  (B) Do the downstream dependencies get re-built?
> 
> I have heard (but do not know for sure) that the answer to (A) is no,
> the downstream dependencies do not get rechecked.
> 
> From publicly available information on the CRAN web server, it looks
> like the answer to (B) is also no, the downstream dependencies do not
> get rebuilt. Looking at
> https://www.r-project.org/nosvn/R.check/r-release-osx-x86_64/, I see
> the following dates for these binary packages:
> 
> - Rcpp_1.0.4.tgz: 2020-03-18
> - httpuv_1.5.2.tgz: 2019-09-12
> - dplyr_0.8.5.tgz: 2020-03-08
> 
> Rcpp was released recently, and httpuv and dplyr (which are downstream
> dependencies of Rcpp) have older dates, which indicates that these
> binary packages were not rebuilt when Rcpp was released.
> 
> In my particular case, I'm interested in the httpuv package (which I
> maintain). I and several others have not been able to get the CRAN
> version of httpuv to compile using the CRAN version of Rcpp on Mac.
> (It seems to compile fine on other platforms.) I have heard from
> maintainers of other Rcpp-dependent packages that they also can't get
> their packages to compile on Mac, using both the default Mac compiler
> toolchain and the CRAN-recommended toolchain, which uses clang 7.
> 
> For more technical details about the cause of breakage, see:
> https://github.com/RcppCore/Rcpp/issues/1060
> https://github.com/rstudio/httpuv/issues/260
> 
> If the CRAN Mac build machine is indeed able to build httpuv against
> the current version of Rcpp, it would be really helpful to have more
> information about the system configuration. If it is not able to
> rebuild httpuv and other packages against Rcpp, then this is a
> problem. Among other things, it prevents people from building their
> packages from source using CRAN versions of packages, and it also
> means that none of these packages can release a new version, because
> the binaries can't be built on Mac.
> 
> -Winston
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Rebuilding and re-checking of downstream dependencies on CRAN Mac build machines

2020-03-26 Thread Hadley Wickham
If I do install.packages("dplyr", type = "source"), I see:

Installing package into ‘/Users/hadley/R’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/src/contrib/dplyr_0.8.5.tar.gz'
Content type 'application/x-gzip' length 1378766 bytes (1.3 MB)
==
downloaded 1.3 MB

* installing *source* package ‘dplyr’ ...
** package ‘dplyr’ successfully unpacked and MD5 sums checked
** using staged installation
** libs
ccache clang++ -Qunused-arguments
 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG
-I../inst/include -DRCPP_DEFAULT_INCLUDE_CALL=false -DCOMPILING_DPLYR
-DRCPP_USING_UTF8_ERROR_STRING -DRCPP_USE_UNWIND_PROTECT
-DBOOST_NO_AUTO_PTR  -I"/Users/hadley/R/BH/include"
-I"/Users/hadley/R/plogr/include" -I"/Users/hadley/R/Rcpp/include"
-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
-I/usr/local/include  -fPIC  -Wall -g -O2  -c RcppExports.cpp -o
RcppExports.o
In file included from RcppExports.cpp:4:
In file included from ./../inst/include/dplyr.h:4:
In file included from ../inst/include/dplyr/main.h:6:
In file included from ../inst/include/dplyr/workarounds/static_assert.h:17:
In file included from /Users/hadley/R/BH/include/boost/config.hpp:57:
In file included from
/Users/hadley/R/BH/include/boost/config/platform/macos.hpp:28:
In file included from
/Users/hadley/R/BH/include/boost/config/detail/posix_features.hpp:18:
In file included from
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:655:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/gethostuuid.h:39:17:
error: unknown type name 'uuid_t'
int gethostuuid(uuid_t, const struct timespec *)
__OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA);
^
In file included from RcppExports.cpp:4:
In file included from ./../inst/include/dplyr.h:4:
In file included from ../inst/include/dplyr/main.h:6:
In file included from ../inst/include/dplyr/workarounds/static_assert.h:17:
In file included from /Users/hadley/R/BH/include/boost/config.hpp:57:
In file included from
/Users/hadley/R/BH/include/boost/config/platform/macos.hpp:28:
In file included from
/Users/hadley/R/BH/include/boost/config/detail/posix_features.hpp:18:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:662:27:
error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int  getsgroups_np(int *, uuid_t);
  ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31:
note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
  ^
In file included from RcppExports.cpp:4:
In file included from ./../inst/include/dplyr.h:4:
In file included from ../inst/include/dplyr/main.h:6:
In file included from ../inst/include/dplyr/workarounds/static_assert.h:17:
In file included from /Users/hadley/R/BH/include/boost/config.hpp:57:
In file included from
/Users/hadley/R/BH/include/boost/config/platform/macos.hpp:28:
In file included from
/Users/hadley/R/BH/include/boost/config/detail/posix_features.hpp:18:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:664:27:
error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int  getwgroups_np(int *, uuid_t);
  ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31:
note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
  ^
In file included from RcppExports.cpp:4:
In file included from ./../inst/include/dplyr.h:4:
In file included from ../inst/include/dplyr/main.h:6:
In file included from ../inst/include/dplyr/workarounds/static_assert.h:17:
In file included from /Users/hadley/R/BH/include/boost/config.hpp:57:
In file included from
/Users/hadley/R/BH/include/boost/config/platform/macos.hpp:28:
In file included from
/Users/hadley/R/BH/include/boost/config/detail/posix_features.hpp:18:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:727:31:
error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int  setsgroups_np(int, const uuid_t);
  ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31:
note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
  ^
In file included from RcppExports.cpp:4:
In file included from ./../inst/include/dplyr.h:4:
In file included from ../inst/include/dplyr/main.h:6:
In file included from ../inst/include/dplyr/workarounds/static_assert.h:17:
In file included from /Users/hadley/R/BH/include/boost/config.hpp:57:
In file included from
/Users/hadley/R/BH/include/boost/config/platform/macos.hpp:28:
In file included from
/Users/hadley/R/BH/include/boost/config/detail/posix_features.hpp:18:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:729:31:
error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int  setwgroups_np(

Re: [Rd] Rebuilding and re-checking of downstream dependencies on CRAN Mac build machines

2020-03-26 Thread Winston Chang
Simon,

The link I provided to the httpuv issue has detailed information about
the error:
  https://github.com/rstudio/httpuv/issues/260

The error occurs when I run the following (although note, depending on
which mirror you use, the recent CRAN server issues may result in
problems downloading the packages):
  install.packages("Rcpp")
  install.packages("httpuv", type = "source")

Here's the full output:
  https://gist.github.com/wch/c70b438381c9d2a8b1f917b054e0ba7e

Here's an example of the errors:

In file included from staticpath.cpp:1:
In file included from ./staticpath.h:7:
In file included from /Users/winston/R/3.6/BH/include/boost/optional.hpp:15:
In file included from
/Users/winston/R/3.6/BH/include/boost/optional/optional.hpp:28:
In file included from
/Users/winston/R/3.6/BH/include/boost/core/addressof.hpp:17:
In file included from /Users/winston/R/3.6/BH/include/boost/config.hpp:57:
In file included from
/Users/winston/R/3.6/BH/include/boost/config/platform/macos.hpp:28:
In file included from
/Users/winston/R/3.6/BH/include/boost/config/detail/posix_features.hpp:18:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:664:27:
error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int  getwgroups_np(int *, uuid_t);
  ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31:
note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
  ^

This issue describes the same problem:
  https://github.com/RcppCore/Rcpp/issues/1046

And it has been fixed in the development version of Rcpp by:
   https://github.com/RcppCore/Rcpp/pull/1047

I know that Kevin Ushey has tried building with the CRAN-recommended
clang 7 toolchain and encountered the same errors. I haven't done that
yet myself, though.

-Winston

On Thu, Mar 26, 2020 at 4:00 PM Simon Urbanek
 wrote:
>
> Winston,
>
> the Mac CRAN build builds a package only if either is true:
> 1) the package has not passed checks
> 2) there is a new version of the package since last successful build+check
>
> The old build machine doesn't have the capacity to do full re-builds (it 
> would take over 24h - currently the nightly build of packages takes 16-22 
> hours), but we're currently building a new setup for R 4.0.0 on new hardware 
> and as a part of it we are planning to setup a "mac-builder" similar to what 
> is currently available for Windows.
>
> That said, I have run httpuv by hand on the CRAN build machine (against Rcpp 
> 1.0.4) and I saw no issues. I have seen the discussion on Rcpp, but so far no 
> one actually posted details of what is breaking (nor do your links include 
> any actual details on this). I'd love to help, but the lack of a useful 
> report makes this impossible. If you have any actual leads, please post them. 
> The CRAN machine uses the tools that are available on CRAN: 
> https://cran.r-project.org/bin/macosx/tools/ (clang-7 and gfortran-6.1 for 
> 3.6.x)
>
> Cheers,
> Simon
>
>
> > On 27/03/2020, at 7:38 AM, Winston Chang  wrote:
> >
> > I have two questions about the CRAN machines that build binary
> > packages for Mac. When a new version of a package is released,
> >  (A) Do the downstream dependencies get re-checked?
> >  (B) Do the downstream dependencies get re-built?
> >
> > I have heard (but do not know for sure) that the answer to (A) is no,
> > the downstream dependencies do not get rechecked.
> >
> > From publicly available information on the CRAN web server, it looks
> > like the answer to (B) is also no, the downstream dependencies do not
> > get rebuilt. Looking at
> > https://www.r-project.org/nosvn/R.check/r-release-osx-x86_64/, I see
> > the following dates for these binary packages:
> >
> > - Rcpp_1.0.4.tgz: 2020-03-18
> > - httpuv_1.5.2.tgz: 2019-09-12
> > - dplyr_0.8.5.tgz: 2020-03-08
> >
> > Rcpp was released recently, and httpuv and dplyr (which are downstream
> > dependencies of Rcpp) have older dates, which indicates that these
> > binary packages were not rebuilt when Rcpp was released.
> >
> > In my particular case, I'm interested in the httpuv package (which I
> > maintain). I and several others have not been able to get the CRAN
> > version of httpuv to compile using the CRAN version of Rcpp on Mac.
> > (It seems to compile fine on other platforms.) I have heard from
> > maintainers of other Rcpp-dependent packages that they also can't get
> > their packages to compile on Mac, using both the default Mac compiler
> > toolchain and the CRAN-recommended toolchain, which uses clang 7.
> >
> > For more technical details about the cause of breakage, see:
> > https://github.com/RcppCore/Rcpp/issues/1060
> > https://github.com/rstudio/httpuv/issues/260
> >
> > If the CRAN Mac build machine is indeed able to build httpuv against
> > the current version of Rcpp, it would be really helpful to have more
> > information about the system configuration. If it is not able to
> > rebuild httpuv and 

Re: [Rd] unstable corner of parameter space for qbeta?

2020-03-26 Thread Ben Bolker



On 2020-03-26 4:02 a.m., Martin Maechler wrote:
>> Ben Bolker 
>> on Wed, 25 Mar 2020 21:09:16 -0400 writes:
> 
> > I've discovered an infelicity (I guess) in qbeta(): it's not a bug,
> > since there's a clear warning about lack of convergence of the numerical
> > algorithm ("full precision may not have been achieved").  I can work
> > around this, but I'm curious why it happens and whether there's a better
> > workaround -- it doesn't seem to be in a particularly extreme corner of
> > parameter space. It happens, e.g., for  these parameters:
> 
> > phi <- 1.1
> > i <- 0.01
> > t <- 0.001
> > shape1 = i/phi  ##  0.009090909
> > shape2 = (1-i)/phi  ## 0.9
> > qbeta(t,shape1,shape2)  ##  5.562685e-309
> > ##  brute-force uniroot() version, see below
> > Qbeta0(t,shape1,shape2)  ## 0.9262824
> 
> > The qbeta code is pretty scary to read: the warning "full precision
> > may not have been achieved" is triggered here:
> 
> > 
> https://github.com/wch/r-source/blob/f8d4d7d48051860cc695b99db9be9cf439aee743/src/nmath/qbeta.c#L530
> 
> > Any thoughts?
> 
> Well,  qbeta() is mostly based on inverting pbeta()  and pbeta()
> has *several* "dangerous" corners in its parameter spaces
> {in some cases, it makes sense to look at the 4 different cases
>  log.p = TRUE/FALSE  //  lower.tail = TRUE/FALSE  separately ..}
> 
> pbeta() itself is based on the most complex numerical code in
> all of base R, i.e., src/nmath/toms708.c  and that algorithm
> (TOMS 708) had been sophisticated already when it was published,
> and it has been improved and tweaked several times since being
> part of R, notably for the log.p=TRUE case which had not been in
> the focus of the publication and its algorithm.
> [[ NB: part of this you can read when reading  help(pbeta)  to the end ! ]]
> 
> I've spent many "man weeks", or even "man months" on pbeta() and
> qbeta(), already and have dreamed to get a good student do a
> master's thesis about the problem and potential solutions I've
> looked into in the mean time.
> 
> My current gut feeling is that in some cases, new approximations
> are necessary (i.e. tweaking of current approximations is not
> going to help sufficiently).
> 
> Also note (in the R sources)  tests/p-qbeta-strict-tst.R,
> a whole file of "regression tests" about  pbeta() and qbeta()
> {where part of the true values have been computed with my CRAN
> package Rmpfr (for high precision computation) with the
> Rmpfr::pbetaI() function which gives arbitrarily precise pbeta()
> values but only when  (a,b) are integers -- that's the "I" in pbetaI(). }
> 
> Yes, it's intriguing ... and I'll look into your special
> findings a bit later today.
> 
> 
>   > Should I report this on the bug list?
> 
> Yes, please.  Not all problems of pbeta() / qbeta() are yet part
> of R's bugzilla data base, and maybe this will help to draw
> more good applied mathematicians to look into it.

  Will report.

  I'm not at all surprised that this is a super-tough problem.  The only
part that was surprising to me was that my naive uniroot-based solution
worked for this particular corner of parameter space where qbeta() has
trouble; it was terrible elsewhere, so now I'm using a hybrid solution
where I use my brute-force uniroot thing if I get a warning from qbeta().
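The hybrid approach described above can be sketched roughly as follows (a sketch only; `qbeta_hybrid` is a hypothetical name, not part of R, and it treats *any* warning from qbeta() as the signal to fall back):

```r
## Brute-force inversion of pbeta() via uniroot(), as in the original post.
Qbeta0 <- function(t, shape1, shape2, lower.tail = FALSE) {
  fn <- function(x) pbeta(x, shape1, shape2, lower.tail = lower.tail) - t
  uniroot(fn, interval = c(0, 1))$root
}

## Hybrid: use qbeta() normally, but if it warns (e.g. "full precision
## may not have been achieved"), fall back to the slower root finder.
qbeta_hybrid <- function(t, shape1, shape2, lower.tail = FALSE) {
  tryCatch(
    qbeta(t, shape1, shape2, lower.tail = lower.tail),
    warning = function(w) Qbeta0(t, shape1, shape2, lower.tail = lower.tail)
  )
}
```

Note that tryCatch() abandons the partially computed qbeta() result as soon as the warning is signalled, which is exactly the desired behaviour here.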

I hesitated to even bring it up because I know you're really busy, but I
figured it was better to tag it now and let you deal with it some time
later.

Bugzilla report at https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17746

  cheers
   Ben Bolker


> 
> 
> 
> Martin Maechler
> ETH Zurich and R Core team
> (I'd call myself the "dpq-hacker" within R core -- related to
>  my CRAN package 'DPQ')
> 
> 
> > A more general illustration:
> > http://www.math.mcmaster.ca/bolker/misc/qbeta.png
> 
> > ===
> > fun <- function(phi,i=0.01,t=0.001, f=qbeta) {
> > f(t,shape1=i/phi,shape2=(1-i)/phi, lower.tail=FALSE)
> > }
> > ## brute-force beta quantile function
> > Qbeta0 <- function(t,shape1,shape2,lower.tail=FALSE) {
> > fn <- function(x) {pbeta(x,shape1,shape2,lower.tail=lower.tail)-t}
> > uniroot(fn,interval=c(0,1))$root
> > }
> > Qbeta <- Vectorize(Qbeta0,c("t","shape1","shape2"))
> > curve(fun,from=1,to=4)
> > curve(fun(x,f=Qbeta),add=TRUE,col=2)
> 
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Expressions from boxplot() passed to bxp()

2020-03-26 Thread Marius Hofert
Hi,

Is this expected behavior (R-3.6.0)?

dat <- cbind(x = 1:10, y = 10:1)
ylab <- substitute(X[t], list(t = 2))
plot(dat, ylab = ylab) # works (correctly displays ylab)
boxplot(dat, ylab = ylab) # fails
boxplot(dat, ylab = as.expression(ylab)) # works
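For what it's worth, the difference seems to hinge on the class of the label object: substitute() returns a call (a language object), not an expression, and boxplot() apparently only handles the latter for plotmath annotation (the internals are an assumption here; only the class checks below are certain):

```r
dat  <- cbind(x = 1:10, y = 10:1)
ylab <- substitute(X[t], list(t = 2))

class(ylab)                 # "call" -- a language object
class(as.expression(ylab))  # "expression"

plot(dat, ylab = ylab)                    # accepts the call directly
boxplot(dat, ylab = as.expression(ylab))  # works once wrapped
```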

Thanks & cheers,
M

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel