Re: [Rd] Rmpfr: build vector sequentially -- c(.) not working

2018-10-27 Thread Jerry Lewis
Thank you for your detailed response.  I subsequently noticed that
   sapply(vec,fn) 
also fails if the function fn returns an mpfr object.  Will the next version of 
Rmpfr also fix this usage?

I do enjoy using Rmpfr, and appreciate all that you have done in bringing this 
capability to the R community.

Thanks,
Jerry

-----Original Message-----
From: Martin Maechler  
Sent: Friday, October 26, 2018 5:29 AM
To: R Development List 
Cc: Martin Maechler (maech...@stat.math.ethz.ch) 
Subject: Re: Rmpfr: build vector sequentially -- c(.) not working

I've been asked in private,
but am answering in public so others can comment / or find this answer in the 
future after a web search.

This is about the package 'Rmpfr' (R interface to MPFR, the GNU C library for 
arbitrary-precision numbers).

> How can you build a vector of mpfr numbers sequentially?
> Typically I would do something like the following (and try to replace 
> the for loop with some sort of apply construct)
> 
> vec <- NULL
> for(...) {
>...
>vec <- c(vec, vec.new)
> }

Dear Jerry,

In general the above is *bad* R code in the sense that it is unnecessarily 
inefficient.
In a typical for() loop you know the length of the result in advance, and

 vec <- numeric(n)
 for(i in seq_along(vec)) {
   vec[i] <- .
 }

is *MUCH* faster than your construct when n is not small.
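The difference is easy to see with a quick timing sketch (the size `n` here is just an illustrative choice):

```r
n <- 2e4
system.time({               # growing with c(): each step copies the whole vector
  v1 <- NULL
  for (i in 1:n) v1 <- c(v1, i^2)
})
system.time({               # preallocating: each step writes one element
  v2 <- numeric(n)
  for (i in seq_along(v2)) v2[i] <- i^2
})
identical(v1, v2)           # same result, very different cost
```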

Still, I understand that you would expect mpfr numbers to work as well as many 
other R objects here -- and they don't:

> However, this does not work with mpfr's.  For instance
>c(NULL, 1:3)
> is equivalent to
>1:3
> 
> But
>c(NULL, mpfr(1:3,100))
> is not even the same class as
>mpfr(1:3,100)

Indeed.  One can consider this to be unfortunate, but it's nothing I can change 
as author of 'Rmpfr'.
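The behaviour can be reproduced directly (a sketch, assuming the Rmpfr package is installed; later versions of R or Rmpfr may behave differently):

```r
library(Rmpfr)
x <- mpfr(1:3, 100)
class(x)           # "mpfr"
class(c(NULL, x))  # base c() does not dispatch on the S4 class -- no longer "mpfr"
```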

Rather, this is a shortcoming of R's current implementation of c(), which I 
think may be very hard to change in R without either losing too much speed 
or changing the semantics in too radical a way.
[but I'm happy to be proven wrong here !!  ==> that's why I'm posting to R-devel]

- - - -

Anyway, now to solve your problem, if you really want to do something like your 
original code, you can do it like this :

mNUL <- mpfr(logical())  # instead of 'NULL'

Then, e.g.,

 vec <- mNUL
 for(i in 1:10) {
vec <- c(vec, mpfr(i^2, 88))
 }

works fine.
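For the sapply() case raised in this thread, a workaround along the same lines is to collect the pieces in a plain list and rebuild the object afterwards (a sketch, not an official Rmpfr API; it relies on 'mpfr' being a list-based S4 class):

```r
library(Rmpfr)
pieces <- lapply(1:10, function(i) mpfr(i^2, 88))  # sapply() would also give a plain list here
vec <- new("mpfr", unlist(pieces))                  # rebuild a proper mpfr vector
class(vec)                                          # "mpfr"
```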

In the next version of Rmpfr, both

   as(NULL, "mpfr")
   mpfr(NULL)

will also give the 'mNUL' above.

I hope you enjoy using Rmpfr!

Best regards,
Martin


Martin Maechler
ETH Zurich

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Wish List: Extensions to the derivatives table

2017-02-17 Thread Jerry Lewis
The derivative table resides in the function D.  In S+ that table is extensible 
because it is written in the S language.  R is faster but less flexible, since 
that table is programmed in C.  It would be useful if R provided a mechanism 
for extending the derivative table, or barring that, provided a broader table.  
Currently unsupported mathematical functions of one argument include expm1, 
log1p, log2, log10, cospi, sinpi, and tanpi.

While manual differentiation of these proposed additions is straightforward, 
their absence complicates what otherwise could be much simpler, such as using 
deriv() or deriv3() to generate functions, for example to use as an nls model.
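The limitation is easy to reproduce: D() stops on the unsupported functions, while a hand-rewritten, mathematically equivalent expression differentiates fine (at the cost of the numerical accuracy the specialised functions provide):

```r
## D(quote(expm1(x^2)), "x")   # error: Function 'expm1' is not in the derivatives table
D(quote(exp(x^2) - 1), "x")    # works: exp(x^2) * (2 * x)
D(quote(log(x)/log(2)), "x")   # manual substitute for log2(x)
```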

Thanks,

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Wish List: Extensions to the derivatives table

2017-02-17 Thread Jerry Lewis
The issue is that without an extensible derivative table or the proposed 
extensions, it is not possible to automatically produce (without manual 
modification of the deriv3 output) a function that avoids catastrophic 
cancellation regardless of the working range.

Manual modification is not onerous as a one-time exercise, but can be time 
consuming when it must be done numerous times, for example when evaluating the 
impact of different parameterizations on parameter effects curvature.  The 
alternative of more flexible differentiation does not seem to be a difficult 
addition to R.  In S+ (which does not have deriv3) it would simply involve 
adding the following lines to the switch statement in D

  expm1 = make.call("*", make.call("exp", expr[[2]]), D(expr[[2]], name)),
  log1p = make.call("/", D(expr[[2]], name), make.call("+", 1., expr[[2]])),
  log2  = make.call("/", make.call("/", D(expr[[2]], name), expr[[2]]), quote(log(2))),
  log10 = make.call("/", make.call("/", D(expr[[2]], name), expr[[2]]), quote(log(10))),
  cospi = make.call("*", make.call("*", make.call("sinpi", expr[[2]]), make.call("-", D(expr[[2]], name))), quote(pi)),
  sinpi = make.call("*", make.call("*", make.call("cospi", expr[[2]]), D(expr[[2]], name)), quote(pi)),
  tanpi = make.call("/", make.call("*", D(expr[[2]], name), quote(pi)), make.call("^", make.call("cospi", expr[[2]]), 2)),

Jerry

From: Avraham Adler [mailto:avraham.ad...@gmail.com]
Sent: Friday, February 17, 2017 4:16 PM
To: Jerry Lewis; r-devel@r-project.org
Subject: Re: [Rd] Wish List: Extensions to the derivatives table

Hi.

Unless I'm misremembering, log, exp, sin, cos, and tan are all handled in 
deriv3.  The functions listed are specially coded, slightly more accurate 
versions, but they can be substituted with native ones for which deriv/deriv3 
will work automatically.  I believe that if you write your functions using 
log(a + 1) instead of log1p(a), or log(x)/log(2) instead of log2(x), deriv3 
will work fine.

Thanks,

Avi

On Fri, Feb 17, 2017 at 2:02 PM Jerry Lewis 
mailto:jerry.le...@biogen.com>> wrote:
The derivative table resides in the function D.  In S+ that table is extensible 
because it is written in the S language.  R is faster but less flexible, since 
that table is programmed in C.  It would be useful if R provided a mechanism 
for extending the derivative table, or barring that, provided a broader table.  
Currently unsupported mathematical functions of one argument include expm1, 
log1p, log2, log10, cospi, sinpi, and tanpi.

While manual differentiation of these proposed additions is straight-forward, 
their absence complicates what otherwise could be much simpler, such as using 
deriv() or deriv3() to generate functions, for example to use as an nls model.

Thanks,

[[alternative HTML version deleted]]




Re: [Rd] Wish List: Extensions to the derivatives table

2017-02-17 Thread Jerry Lewis
Thank you.  The nlsr package will be a satisfactory alternative once the bug in 
fnDeriv(..., hessian=TRUE) is patched.  I have notified the maintainer.

Jerry

-----Original Message-----
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com] 
Sent: Friday, February 17, 2017 6:05 PM
To: Jerry Lewis; r-devel@r-project.org
Subject: Re: [Rd] Wish List: Extensions to the derivatives table

On 17/02/2017 1:59 PM, Jerry Lewis wrote:
> The derivative table resides in the function D.  In S+ that table is 
> extensible because it is written in the S language.  R is faster but less 
> flexible, since that table is programmed in C.  It would be useful if R 
> provided a mechanism for extending the derivative table, or barring that, 
> provided a broader table.  Currently unsupported mathematical functions of 
> one argument include expm1, log1p, log2, log10, cospi, sinpi, and tanpi.
>
> While manual differentiation of these proposed additions is straight-forward, 
> their absence complicates what otherwise could be much simpler, such as using 
> deriv() or deriv3() to generate functions, for example to use as an nls model.

The nlsr package allows you to specify derivatives.

Duncan Murdoch



[Rd] qpois documentation (PR#13743)

2009-06-02 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.9.0
OS: Windows XP Professional
Submission from: (NULL) (166.186.168.103)


Quantiles for discrete distributions are consistently implemented, but
inconsistently documented.  Help for qpois incorrectly states in the Details
section that
  "The quantile is left continuous: qgeom(q, prob) is the largest integer x such
that P(X <= x) < q."
which disagrees with the implementation; it should read
  "The quantile is defined as the smallest value x such that F(x) >= p, where F
is the distribution function."
Also, this definition should be added to Help for qhyper.
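The corrected wording can be checked against the implementation directly (a small sketch):

```r
p <- ppois(3, lambda = 2)     # F(3)
qpois(p, lambda = 2)          # smallest x with F(x) >= p, hence 3
qpois(p + 1e-10, lambda = 2)  # just above F(3), so the answer jumps to 4
```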



[Rd] trigamma for very large arguments (PR#14020)

2009-10-22 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.9.2
OS: Windows XP Professional
Submission from: (NULL) (96.237.55.233)


trigamma(x) returns 0 for x>1e152, yet
  trigamma <- function(x) 1/x
gives machine accuracy for any x>1e16



[Rd] (PR#14020) trigamma for very large arguments

2009-10-23 Thread jerry . lewis
More generally, the accuracy and working range of psigamma(x,deriv) can be 
improved by having it return the leading term of the asymptotic expansion
  (-1)^(deriv-1)*factorial(deriv-1)/x^deriv
whenever deriv>=1 and x>=1e15
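A sketch of that fallback as an R function (the name is hypothetical, purely for illustration):

```r
## leading term of the asymptotic expansion of psigamma(x, deriv), deriv >= 1
psigamma_asym <- function(x, deriv = 1)
  (-1)^(deriv - 1) * factorial(deriv - 1) / x^deriv

psigamma_asym(1e200, 1)   # 1e-200, where the reported implementation returns 0
```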


[Rd] pt not monotonic with large noncentrality (PR#14069)

2009-11-18 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.10.0
OS: Windows XP Professional
Submission from: (NULL) (96.237.55.233)


pt(0,3,200)  # correctly returns 0
pt(-1000,3,200)  # erroneously returns 0.003116595

Since pt(0,df,nc) = pnorm(-nc), there is an easily computed upper bound for
pt(-t,df,nc) where t>0 and nc>0.
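A sketch of that bound as a defensive wrapper (hypothetical name; only meaningful for q <= 0 and ncp > 0):

```r
pt_bounded <- function(q, df, ncp) {
  ## pt(0, df, ncp) == pnorm(-ncp), so the lower tail at q <= 0 can never exceed it
  pmin(pt(q, df, ncp), pnorm(-ncp))
}
pt_bounded(-1000, 3, 200)   # capped at pnorm(-200) instead of 0.003116595
```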



[Rd] dchisq tail accuracy (PR#14105)

2009-12-03 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.10.0
OS: Windows XP Professional
Submission from: (NULL) (198.180.131.16)


dchisq is inaccurate in the extreme tails.  For instance, dchisq(1510,2,952)
returns 4.871004e-18 which is almost 15 times smaller than the correct value of
7.053889e-17.  A better approach for ncp>0 would be
  besselI(sxn,d2,TRUE) * (x/ncp)^(d2/2) * exp(sxn-(x+ncp)/2)/2
where logs may prevent over/underflow in extreme calculations.



Re: [Rd] dchisq tail accuracy (PR#14105)

2009-12-03 Thread jerry . lewis
The undefined variables in the original post are
   d2 <- df/2-1
   sxn <- sqrt(x*ncp)
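With those definitions, the proposed alternative can be written as a complete R function (a sketch; usable away from the over/underflow extremes, where a log-scale version would be needed):

```r
dnchisq_bessel <- function(x, df, ncp) {
  d2  <- df/2 - 1
  sxn <- sqrt(x * ncp)
  ## expon.scaled = TRUE computes I_nu(sxn) * exp(-sxn), compensated below
  besselI(sxn, d2, expon.scaled = TRUE) * (x/ncp)^(d2/2) *
    exp(sxn - (x + ncp)/2) / 2
}
dnchisq_bessel(10, 4, 3)      # agrees with dchisq(10, 4, 3) in the central region
dnchisq_bessel(1510, 2, 952)  # ~ 7.05e-17, the correct value quoted above
```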

Jerry


[Rd] qpois errors for degenerate distribution (PR#14135)

2009-12-14 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.10.0
OS: XP Professional
Submission from: (NULL) (96.237.55.233)


For a degenerate Poisson distribution (lambda==0), qpois(p,0,lower.tail) should
return 0 for any valid p, but qpois(1,0) and qpois(0,0,F) incorrectly return
Inf.



Re: [Rd] dchisq tail accuracy (PR#14105)

2010-01-03 Thread jerry . lewis
The issue seems to be that the infinite sum is truncated too early when x 
is in the extreme upper tail.  An easily validated improvement to 
dnchisq.c would be to add an additional requirement to the upper-tail 
while condition: the summation should continue while the additive 
term remains too big a fraction of the saddlepoint density.  Here is 
replacement code that partially offsets the time required for additional 
terms in the sum by calculating x*ncp2 only once instead of repeating it 
in every iteration of both loops.

x2 = x * ncp2;
/* upper tail */
term = mid; df = dfmid; i = imax;
s  = 0.5 - (dx + sqrt(dx*dx + 4*ncp/x)) * 0.25;  /* saddlepoint */
s2 = 1 - 2*s;
termlimit = exp(ncp*s/s2 - log(s2)*df*0.5 - s*x
                - log((df + 2*ncp + 4*ncp*s/s2)/(s2*s2) * M_PI)
                - 53*M_LN2);  /* saddlepoint density * 2^-53 */
do {
    i++;
    q = x2 / i / df;
    df += 2;
    term *= q;
    sum += term;
} while (q >= 1 || term * q > (1-q)*eps || term > termlimit);
/* lower tail */
term = mid; df = dfmid; i = imax;
while (i) {
    df -= 2;
    q = i * df / x2;
    i--;
    term *= q;
    sum += term;
    if (q < 1 && term * q <= (1-q)*eps) break;
}

Jerry


[Rd] qpois Help problems (PR#14200)

2010-01-30 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.10.1
OS: Windows XP Professional
Submission from: (NULL) (198.180.131.21)


In the line 

"The quantile is right continuous: qpois(q, lambda) is the smallest integer x
such that P(X <= x) >= q."

"q" is used as a probability when the Arguments section defines it to be a
quantile.

Also there are some representation problems where the escape character is
printed instead of interpreted, such as "\ldots" and "\lambda" in the preceding
lines.



[Rd] pchisq accuracy (PR#14216)

2010-02-19 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.10.1
OS: Windows XP Professional
Submission from: (NULL) (166.186.168.21)


Since
  pchisq(x,df,ncp,lower.tail,TRUE)
is calculated as
  log(pchisq(x,df,ncp,lower.tail))
it loses accuracy when pchisq(x,df,ncp,lower.tail) is near 1.  Accuracy can be
maintained in that case by replacing the existing calculation with
  log1p(-pchisq(x,df,ncp,!lower.tail))
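The difference is easy to see once the lower-tail probability rounds to 1 in double precision (a small sketch):

```r
x <- 150; df <- 1; ncp <- 10
log(pchisq(x, df, ncp))                         # 0: all tail information lost
log1p(-pchisq(x, df, ncp, lower.tail = FALSE))  # tiny negative number, accurate
```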



[Rd] chi-squared with zero df (PR#10551)

2008-01-06 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.6.1
OS: Windows XP Professional
Submission from: (NULL) (24.147.191.250)


pchisq(0,0,ncp=lambda) returns 0 instead of exp(-lambda/2)

pchisq(x,0,ncp=lambda) returns NaN instead of exp(-lambda/2)*(1 +
SUM_{r=0}^infty ((lambda/2)^r / r!) pchisq(x, df + 2r))

qchisq(.7,0,ncp=1) returns 1.712252 instead of 0.701297103

qchisq(exp(-1/2),0,ncp=1) returns 1.238938 instead of 0



[Rd] Overly restrictive conditions to evaluate beta (PR#10763)

2008-02-15 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.6.2
OS: Windows XP Professional
Submission from: (NULL) (96.233.108.117)


Currently, beta(a,b) returns NaN if either a or b is negative, but the current
calculation 
  beta(a,b) = gamma(a)*gamma(b)/gamma(a+b)
works equally well if either or both arguments are negative, provided that none
of a, b, a+b is a negative integer.

Also, if n>m>0 are integers, then 
  B(m,-n) = (-1)^m * B(m,n-m+1)
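A sketch of the less restrictive computation (hypothetical name; valid whenever none of a, b, a+b is a nonpositive integer, since gamma() itself handles negative non-integers):

```r
beta_gen <- function(a, b) gamma(a) * gamma(b) / gamma(a + b)
beta(-0.75, 0.5)       # NaN (with a warning), as reported
beta_gen(-0.75, 0.5)   # finite: ~ 1.748
```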



[Rd] pt inaccurate when x is close to 0 (PR#9945)

2008-02-15 Thread jerry . lewis
While I agree that the reported results from Mathematica have only 10-13 
correct digits, that does not mean that pt() in R is any better for these 
calculations.  For instance the following three calculations are 
mathematically equivalent, but pt() disagrees at the 13th figure in R 
v2.6.2
  pt(1e-4,13)
  pf(1e-8,1,13)/2+0.5
  pbeta(1/(1+13/1e-8),.5,6.5)/2+0.5
Using 1/(1+n/x^2) and reversing the shape parameters avoids the accuracy 
loss that Thomas Lumley was concerned about.

The following reference values for pt(1e-4,1:100) were computed in Maple 
to 400 figures and then rounded to 17.  To 17 figures, there was no 
difference in the results whether I used the exact value of 10^(-4) or the 
exact binary double precision approximation of 7378697629483821*2^(-66). 
Compared to these reference values, log relative errors (LRE, essentially 
the number of correct figures) for pt ranged from 10.7 to 13.4, while
  pf(1e-8,1,df)/2+0.5
and
  pbeta(1/(1+df/1e-8),.5,df/2)/2+0.5
agreed exactly with Maple, except for 21 and 90 df, where the LRE was 
15.65.  Clearly it would be quite easy to improve pt() for small values of 
x.
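A sketch of such an improvement, wrapping the pbeta form quoted above into a function of x (the symmetric extension to x < 0 is an assumption of mine):

```r
pt_small <- function(x, df) {
  ## x^2/(x^2 + df) == 1/(1 + df/x^2), written to stay finite at x == 0
  p <- pbeta(x^2/(x^2 + df), 0.5, df/2)/2
  0.5 + sign(x) * p
}
pt_small(1e-4, 13)   # agrees with the Maple reference values below
```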


Jerry W. Lewis
Lead Statistician
Biogen Idec
Cambridge, MA  USA


z<-c(0.50003183098851228, 0.50003535533897094, 0.50003675525961311, 
0.5000374992188, 0.50003796066890633, 0.50003827327715657, 
0.50003849914500990, 0.50003866990202363, 0.50003880349081531, 
0.50003891083832527, 0.50003899897563476, 0.50003907263045176, 
0.50003913509812855, 0.50003918874550927, 0.50003923531621426, 
0.50003927612297732, 0.50003931217293135, 0.50003934425160310, 
0.50003937298066018, 0.50003939885850220, 0.50003942228935944, 
0.50003944360452316, 0.50003946307808180, 0.50003948093875281, 
0.50003949737889518, 0.50003951256145633, 0.50003952662538506, 
0.50003953968989187, 0.50003955185783298, 0.50003956321842127, 
0.50003957384941549, 0.50003958381890103, 0.50003959318674852, 
0.50003960200581637, 0.50003961032294799, 0.50003961817980360, 
0.50003962561355779, 0.50003963265748728, 0.50003963934146876, 
0.50003964569240221, 0.50003965173457262, 0.50003965748996017, 
0.50003966297850729, 0.50003966821834945, 0.50003967322601522, 
0.50003967801660046, 0.50003968260392016, 0.50003968700064152, 
0.50003969121840071, 0.50003969526790563, 0.50003969915902671, 
0.50003970290087718, 0.50003970650188431, 0.50003970996985278, 
0.50003971331202108, 0.50003971653511200, 0.50003971964537768, 
0.50003972264864014, 0.50003972555032763, 0.50003972835550734, 
0.50003973106891495, 0.50003973369498129, 0.50003973623785651, 
0.50003973870143192, 0.50003974108935987, 0.50003974340507184, 
0.50003974565179484, 0.50003974783256646, 0.50003974995024854, 
0.50003975200753972, 0.50003975400698688, 0.50003975595099569, 
0.50003975784184024, 0.50003975968167193, 0.50003976147252762, 
0.50003976321633717, 0.50003976491493036, 0.50003976657004330, 
0.50003976818332434, 0.50003976975633955, 0.50003977129057782, 
0.50003977278745549, 0.50003977424832079, 0.50003977567445783, 
0.50003977706709040, 0.50003977842738546, 0.50003977975645637, 
0.50003978105536601, 0.50003978232512953, 0.50003978356671702, 
0.50003978478105603, 0.50003978596903380, 0.50003978713149950, 
0.50003978826926618, 0.50003978938311274, 0.50003979047378564, 
0.50003979154200064, 0.50003979258844427, 0.50003979361377541, 
0.50003979461862660)


[Rd] choose incorrect for fractional and some negative integer values (PR#10766)

2008-02-15 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.6.2
OS: Windows XP Professional
Submission from: (NULL) (198.180.131.16)


choose() returns incorrect values for all fractional arguments, regardless of
sign.  It returns 0 when both arguments are negative integers, which is not
always correct (as in some formulations of the negative hypergeometric).

Examples (correct answers from Maple's binomial function):

choose( 4.75,2.75)  # should be 8.90625
choose(-0.75,-0.5)  # should be 1.6692536833481464
choose(-1.75,-0.5)  # should be 0.55641789444938212
choose(-2.75,-1.5)  # should be 0.15897654127125204
choose(-5,   -7)# should be 15

The correct answers for the first two can be verified easily since choose(n,k)
should equal
  1/beta(n-k+1,k+1)/(n+1)
but does not for fractional arguments.  The latter three are instances where
beta incorrectly returns NaN, as noted in PR#10763
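The identity quoted above is easy to turn into an independent check (a sketch, using arguments where R's beta() is reliable):

```r
choose_beta <- function(n, k) 1/(beta(n - k + 1, k + 1) * (n + 1))
choose_beta(5, 2)        # 10, agrees with choose(5, 2)
choose_beta(4.75, 2.75)  # ~ 8.90625, the Maple value; choose(4.75, 2.75) differs
```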



[Rd] choose incorrect for fractional and some negative integer values (PR#10766)

2008-03-19 Thread jerry . lewis
choose(-5,-7) uses integer arguments (as specified in Help) and returns a 
numeric value that is incorrect.  Either the function or the documentation 
should be fixed.  If the function is not fixed, a warning or an error 
would be helpful.

The fact that choose(n,k) usually returns choose(n,round(k,0)) is not 
obvious from either the output or the documentation.  I suggest issuing a 
warning when k is not an integer; a user might easily expect that choose 
would coerce both arguments to integers, but only coercing k to an integer 
is unusual and potentially misleading.

The fact that choose(n,k) always rounds .5 up in k (contrary to the round 
function documentation) is nowhere documented.

Help for coerce would lead a reader to expect k to be truncated toward 
zero (as.integer), instead of rounding it.

Jerry


[Rd] wishlist: add functions to calculate noncentrality parameters (PR#11033)

2008-03-25 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.6.2
OS: Windows XP Professional
Submission from: (NULL) (198.180.131.16)


Some methods (sample size calculations, for instance) are based upon calculating
the noncentrality parameter needed to achieve a pre-specified value of the
noncentral cdf.  None of the distributions that support noncentrality parameters
have functions to return the noncentrality parameter given p.



[Rd] choose fails a fundamental property of binomial coefficients (PR#11035)

2008-03-26 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.7.0 (2008-03-23 r44847)
OS: Windows XP Professional
Submission from: (NULL) (71.184.230.48)


choose(n,k) = choose(n,n-k) is not satisfied if either

1. n is a negative integer with k a positive integer (due to automatically
returning 0 for n-k<0)

2. n is not an integer with k a positive integer (due to rounding n-k to an
integer, compounded by automatically returning 0 if n<0 which implies n-k<0)

Both are easily fixed, as discussed in PR# 10766



[Rd] solve cdf for noncentrality (PR#11527)

2008-05-27 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.7.0
OS: Windows XP Professional
Submission from: (NULL) (198.180.131.16)


The current distribution function naming convention inherited from S (d*, p*,
q*, r* for pdf/pmf, cdf, quantile, & random numbers) is inadequate for
noncentral distributions, where there is frequently a need to solve the equation
p*(...)=p for the noncentrality parameter instead of the quantile.



[Rd] Incomplete documentation for Long Input Lines (PR#13147)

2008-10-11 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.7.2
OS: Windows XP Professional
Submission from: (NULL) (198.180.131.16)


Section 1.8 of "An Introduction to R" states "Command lines entered at the
console are limited to about 1024 bytes (not characters)" and indicates that
incomplete lines may be continued on the next line.  There appear to be unstated
additional restrictions.  For instance, in a console entered assignment
statement using the c() function, if a comma occurs in position 1023, then that
line with its continuation on the next will not be interpreted correctly.



[Rd] solve cdf for noncentrality (PR#11527)

2008-10-11 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.7.0
OS: Windows XP Professional
Submission from: (NULL) (198.180.131.16)


If you are saying that there is no need to solve for the noncentrality
parameter, please justify this amazing assertion.

If you are saying that this need is already adequately addressed in R, then
please enlighten me.



Re: [Rd] solve cdf for noncentrality (PR#11527)

2008-10-14 Thread jerry . lewis
R implementations of Student's t, chi-squared, F, and beta distributions 
all support noncentrality parameters.  There is often a need (for example 
in sample size problems) to invert the cdf to obtain the noncentrality 
parameter given the quantile, rather than the quantile given the 
noncentrality (as the q* functions do).  For example, in SAS, these 
functions are called TNONCT, CNONCT, and FNONCT.
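Pending a built-in equivalent, the noncentrality can be recovered numerically (a sketch of a hypothetical TNONCT analogue; the search interval is an assumption, and pt() is decreasing in ncp so a unique root exists when one is bracketed):

```r
tnonct <- function(q, df, p, interval = c(0, 1e4))
  uniroot(function(nc) pt(q, df, ncp = nc) - p, interval, tol = 1e-12)$root

nc <- tnonct(2, 10, p = 0.5)
pt(2, 10, ncp = nc)   # recovers ~ 0.5
```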

I believe that 2.7.0 was the current version when I submitted this on 27 
May, 2008.  If it has been subsequently addressed, I can find no mention 
of it either in the 2.7.2 Reference Manual or in today's 
https://svn.r-project.org/R/trunk/NEWS

Jerry

Uwe Ligges <[EMAIL PROTECTED]> wrote
>
> [EMAIL PROTECTED] wrote:
> > Full_Name: Jerry W. Lewis
> > Version: 2.7.0
> > OS: Windows XP Professional
> > Submission from: (NULL) (198.180.131.16)
> >
> >
> > If you are saying that there is no need to solve for the noncentrality
> > parameter, please justify this amazing assertion.
>
> - He? What noncentrality are you talking about?
> - You are reporting a bug with an outdated version of R.
> - You have not followed the guide on how to report bugs in R.
>
> Uwe Ligges
>
>
> > If you are saying that this need is already adequately addressed in R, 
then
> > please enlighten me.


[Rd] erf calculation (PR#13271)

2008-11-11 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.8.0
OS: Windows XP Professional
Submission from: (NULL) (71.184.139.210)


On p.1202 of the Reference manual, calculating erf(x) is given as an example
using the code

erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1

A numerically better (avoiding cancellation for x near 0) formula is

erf <- function(x){
  ret <- pgamma(x^2,0.5,1)
  ret[x<0] <- -ret[x<0]
  ret
}

It would also be convenient if erf and erfc functions were provided in R, as
they are in S-PLUS.
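The same pgamma trick extends to erfc (a sketch; the reflection erfc(-x) = 2 - erfc(x) keeps accuracy for large positive x, where 1 - erf(x) would cancel to 0):

```r
erfc <- function(x) {
  ret <- pgamma(x^2, 0.5, lower.tail = FALSE)  # erfc(|x|), accurate in the upper tail
  ret[x < 0] <- 2 - ret[x < 0]
  ret
}
erfc(10)   # ~ 2.088e-45, where 1 - erf(10) underflows to 0
```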



[Rd] besselI inaccurate for negative integer order (PR#13556)

2009-02-26 Thread Jerry . Lewis
Full_Name: Jerry W. Lewis
Version: 2.8.1
OS: Windows XP Professional
Submission from: (NULL) (198.180.131.16)


It should be the case that
  besselI(x,-nu) == besselI(x,nu) == besselI(x,abs(nu))
for integer nu, yet R currently can return ridiculous values when nu is a
negative integer.

For instance, besselI(9.6,-44) returns -234626490 instead of the correct value
of 5.9041042646307223e-25, while besselI(9.6,44) gives essentially machine
accuracy.

This is more than an idle mathematical curiosity, since one consequence is that
dskellam in the VGAM package can return values <0 or >1.
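Until this is fixed, a defensive wrapper restores the symmetry (a sketch; the identity I_{-nu} = I_{nu} holds only for integer order, which the check enforces):

```r
besselI_sym <- function(x, nu) {
  stopifnot(nu == round(nu))   # identity below holds for integer order only
  besselI(x, abs(nu))
}
besselI_sym(9.6, -44)   # ~ 5.904e-25, matching besselI(9.6, 44)
```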
