>>>>> "Duncan" == Duncan Murdoch <[EMAIL PROTECTED]>
>>>>>     on Wed, 22 Mar 2006 07:40:11 -0500 writes:
    Duncan> On 3/22/2006 3:52 AM, [EMAIL PROTECTED] wrote:

    >>>>>>> "cspark" == cspark <[EMAIL PROTECTED]>
    >>>>>>>     on Wed, 22 Mar 2006 05:52:13 +0100 (CET) writes:

    cspark> Full_Name: Chanseok Park
    cspark> Version: R 2.2.1
    cspark> OS: RedHat EL4
    cspark> Submission from: (NULL) (130.127.112.89)

    cspark> pbinom(any negative value, size, prob) should be zero.
    cspark> But I got the following results: if a negative value is
    cspark> close to zero, then pbinom() calculates pbinom(0, size, prob).

    >> pbinom( -2.220446e-22, 3, .1)
    >> [1] 0.729
    >> pbinom( -2.220446e-8, 3, .1)
    >> [1] 0.729
    >> pbinom( -2.220446e-7, 3, .1)
    >> [1] 0

    >> Yes, all the [dp]* functions which are discrete with mass on the
    >> integers only do *round* their 'x' to integers.

    >> I could well argue that the current behavior is *not* a bug,
    >> since we do treat "x close to integer" as integer, and hence
    >> pbinom(eps, size, prob) with eps "very close to 0" should give
    >> pbinom(0, size, prob), as it now does.

    >> However, for aesthetic reasons, I agree that we should test for
    >> "< 0" first (and give 0 then) and only round otherwise. I'll
    >> change this for R-devel (i.e. R 2.3.0, in about a month).

    cspark> dbinom() also behaves similarly.

    >> Yes, similarly, but differently. I have changed it (for R-devel)
    >> as well, to behave the same as the other d*() functions, e.g.,
    >> dpois() and dnbinom() do.

    Duncan> Martin, your description makes it sound as though
    Duncan> dbinom(0.3, size, prob) would give the same answer as
    Duncan> dbinom(0, size, prob), whereas it actually gives 0 with a
    Duncan> warning, as documented in ?dbinom. The d* functions only
    Duncan> round near-integers to integers, where it looks as though
    Duncan> "near" means within 1e-7.

That's correct. Above, I did not describe what happens for the d*()
functions, but said that dbinom() behaves differently than pbinom(),
and that I have changed dbinom() to behave similarly to dnbinom(),
dgeom(), ...
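To make the agreed-on change concrete, here is a minimal Python sketch of
the corrected logic (R's real pbinom() is implemented in C, so the
function name, the 1e-7 fuzz constant, and the naive summation here are
all illustrative assumptions, not R's source): strictly negative x
returns 0 *before* any rounding; otherwise x is fuzz-truncated to the
integer below and the CDF is summed up to that integer.

```python
from math import comb, floor

def pbinom_sketch(x, size, prob):
    """Illustrative sketch of the fixed pbinom() behaviour:
    test "< 0" first, and only round/truncate otherwise."""
    if x < 0:                      # the new check: negative x is 0, always
        return 0.0
    k = floor(x + 1e-7)            # fuzzy truncation to the integer below
    k = min(k, size)
    # naive CDF: sum the binomial pmf for j = 0, ..., k
    return sum(comb(size, j) * prob**j * (1 - prob)**(size - j)
               for j in range(k + 1))

print(pbinom_sketch(-2.220446e-22, 3, 0.1))   # 0.0, no longer 0.729
print(pbinom_sketch(0, 3, 0.1))               # ~0.729, i.e. 0.9^3
```

With this ordering, all three values from the bug report give 0, while
pbinom(0, size, prob) and near-integer positive x are unchanged.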
    Duncan> The p* functions round near-integers to integers, and
    Duncan> truncate others to the integer below.

    Duncan> I suppose the reason for this behaviour is to protect
    Duncan> against rounding error giving nonsense results; I'm not
    Duncan> sure that's a great idea,

I agree that it may not seem such a great idea; but that was discussed
and decided (IIRC against my preference) quite a while ago, and I don't
think it is worthwhile to rediscuss such relatively fundamental
behavior every few years.

    Duncan> but if we do it, should we really be handling 0 differently?

Yes:

- only around 0 are small absolute deviations large relative deviations;
- 0 is the left border of the function's domain, where one would expect
  strict mathematical behavior more strongly.

Martin Maechler

______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel