Oscar Benjamin added the comment:
On 19 August 2013 17:35, Steven D'Aprano <[email protected]> wrote:
>
> Steven D'Aprano added the comment:
>
> On 19/08/13 23:15, Oscar Benjamin wrote:
>>
>> The final result is not accurate to 2 d.p. rounded down. This is
>> because the decimal context has affected all intermediate computations
>> not just the final result.
>
> Yes. But that's the whole point of setting the context to always round down.
> If summation didn't always round down, it would be a bug.
If an individual binary summation (d1 + d2) didn't round down, then
that would be a bug.
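To be concrete, here's a minimal made-up example (nothing from the
module's docs) of the low-level guarantee I mean, a single addition
correctly rounded under the current context:

from decimal import Decimal, localcontext, ROUND_DOWN

with localcontext() as ctx:
    ctx.prec = 3
    ctx.rounding = ROUND_DOWN
    # The exact result 1.686 is correctly rounded to 3 digits,
    # rounding down, so this prints 1.68.
    print(Decimal('1.23') + Decimal('0.456'))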
> If you set the precision to a higher value, you can avoid the need for
> compensated summation. I'm not prepared to pick and choose which contexts
> I'll honour. If I honour those with a high precision, I'll honour those with
> a low precision too. I'm not going to check the context, and if it is "too
> low" (according to whom?) set it higher.
I often write functions like this:
from decimal import localcontext

def compute_stuff(x):
    with localcontext() as ctx:
        ctx.prec += 2
        y = ...  # Compute in higher precision
    return +y  # __pos__ reverts to the default precision
The final result is rounded according to the default context, but the
intermediate computation is performed in such a way that the final
result is (hopefully) correct within that context. I'm not proposing
that you do that, just that you don't commit to respecting inaccurate
results.
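For example (a throwaway sketch with made-up numbers, just to show the
distinction, not a proposal for what statistics.sum should do):

from decimal import Decimal, getcontext, localcontext, ROUND_DOWN

def sketch_sum(data):
    # Accumulate with two extra digits of precision, then round once
    # on the way out, in the caller's context.
    with localcontext() as ctx:
        ctx.prec += 2
        total = sum(data, Decimal(0))
    return +total

getcontext().prec = 3
getcontext().rounding = ROUND_DOWN
data = [Decimal('0.1235')] * 10    # exact sum is 1.235
print(sum(data, Decimal(0)))       # 1.22: error accumulates at each step
print(sketch_sum(data))            # 1.23: only the final result is rounded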
>>Why would anyone prefer this behaviour over
>> an implementation that could compensate for rounding errors and return
>> a more accurate result?
>
> Because that's what the Decimal standard requires (as I understand it), and
> besides you might be trying to match calculations on some machine with a
> lower precision, or different rounding modes. Say, a pocket calculator, or a
> Cray, or something. Or demonstrating why rounding matters.
No, that's not what the Decimal standard requires. Admittedly I haven't
read it in full, but I am familiar with these standards and I've read a
good bit of IEEE-754. The standard places constraints on the low-level
arithmetic operations that you, as an implementer of high-level
algorithms, can use to ensure that your code is accurate.
Following your reasoning above, I should say that math.fsum and your
statistics.sum are both in violation of IEEE-754, since

    fsum([a, b, c, d, e])

is not equivalent to

    ((((a+b)+c)+d)+e)

under the current rounding scheme. But they are not in violation of the
standard: both functions use the guarantees of the standard to
guarantee their own accuracy. Both go to some lengths to avoid
producing output with the rounding errors that sum() would produce.
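For instance, with the usual textbook example for floats:

import math

xs = [1e16, 1.0, -1e16]
print(sum(xs))        # 0.0: the 1.0 is lost when 1e16 + 1.0 is rounded
print(math.fsum(xs))  # 1.0: fsum tracks the rounding error and recovers it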
> I think the current behaviour is the right thing to do, but I appreciate the
> points you raise. I'd love to hear from someone who understands the Decimal
> module better than I do and can confirm that the current behaviour is in the
> spirit of the Decimal module.
I use the Decimal module for multi-precision real arithmetic. That may
not be the typical use case, but to me Decimal is a floating point type
just like float. Precisely the same reasoning that leads to fsum
applies to Decimal just as it does to float.
(BTW I've posted on Raymond Hettinger's recipe a modification that
might make it work for Decimal, but no reply yet.)
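To give a flavour of what I mean by applying the fsum reasoning to
Decimal (a crude sketch only, not the recipe modification and not what
I'd suggest for the stdlib): Decimals convert to Fraction exactly, so
the total can be accumulated without any rounding at all and the
context applied just once at the end.

from decimal import Decimal
from fractions import Fraction

def exact_decimal_sum(data):
    # The running total is exact; the only context rounding happens
    # in the final division back to Decimal.
    total = sum(Fraction(d) for d in data)
    return Decimal(total.numerator) / Decimal(total.denominator)

With the 3-digit round-down context above, exact_decimal_sum(data)
returns Decimal('1.23'), which is the exact sum 1.235 rounded once.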
----------
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue18606>
_______________________________________