mal("1.1001");
ends up with a having an object with the value you see in the
string, and for it to be otherwise one would have to write:
BigDecimal a = new BigDecimal("1.1001", context);
which gives a very nice clue that something may happen to the
value. This, to me, seems a clean design.
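[For comparison, the same split can be sketched in Python with the
decimal module, assuming its Context.create_decimal method as the
rounding constructor:

    from decimal import Decimal, Context

    a = Decimal("1.1001")             # exact: Decimal('1.1001')
    ctx = Context(prec=3)
    b = ctx.create_decimal("1.1001")  # rounds to 3 digits: Decimal('1.10')

As in the Java case, the explicit context is the visible clue that
the value may change.]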
So why does my specification appear to say something different?
---
Both the languages described so far support arbitrary-length
decimal numbers. Over the past five years, however, I have been
concentrating more on fixed-length decimals, as in languages such
as C#, and as planned for C and C++ and for hardware.
When the representation of a decimal number has a fixed length,
then the nice clean model of a one-to-one mapping of a literal to
the internal representation is no longer always possible. For
example, the IEEE 754r proposed decimal32 format can represent a
maximum of 7 decimal digits in the significand. Hence, the
assignment:
decimal32 d = 1.10010001;
(in some hypothetical C-like language, where the literal has nine
significant digits) cannot result in d having the exact value shown
in the literal. This is the point where language
history or precedent comes in: some languages might quietly round
at this point, others might give a compile-time warning or error
(my preference, at least for decimal types). Similar concerns
apply when the conversion to internal form causes overflow or
underflow.
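[In Python terms, a fixed-length destination such as decimal32 can
be simulated with a context of precision 7 (and decimal32's exponent
range), and the choice between quiet rounding and an error maps onto
whether the Inexact trap is enabled. A sketch, assuming the decimal
module's Context and its signals:

    from decimal import Context, Inexact

    quiet = Context(prec=7)
    quiet.create_decimal("1.10010001")   # quietly rounds: Decimal('1.100100')

    strict = Context(prec=7, traps=[Inexact])
    strict.create_decimal("1.10010001")  # raises decimal.Inexact

    # Overflow is trapped by default, so a too-large exponent signals:
    small = Context(prec=7, Emax=96, Emin=-95)
    small.create_decimal("1e200")        # raises decimal.Overflow]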
The wording in the specification was intended to allow for these
kinds of behaviors, and to allow for explicit rounding, using a
context, when a string is converted to some internal
representation. It was not intended to restrict the behavior of
(for example) the Java constructor: one might consider that
constructor to be working with an implied context which has an
infinite precision. In other words, the specification does not
attempt to define where the context comes from, as this would seem
to be language-dependent. In Java, any use of a programmer-
supplied context is explicit and visible, and if none is supplied
then the implied context has UNLIMITED precision. In Rexx, the
context is always implicit -- but there is no 'conversion from
string to number' because numbers _are_ strings.
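[Put another way, a language binding is free to decide how a context
reaches the constructor. A hypothetical Python-style constructor --
from_string and its default argument are my invention, purely for
illustration -- could make the choice explicit:

    from decimal import Decimal

    def from_string(s, context=None):
        # No context supplied: behave as though the implied context
        # had unlimited precision, i.e. preserve the value exactly.
        if context is None:
            return Decimal(s)
        # Explicit context: round on conversion, as the
        # specification permits.
        return context.create_decimal(s)]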
So what should Python do?
---
Since your Decimal class has the ability to preserve the value of
a literal exactly, my recommendation is that this be the default
behavior. Changing the value supplied as a literal without some
explicit syntax is likely to surprise, given that the class has no
length restrictions. In my view, such rounding is appropriate only
for a fixed-length destination or when an explicit context is
supplied.
Given that Python has the concept of an implicit decimal context,
I can see why Tim can argue that the implicit context is the
context which applies for a constructor. However, perhaps you can
define that the implicit context 'only applies to arithmetic
operations', or some such definition, much as in Rexx?
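[A sketch of how that rule might look, assuming getcontext() gives
the implicit (thread-wide) context: construction would ignore it,
while arithmetic would apply it.

    from decimal import Decimal, getcontext

    getcontext().prec = 3
    d = Decimal("1.1001")   # construction is exact: Decimal('1.1001')
    e = d + 0               # arithmetic rounds: Decimal('1.10')]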
And I should clarify my specification to make it clear that
preserving the value, if possible, is preferable to rounding --
any suggestions for wording?
Mike Cowlishaw
[I'll try and follow this thread in the mailing list for a while,
but I am flying to the USA on Monday so my e-mail access will be
erratic for the next week or so.]