https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #43 from Alexander Cherepanov <ch3root at openwall dot com> ---
Joseph, Vincent, thanks a lot for the crash course in decimal floating-point.
Indeed, quite interesting types. Findings so far: bug 94035, comment 5, bug
94111, bug 94122.

Let me try to summarize my understanding and to map it onto C terminology.
Please correct me if I'm wrong.

Different representations in IEEE 754-2019 speak are different values in C
speak; e.g., 1.0DF and 1.00DF are different values of _Decimal32 (in
particular, the assignment operator is required to preserve the difference).
The set of values corresponding to the same number is a cohort. Cohorts of
non-zero, non-inf, non-nan values in _Decimal32 have from 1 to 7 elements.
Both infinities have only 1 element in their cohorts. Both zeros have many
more elements in their cohorts (one for each possible quantum exponent --
192 for _Decimal32).
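
For concreteness, here is a minimal sketch (assuming GCC's _Decimal32 support
on x86-64; the DF literal suffix is a GCC extension) showing that two members
of a cohort compare equal yet have distinct object representations:

#include <stdio.h>
#include <string.h>

int main(void)
{
    _Decimal32 a = 1.0DF;   /* 10 * 10^-1  */
    _Decimal32 b = 1.00DF;  /* 100 * 10^-2 */
    /* Same number, so they compare equal... */
    printf("a == b: %d\n", a == b);                       /* 1 */
    /* ...but different cohort members: the quantum exponents
       (-1 vs. -2) differ, so the bit patterns differ too. */
    printf("same bits: %d\n", !memcmp(&a, &b, sizeof a)); /* 0 */
    return 0;
}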

Some values admit several representations (in C speak; these are encodings in
IEEE speak). GCC on x86-64 uses the binary encoding of the significand (BID).
Hence, non-zero, non-inf, non-nan values have exactly one representation each.
Significand fields that decode to a value exceeding the maximum (9,999,999 for
_Decimal32) are treated as zero, which gives many (non-canonical)
representations for each of the many zero values. Inf and nan values have many
representations too (because their trailing exponent and/or significand bits
are ignored).
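
To illustrate (a sketch; the bit pattern below is my reading of the BID32
layout, assuming little-endian x86-64): a 21-bit coefficient field plus the
implicit leading bits gives a significand of 10485759 > 9999999, so the
encoding is a non-canonical zero:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    /* BID32: sign 0, steering bits 11, exponent field 0, coefficient
       field all-ones -> significand 10485759 > 9999999, hence zero. */
    uint32_t bits = 0x601fffffu;
    _Decimal32 x;
    memcpy(&x, &bits, sizeof x);
    printf("x == 0: %d\n", x == 0.0DF);  /* 1: treated as zero */
    return 0;
}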

So the first question: does any platform that GCC supports use the decimal
encoding for the significand (aka densely packed decimal, DPD)?

Then, the rules about (non)propagation of some encodings blur the boundary
between values and representations in C. In particular, this means that
different encodings are _not_ equivalent. Take, for example, the optimization
`x == C ? C + 0 : x` -> `x` for a constant C that is the unique member of its
cohort and that has non-canonical encodings (by the above analysis, C is an
infinity). I'm not sure about the encoding of literals, but the result of the
addition `C + 0` is required to have a canonical encoding. So if `x` has a
non-canonical encoding, the optimization is invalid.
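
A sketch of the hazard (again assuming GCC's BID32 layout on little-endian
x86-64; __builtin_infd32 is GCC's _Decimal32 infinity builtin, and whether GCC
performs this folding for decimal types is exactly what is in question):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static _Decimal32 f(_Decimal32 x)
{
    /* If the compiler folds this to `return x;`, a non-canonical
       encoding of +inf in x leaks through, although `inf + 0` is
       required to produce the canonical one. */
    return x == __builtin_infd32() ? __builtin_infd32() + 0.0DF : x;
}

int main(void)
{
    uint32_t bits = 0x78000001u;  /* +inf with nonzero trailing bits: non-canonical */
    _Decimal32 x;
    memcpy(&x, &bits, sizeof x);
    x = f(x);
    memcpy(&bits, &x, sizeof bits);
    printf("0x%08x\n", (unsigned)bits);  /* 0x78000000 required; 0x78000001 if folded */
    return 0;
}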

While at it: convertFormat is required to return canonical encodings, so after
`_Decimal32 x = ..., y = (_Decimal32)(_Decimal64)x;`, `y` has to have a
canonical encoding? But these casts are a nop in GCC now.
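
Along the same lines (same assumptions as above), a round trip that should
canonicalize:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint32_t bits = 0x78000001u;  /* non-canonical +inf, as above */
    _Decimal32 x, y;
    memcpy(&x, &bits, sizeof x);
    y = (_Decimal32)(_Decimal64)x;  /* convertFormat there and back */
    memcpy(&bits, &y, sizeof bits);
    /* 0x78000000 if the casts canonicalize, 0x78000001 if they are
       folded away as a nop. */
    printf("0x%08x\n", (unsigned)bits);
    return 0;
}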
