https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #44 from Vincent Lefèvre <vincent-gcc at vinc17 dot net> ---
(In reply to Alexander Cherepanov from comment #43)
> GCC on x86-64 uses the binary encoding for the significand.

In general, yes; this includes the 32-bit ABI under Linux. But it seems to be
different under MS-Windows, at least with MinGW using the 32-bit ABI.
According to my tests of MPFR:

MPFR config.status 4.1.0-dev
configured by ./configure, generated by GNU Autoconf 2.69,
  with options "'--host=i686-w64-mingw32' '--disable-shared'
'--with-gmp=/usr/local/gmp-6.1.2-mingw32' '--enable-assert=full'
'--enable-thread-safe' 'host_alias=i686-w64-mingw32'"
[...]
CC='i686-w64-mingw32-gcc'
[...]
[tversion] Compiler: GCC 8.3-win32 20191201
[...]
[tversion] TLS = yes, float128 = yes, decimal = yes (DPD), GMP internals = no

i.e. on this target, GCC uses DPD instead of the usual BID.

> So the first question: does any platform (that gcc supports) use the decimal
> encoding for the significand (aka densely packed decimal encoding)?

DPD is also used on PowerPC (at least the 64-bit ABI), as these processors now
have hardware decimal support.

> Then, the rules about (non)propagation of some encodings blur the boundary
> between values and representations in C. In particular this means that
> different encodings are _not_ equivalent. Take for example the optimization
> `x == C ? C + 0 : x` -> `x` for a constant C that is the unique member of
> its cohort and that has non-canonical encodings (C is an infinity according
> to the above analysis). Not sure about encoding of literals but the result
> of addition `C + 0` is required to have canonical encoding. If `x` has
> non-canonical encoding then the optimization is invalid.

In C, it is valid to choose any possible encoding. Concerning IEEE 754
conformance, this depends on the bindings, but IEEE 754 does not define the
ternary operator. It also depends on whether C considers encodings before or
possibly after optimizations (for the C specification itself, this does not
matter, but when IEEE 754 is taken into account, there may be more
restrictions).

> While at it, convertFormat is required to return canonical encodings, so
> after `_Decimal32 x = ..., y = (_Decimal32)(_Decimal64)x;` `y` has to have
> canonical encoding? But these casts are nop in gcc now.

A question is whether casts are regarded as explicit convertFormat operations,
or whether simplification is allowed because it does not affect the value; in
the latter case, the canonicalize() function would be needed here to force a
canonical encoding. And in any case, when FP contraction is enabled, I suppose
that (_Decimal32)(_Decimal64)x can be regarded as x.
