https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119131

--- Comment #4 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
But e.g. 0.0e-12df shouldn't be treated like that.
As can be seen from
_Decimal32 a = 0.0e-12df;
_Decimal32 b = 0.0e-98df;
_Decimal32 c = 0.0e-99df;
_Decimal32 d = 0.0e-100df;
_Decimal32 e = 0.0e-101df;
_Decimal32 f = 0.0e+89df;
_Decimal32 g = 0.0e+90df;
_Decimal32 h = 0.0e+91df;
_Decimal32 i = 0.0e+92df;
_Decimal32 j = 0.0e+93df;
d and e (and any zero with an even smaller exponent) are already represented
as the all-bits-zero 32-bit pattern, while i and j are equal to h in value.
So, for decimal types it is best, when the significand is 0 (at least if the
sign is not negative), to check whether the value reinterprets as an integral
zero.
