https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79487
--- Comment #15 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Dominik Vogt from comment #14)
> To me, it looks like the same bug does not happen with float just because
> there is no need to convert this to 64-bit format for the comparison.
> simplify_const_unary_operation is not executed - if it was, the same would
> have happened for the float to double conversion.

No, the thing is that for non-decimal constants, the REAL_CSTs already
contain the values correctly rounded for their type.  If that is not the
case with decimal floats, something is wrong.  Consider, say, converting 10
different unsigned long long values to 10 different _Decimal32 variables and
then adding them all up (as _Decimal32).  Where is the rounding to
_Decimal32 performed after each such addition?  For float, this happens on
each of the integer-to-float conversions (when the REAL_CST is created) and
then again on each of the additions.  Apparently decimal REAL_CSTs are all
kept at _Decimal128 precision.  The only function that does anything in a
different precision is decimal_round_for_format, and that does not seem to
be called after each decimal_real_operation, or during decimal_from_integer,
etc.
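
To make the sequencing concrete, here is a minimal sketch of the scenario
described above (the specific values are mine, chosen so that rounding after
every operation and a single rounding at the end give different answers; it
assumes a GCC target with decimal floating point support).  Whether the
compiler constant-folds the loop or the result is computed at run time, the
answer must be the same:

  #include <stdio.h>

  int main (void)
  {
    unsigned long long v[3] = { 99999994ULL, 4ULL, 4ULL };
    _Decimal32 sum = 0.0DF;

    for (int i = 0; i < 3; i++)
      {
        _Decimal32 d = v[i];  /* rounds the integer to 7 decimal digits */
        sum += d;             /* rounds the running sum to 7 decimal digits */
      }

    /* With rounding after each step: 99999994 rounds to 9.999999E+7 on
       conversion to _Decimal32, and each subsequent +4 is rounded away
       again, so sum == 9.999999E+7.  If everything were instead computed
       at _Decimal128 precision and rounded only once at the end,
       99999994 + 4 + 4 == 100000002 would round to 1.000000E+8.  The cast
       to double is only for display, since printf support for decimal
       types is not universally available.  */
    printf ("%.9g\n", (double) sum);
    return 0;
  }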