http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48047

           Summary: Incorrect output rounding of double precision numbers
           Product: gcc
           Version: 4.6.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: libfortran
        AssignedTo: unassig...@gcc.gnu.org
        ReportedBy: thenl...@users.sourceforge.net


Created attachment 23603
  --> http://gcc.gnu.org/bugzilla/attachment.cgi?id=23603
Test case

The Fortran library does not round real(8) numbers correctly on output when 39
significant decimal digits are requested and real(16) is supported. This
violates IEEE Std 754-2008, which demands that the number of significant
digits supported for correctly rounded conversion of all supported binary
formats (H) be at least M + 3 = 36 + 3 = 39 when the binary128 format is
supported.

Thus, GCC misses this requirement by one digit.

The attached program fails because the exact value of the stored double, as
given by quadmath_snprintf(..., (__float128)0.14285714285714285), is
0.142857142857142849212692681248881854116916656494...
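
For reference, here is a minimal sketch of the kind of check involved. It is
not the actual attachment; the program name, the ES45.38 edit descriptor and
the expected string are only illustrative, assuming round-to-nearest output
rounding. Correctly rounded to 39 significant digits, the exact value above
becomes ...881854117, because digit 40 is a 9.

program output_rounding_39
  implicit none
  real(8) :: x
  character(len=45) :: got
  ! Expected correctly rounded 39-significant-digit output for the value below.
  character(len=*), parameter :: expected = &
      ' 1.42857142857142849212692681248881854117E-01'
  x = 0.14285714285714285d0
  ! ES45.38 requests 1 + 38 = 39 significant digits.
  write(got, '(ES45.38)') x
  if (got /= expected) then
    print *, 'FAIL: got      ', got
    print *, '      expected ', expected
  else
    print *, 'OK'
  end if
end program output_rounding_39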

IEEE Std 754-2008 says:
===
5.12.2 External decimal character sequences representing finite numbers   
...
For the purposes of discussing the limits on correctly rounded conversion,
define the following quantities:
...
- for binary128, Pmin (binary128) = 36
...
- M = max(Pmin(bf)) for all supported binary formats bf
...

There might be an implementation-defined limit on the number of significant
digits that can be converted with correct rounding to and from supported binary
formats. That limit, H, shall be such that H >= M + 3 and it should be that H
is unbounded.

For all supported binary formats the conversion operations shall support
correctly rounded conversions to or from external character sequences for all
significant digit counts from 1 through H (that is, for all
expressible counts if H is unbounded).
...
===
