http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48511

Janne Blomqvist <jb at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2011.04.09 21:19:09
                 CC|                            |jb at gcc dot gnu.org
     Ever Confirmed|0                           |1

--- Comment #3 from Janne Blomqvist <jb at gcc dot gnu.org> 2011-04-09 21:19:09 UTC ---
Do any of the Fortran edit descriptors require, or for that matter allow,
this kind of "shortest decimal representation" output? If not, the one place
where I can see this being useful is if we decide to change list-directed
output to always use the shortest field width for each value, as some other
compilers do.
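
To illustrate what "shortest decimal representation" means here, a minimal
sketch in C (a brute-force illustration only, not the Steele & White or Gay
algorithm): it searches for the smallest number of significant digits for
which a printf/strtod round trip reproduces the value exactly.

#include <stdio.h>
#include <stdlib.h>

/* Smallest number of significant digits that still round-trips X.  */
static int
shortest_digits (double x)
{
  char buf[64];
  for (int prec = 1; prec <= 17; prec++)
    {
      snprintf (buf, sizeof buf, "%.*g", prec, x);
      if (strtod (buf, NULL) == x)
        return prec;
    }
  return 17;  /* 17 significant digits always suffice for IEEE double.  */
}

int
main (void)
{
  double vals[] = { 0.1, 1.0 / 3.0, 2.5e-15 };
  char buf[64];
  for (size_t i = 0; i < sizeof vals / sizeof vals[0]; i++)
    {
      int prec = shortest_digits (vals[i]);
      snprintf (buf, sizeof buf, "%.*g", prec, vals[i]);
      printf ("%-22.17g -> %s (%d digits)\n", vals[i], buf, prec);
    }
  return 0;
}

So 0.1 would print as "0.1" rather than as a 17-digit string, while values
that genuinely need more digits still get them.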

However, the output part is only half of the puzzle: if we do this, we must
also make sure that the input routines convert the shortest decimal
representation back into the correct binary representation.
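
As a quick sanity check of that requirement, a minimal sketch assuming IEEE
double and a correctly rounding strtod: reading back the shortest form
("0.1" for the double nearest 1/10) must reproduce the exact bit pattern the
writer started from, not merely a nearby value.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main (void)
{
  /* "0.1" is the shortest decimal form of the double closest to 1/10.
     A correctly rounding input routine must map it back to exactly that
     double, i.e. the same 64 bits the output routine started from.  */
  double written = 0.1;
  double read_back = strtod ("0.1", NULL);
  printf ("bits identical: %s\n",
          memcmp (&written, &read_back, sizeof written) == 0 ? "yes" : "no");
  return 0;
}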

FWIW, rather than the Steele & White algorithm or the Burger & Dybvig one,
most real-world users seem to rely on David Gay's implementation for
performance reasons. AFAIK glibc uses code based on Gay's, and libquadmath in
turn uses code based on glibc, so maybe we can find something there. There's
also libjava/classpath/native/fdlibm/dtoa.c .

For some discussion of the issues with this kind of conversion, see e.g.
http://bugs.python.org/issue1580
