http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48906

--- Comment #34 from Jerry DeLisle <jvdelisle at gcc dot gnu.org> 2011-06-10 16:22:20 UTC ---
Additional note:  The standard states:

"Let N be the magnitude of the internal value"

The internal value, not the external decimal string, is what is to be used to determine whether F formatting applies. I think this supports my point.

I wonder whether the standards committee knew that the thresholds it selected do not have exact binary representations, so that the internal values will always lie strictly above or below them.
