https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85957

--- Comment #24 from joseph at codesourcery dot com <joseph at codesourcery dot com> ---
On Tue, 11 Feb 2020, ch3root at openwall dot com wrote:

> So, yeah, it seems integers have to be stable. OTOH, now that there is SSE
> and -fexcess-precision=standard, floating-point values are mostly stable
> too. Perhaps various optimizations done for integers could be enabled for
> FP too? Or is the situation more complicated?

Well, 0.0 == -0.0, for example, but it's not valid to substitute one for 
the other (and similarly for decimal values that compare equal but have 
different quantum exponents), so floating point certainly has different 
rules for what's valid in this area.
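
For instance (a minimal sketch in C, not from the original message; it 
assumes IEEE 754 semantics and no -ffast-math):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double pz = 0.0, nz = -0.0;

    /* The two zeros compare equal... */
    printf("pz == nz: %d\n", pz == nz);                  /* prints 1 */

    /* ...but they are observably different, so an optimizer cannot
       substitute one for the other based on ==. */
    printf("1/pz: %g  1/nz: %g\n", 1.0 / pz, 1.0 / nz);  /* inf vs -inf */
    printf("signbit: %d vs %d\n", !!signbit(pz), !!signbit(nz)); /* 0 vs 1 */
    return 0;
}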

I think fewer and fewer people care about x87 floating point nowadays; 
32-bit libraries are primarily for running old binaries, not new code.  
So x87 excess precision issues other than maybe the ABI ones for excess 
precision returns from standard library functions will become irrelevant 
in practice as people build 32-bit libraries with SSE (cf. 
<https://fedoraproject.org/wiki/Changes/Update_i686_architectural_baseline_to_include_SSE2>),
and even the ABI ones will disappear in the context of builds with SSE as 
the remaining float and double glibc libm functions with .S 
implementations (those not bound to IEEE 754 operations) move to C 
implementations, once suitably optimized C implementations prove faster 
in benchmarking.  
I'd encourage people who care about reliability with floating point on 
32-bit x86 to do that benchmarking work to justify such removals of 
x86-specific assembly.

However, if you want to fix such issues in GCC, it might be plausible to 
force the standard-conforming excess-precision handling to be always on 
for x87 floating point (except perhaps for the part relating to 
constants, since that seems to confuse users more).  There would still 
be the question of what to do with -mfpmath=sse+387.
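
As a concrete illustration of the difference (a sketch, not from this 
thread; the observed behavior depends on target and flags, e.g. -m32 
-mfpmath=387 with -fexcess-precision=fast versus =standard):

#include <stdio.h>

int main(void)
{
    /* volatile blocks constant folding, so the division happens at run
       time, possibly in an 80-bit x87 register. */
    volatile double x = 1.0, y = 3.0;

    double a = x / y;       /* may carry x87 excess precision */
    volatile double b = a;  /* the store rounds to 64-bit double */

    /* With -fexcess-precision=standard, the assignment to a already
       rounds to double, so this prints "consistent"; with =fast on
       x87 it may not. */
    printf(a == b ? "consistent\n" : "excess precision observed\n");
    return 0;
}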
