http://gcc.gnu.org/bugzilla/show_bug.cgi?id=50724

Ethan Tira-Thompson <ejtttje at gmail dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |REOPENED
         Resolution|INVALID                     |

--- Comment #17 from Ethan Tira-Thompson <ejtttje at gmail dot com> 2011-10-17 04:12:31 UTC ---
Richard said:
> math.h is not part of GCC

But the point is that there is value in consistency between math.h and cmath
regardless of who owns math.h.  I'm asserting that this value is greater than
whatever is gained by 'optimizing away' the classification functions in cmath.
Inconsistency leads to confused users and therefore bugs; a missed optimization
only causes slower code.  I get that you want to make the most of -ffast-math,
and if it were a big speedup it could be worthwhile, but it seems reasonable
that if someone is serious about optimizing away their classification sanity
checks in a release build, they would be better served by using assert() or an
#ifdef instead of relying on the vagaries of -ffast-math for this purpose.
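To illustrate the assert() alternative above, here is a minimal sketch; the helper name check_finite() is hypothetical, but the mechanism (NDEBUG strips the check in release builds, explicitly and portably) is standard:

```cpp
#include <cassert>
#include <cmath>

// Sketch: gate the finiteness sanity check on NDEBUG, the standard
// assert() mechanism, instead of relying on -ffast-math to delete it.
// check_finite() is a hypothetical helper name for illustration.
double check_finite(double x) {
    assert(std::isfinite(x) && "non-finite value reached core computation");
    return x;
}
```

Building with -DNDEBUG removes the check deterministically, with no dependence on which gcc version folds classification calls.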

> The only way out I see that not breaks other users uses would be a new
> flag, like -fpreserve-ieee-fp-classification that, ontop of 
> -ffinite-math-only,

I'm not opposed to a new flag, but I'd suggest the reverse semantics. 
Disabling classification is an extra degree of non-compliance beyond ignoring
non-finite math operations.  I'd rather users add flags to become progressively
less compliant than add a flag to get some compliance back.

But to rewind a second, I want to address the "breaks other users" comment...
here is the status AFAIK:
1) Older versions of gcc (4.1, 4.2) did not apply this optimization to
classification functions.  Thus, "legacy" code expects classification to work
even in the face of -ffast-math; the behavior changed circa 4.3/4.4.
2) Removing classification 'breaks' code because it fundamentally strips
execution paths that would otherwise be taken.
3) Leaving classification in could be considered a missed optimization, but is
at worst only a performance penalty, not a change in execution values.
4) Personal conjecture: I doubt the classification routines are a performance
bottleneck in the areas where -ff-m-o is being applied, so (3) is pretty
minimal.  And I seriously doubt anyone is relying on the removal of
classification for code correctness; that doesn't make any sense.
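The kind of legacy code point (2) is concerned with looks roughly like this sketch (keep_finite() is a made-up name): a validation pass that filters non-finite samples before computation. Compiled normally it works; compiled with -ffast-math on gcc >= 4.3, the std::isfinite call may be folded to a constant and the filter silently passes NaN/Inf through:

```cpp
#include <cmath>
#include <vector>

// Illustrative legacy-style validation pass (hypothetical helper name).
// Under -ffinite-math-only on affected gcc versions, the std::isfinite
// test may be folded to true, removing this execution path entirely.
std::vector<double> keep_finite(const std::vector<double>& in) {
    std::vector<double> out;
    for (double v : in)
        if (std::isfinite(v))
            out.push_back(v);
    return out;
}
```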

Are we on the same page with these points?  So if you are concerned with the
breakage of existing code, isn't the solution to revert to the previous scope
of the -ff-m-o optimization ASAP, and then if there is a desire to extend the
finite-only optimization to classification functions, *that* would be a new
feature request, perhaps with its own flag?

> (Note that they are folded to arithmetic, !(x==x), so that transform
> has to be disabled as well, and on some architectures you might get
> library calls because of this instead of inline expansions).

I'd rather leave comparison optimizations as they are under -ff-m-o; that seems
a sensible expectation of the 'arithmetic' scope, and is a relatively
well-known, long-standing effect of -ffast-math.  It's only the five explicit
fp classification calls that I think deserve protection, to allow data
validation in a non-hacky manner before doing core computations under the
finite invariant.

Unless you are saying things like std::isnan cannot be implemented separately
from !(x==x)?  That has not been my understanding.
