https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114746

--- Comment #4 from Vincent Lefèvre <vincent-gcc at vinc17 dot net> ---
I actually find it more confusing that constants are not evaluated in
extended precision while everything else is.
The real solution to avoid confusion would be to change the behavior so that
FLT_EVAL_METHOD = 0 by default. If users see an effect on performance (which
may not be the case for applications that do not use floating-point types very
much), they could still use an option to revert to FLT_EVAL_METHOD = 2 (when
SSE is not available); in that case, they would be aware of the consequences
and would no longer be confused by the results.
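
As a sketch of the difference (my example, not from the report; I assume an
x86-32 target, where -mfpmath=387 gives FLT_EVAL_METHOD == 2 and
-msse2 -mfpmath=sse gives FLT_EVAL_METHOD == 0):

#include <stdio.h>
#include <float.h>

int main (void)
{
  volatile double d = 3.0;  /* volatile: force a run-time division */
  double r = 1.0 / d;       /* rounded to double on the assignment
                               (with -fexcess-precision=standard) */

  printf ("FLT_EVAL_METHOD = %d\n", (int) FLT_EVAL_METHOD);
  /* With FLT_EVAL_METHOD == 2, the division below is done in extended
     precision and so is the comparison, hence this can print 0.
     With FLT_EVAL_METHOD == 0, it prints 1. */
  printf ("%d\n", r == 1.0 / d);
  return 0;
}

The "same" computation compares unequal to itself only because one copy went
through a double variable and the other kept its excess precision.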

But in addition to the confusion, the current behavior raises an accuracy
issue.
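
For instance (again my example, with values chosen for illustration), with
FLT_EVAL_METHOD = 2 a result is rounded first to extended precision and then
to double when stored, and this double rounding can be off by one ulp compared
to the correctly rounded double result one gets with FLT_EVAL_METHOD = 0:

#include <stdio.h>

int main (void)
{
  volatile double a = 1.0;
  volatile double b = 0x1p-53 + 0x1p-64;  /* exactly representable */
  double r = a + b;

  /* Exact sum: 1 + 2^-53 + 2^-64.
     FLT_EVAL_METHOD == 0: rounded directly to double, giving
       0x1.0000000000001p+0 (i.e. 1 + 2^-52).
     FLT_EVAL_METHOD == 2: first rounded to extended precision, where
       the tie goes to even, giving 1 + 2^-53; storing that to double
       ties to even again, giving 0x1p+0.  The double rounding has
       lost one ulp. */
  printf ("r = %a\n", r);
  return 0;
}

This kind of double rounding cannot occur with FLT_EVAL_METHOD = 0.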
