On 6/2/2014 3:00 AM, Andrew Pinski wrote:
On Sun, Jun 1, 2014 at 11:09 PM, Janne Blomqvist
<blomqvist.ja...@gmail.com> wrote:
On Sun, Jun 1, 2014 at 9:52 AM, Mike Izbicki <mike.izbi...@gmail.com> wrote:
I'm trying to copy gcc's behavior with the -ffast-math compiler flag
into Haskell's GHC compiler. The only documentation I can find about
it is at:
https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
I understand how floating-point operations work and have come up with
a reasonable list of optimizations to perform, but I doubt it is
exhaustive.
My question is: where can I find all the gory details about what gcc
will do with this flag? I'm perfectly willing to look at source code
if that's what it takes.
In addition to the official documentation, a nice overview is at
https://gcc.gnu.org/wiki/FloatingPointMath
Useful, thanks for the pointer.
Though for the gory details and authoritative answers I suppose you'd
have to look into the source code.
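
(A note for concreteness: -ffast-math is an umbrella flag. Around the
time of this thread it set -fno-math-errno, -funsafe-math-optimizations,
-ffinite-math-only, -fno-rounding-math, -fno-signaling-nans and
-fcx-limited-range, and -funsafe-math-optimizations in turn enabled
-fno-signed-zeros, -fno-trapping-math, -fassociative-math and
-freciprocal-math. The sketch below is my own illustration, not GCC's
implementation; it only shows the flavor of rewrites those sub-flags
license.)

#include <math.h>

/* A rough sketch of the kind of rewrites -ffast-math's sub-flags allow;
   not GCC's actual implementation, just the flavor of it. */
double fast_math_flavor(double x, double y, const double a[], int n)
{
    /* -ffinite-math-only: the compiler may assume no NaNs or infinities,
       so a check like x != x (or isnan(x)) can fold to "false" and the
       branch disappears entirely. */
    if (x != x)
        return 0.0;

    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        /* -freciprocal-math: a[i] / y may become a[i] * (1.0 / y), with
           the reciprocal hoisted out of the loop.
           -fassociative-math: the additions may be reordered, e.g. into
           several partial sums for vectorization, changing the rounding. */
        sum += a[i] / y;
    }

    /* -fno-math-errno: sqrt() need not set errno, so it can be emitted
       as a single hardware instruction instead of a libm call. */
    return sqrt(sum);
}
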
Also, are there any optimizations that you wish -ffast-math could
perform, but that for various architectural reasons don't fit into
gcc?
There is, of course, a (nearly endless?) list of optimizations that
could be done but aren't (lack of manpower, impracticality, whatnot).
I'm not sure there are any interesting optimizations that would depend
on loosening -ffast-math further?
I find it difficult to remember how to reconcile the differing
treatments by gcc and gfortran under -ffast-math, in particular with
respect to -fprotect-parens and -freciprocal-math. The latter appears
to comply with the Fortran standard.
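
(As I understand it, the Fortran standard lets the processor substitute
any mathematically equivalent expression, which arguably covers the
x/y to x*(1/y) rewrite of -freciprocal-math, but requires the integrity
of parentheses to be respected; gfortran's -fprotect-parens, on by
default, preserves that even when reassociation is otherwise enabled,
while C under -fassociative-math has no such protection. A small
illustration of why the written grouping matters, my own example rather
than anything from the thread: Kahan compensated summation, whose
correction term is algebraically zero and survives only if the
parentheses are honored.)

/* Kahan compensated summation: correctness depends on the compiler
   honoring the written grouping.  Under reassociation the correction
   term (t - sum) - y simplifies algebraically to 0.0, which silently
   turns this back into naive summation. */
double kahan_sum(const double a[], int n)
{
    double sum = 0.0;
    double c = 0.0;                /* running compensation (lost low bits) */
    for (int i = 0; i < n; i++) {
        double y = a[i] - c;
        double t = sum + y;
        c = (t - sum) - y;         /* nonzero only because FP rounds */
        sum = t;
    }
    return sum;
}
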
(One thing I wish wouldn't be included in -ffast-math is
-fcx-limited-range; the naive complex division algorithm can easily
lead to comically poor results.)
Which is kinda interesting, because the Google folks have tried a few
times now to turn on -fcx-limited-range for C++.
Intel tried to add -complex-limited-range as a default under -fp-model
fast=1 but that was shown to be unsatisfactory.
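
(An illustration of that failure mode, my own example rather than
Intel's or the thread's: the limited-range options permit the textbook
complex-division formula without the scaling that Annex G style
division otherwise applies, so operands that are individually large or
tiny give garbage even when the true quotient is ordinary.)

#include <complex.h>
#include <stdio.h>

int main(void)
{
    /* Both operands are huge, but the true quotient is simply 1 + 0i. */
    double complex x = 1e200 + 1e200 * I;
    double complex y = 1e200 + 1e200 * I;

    /* Textbook formula, which the limited-range options permit:
       (a+bi)/(c+di) = ((a*c + b*d) + (b*c - a*d)i) / (c*c + d*d).
       Here c*c + d*d overflows to +inf, so the quotient degenerates
       to NaN even though the exact result is well within range. */
    double a = creal(x), b = cimag(x), c = creal(y), d = cimag(y);
    double denom = c * c + d * d;
    double complex naive = (a * c + b * d) / denom
                         + ((b * c - a * d) / denom) * I;

    printf("textbook formula: %g %+gi\n", creal(naive), cimag(naive));
    printf("default division: %g %+gi\n", creal(x / y), cimag(x / y));
    return 0;
}

Built without -fcx-limited-range, the x / y line should go through the
runtime's scaled division and print 1 +0i, while the hand-written
textbook formula prints NaNs.
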
Now, with the introduction of omp simd directives and pragmas, we have
disagreement among various compilers on the relative roles of the
directives and the fast-math options.
I've submitted PR60117 hoping to get some insight on whether omp simd
should disable optimizations otherwise performed by -ffast-math.
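
(As I read it, the tension is roughly this; the loop below is my own
illustration, not the PR's test case. Vectorizing the reduction
reorders the additions, which is exactly the kind of transformation
that strict floating-point semantics, i.e. no -fassociative-math,
otherwise forbid, so a compiler has to decide whether the directive
locally licenses it or whether the command-line math options still
govern.)

/* Illustrative only; not the test case from the PR.  Vectorizing this
   reduction reorders the additions into partial sums, which is exactly
   what strict FP semantics (no -fassociative-math) would otherwise
   forbid.  Should the directive alone license that, or should the
   command-line math options still govern? */
double dot(const double *a, const double *b, int n)
{
    double sum = 0.0;
    #pragma omp simd reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

With gcc such a loop would be built with -fopenmp, or with
-fopenmp-simd in 4.9 and later.
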
Intel made the directives override the command-line fast (or "no-fast")
settings locally, so that complex-limited-range might be in effect
inside the scope of the directive whether or not you want it. They have
made changes in the current beta compiler, so it's no longer practical
to set standard-compliant options on the command line and then discard
them by pragma in individual for loops.
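
(On the gcc side, the closest analogue I know of to that usage pattern,
strict options on the command line with relaxed math only in chosen hot
spots, is per-function rather than per-loop, via the optimize attribute
or #pragma GCC optimize. A hypothetical sketch, with the function name
my own:)

/* Hypothetical sketch: the rest of the file is compiled with strict,
   standard-compliant options; only this routine opts into fast-math.
   The attribute is GCC-specific and per-function, so it is coarser
   than the per-loop control discussed above. */
__attribute__((optimize("fast-math")))
double norm_squared_fast(const double *a, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * a[i];
    return sum;
}
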
--
Tim Prince