http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40766

Dominique d'Humieres <dominiq at lps dot ens.fr> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|WAITING                     |NEW
          Component|fortran                     |tree-optimization

--- Comment #24 from Dominique d'Humieres <dominiq at lps dot ens.fr> ---
On an Intel(R) Xeon(R) CPU E5640 @ 2.67GHz under
Linux version 2.6.43.8-1.fc15.x86_64 (mockbu...@x86-02.phx2.fedoraproject.org)
(gcc version 4.6.3 20120306 (Red Hat 4.6.3-2) (GCC) ) #1 SMP Mon Jun 4 20:33:44
UTC 2012, the timing for the test in comment #0 compiled with gfortran 4.6.3 is

[delta] test/fortran$ time a.out
  4.28173363E+09
1441.326u 0.035s 24:04.84 99.7%    0+0k 0+0io 0pf+0w

Note that a real(8) version gives

[delta] test/fortran$ time a.out
   696899672.37568963     
184.465u 0.067s 3:05.04 99.7%    0+0k 400+0io 3pf+0w

without optimization,

   696899672.37569129     
131.957u 0.104s 2:12.39 99.7%    0+0k 0+0io 0pf+0w

with -O3 and

   696899672.37550783     
136.051u 0.066s 2:16.43 99.7%    0+0k 0+0io 0pf+0w

with -O3 -ffast-math. I don't have access to more recent Fedora and/or GCC
versions, so I don't know whether the miserable performance of some Linux
transcendental float routines has finally been fixed (the accuracy argument is
ridiculous: what is the point of getting the 23rd bit exact when you can get
more than 50 bits for ten times less time?). On this count, this PR should be
closed as INVALID.

However, I have found strange behavior with optimization when repeating the
test on a 2.5 GHz Core 2 Duo under x86_64-apple-darwin10:

-O0             76.1s
-O0 -ftree-ter  62.5s
-O1            103.0s
-O2            112.9s
-O3            112.9s
-Ofast         115.2s

Can someone explain what's going on?
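For readers without comment #0 at hand, the shape of the workload matters for
interpreting the timings above. The actual test is the one in comment #0 and is
not reproduced here; a hypothetical sketch of a transcendental-heavy loop of
the same general kind (program name and iteration count are my own invention)
would be:

```fortran
! Hypothetical sketch only -- NOT the benchmark from comment #0.
! A loop like this spends essentially all its time in the libm/libgfortran
! transcendental routines, which is the code path whose single- vs.
! double-precision timings are quoted above.
program transcendental_bench
  implicit none
  real(8)    :: s          ! change to real(4) for the single-precision run
  integer(8) :: i
  s = 0.0d0
  do i = 1_8, 100000000_8
     s = s + sin(real(i, 8))
  end do
  print *, s               ! print the sum so the loop cannot be optimized away
end program transcendental_bench
```

Timed the same way as above, e.g. "gfortran -O3 bench.f90 && time ./a.out",
varying only the optimization flags and the real kind.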
