On 10/6/19 5:14 PM, Thomas Koenig wrote:

> Hi Tamar,
>
>> In general our approach is to identify areas for improvement in a
>> benchmark and provide a testcase that's independent of the benchmark
>> when reporting it in a PR upstream.
>
> Sounds like a good approach, in principle.
>
> If the people who are doing the identifying know Fortran well, that
> would work even better (do they?), and if they could be persuaded to
> work on gfortran directly, that would probably be best.

Of the 7 benchmarks that are (partly) written in Fortran, Cactus is free software (LGPL'd), and the 3 geophysical ones (wrf, cam4 and roms) are "obtainable" (you need to register to get the source code). Of course, that means you get "a" version of the code, not necessarily the one in the SPEC benchmark, but at least it enables us to join in the analysis.

exchange2 was written by Michael Metcalf, author of countless Fortran books, whom I met once (when I was on the Fortran Standardization Committee). He might be persuaded to give us a copy of the code for analysis if this really is an outlier in performance.

Kind regards,

--
Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news
