https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82567
Thomas Koenig <tkoenig at gcc dot gnu.org> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |tkoenig at gcc dot gnu.org
          Component|fortran                     |middle-end
   Target Milestone|---                         |7.3
--- Comment #3 from Thomas Koenig <tkoenig at gcc dot gnu.org> ---
Some time reports:
$ gfortran -ffrontend-optimize -ftime-report pr82567.f90
Execution times (seconds)
 phase parsing          :   0.18 ( 5%) usr   0.01 ( 8%) sys   0.19 ( 5%) wall    5542 kB (11%) ggc
 phase opt and generate :   3.55 (95%) usr   0.10 (83%) sys   3.66 (95%) wall   43204 kB (88%) ggc
$ gfortran -O -ftime-report pr82567.f90
Execution times (seconds)
 phase parsing          :   0.20 ( 1%) usr   0.01 (11%) sys   0.21 ( 1%) wall    5543 kB (15%) ggc
 phase opt and generate :  26.96 (99%) usr   0.08 (89%) sys  27.07 (99%) wall   30881 kB (84%) ggc
The problem seems to be that we generate a large number of statements like
  (*(real(kind=4)[10000] * restrict) atmp.1.data)[0] = NON_LVALUE_EXPR <__var_1_constr>;
  (*(real(kind=4)[10000] * restrict) atmp.1.data)[1] = __var_1_constr * 2.0e+0;
  (*(real(kind=4)[10000] * restrict) atmp.1.data)[2] = __var_1_constr * 3.0e+0;
  (*(real(kind=4)[10000] * restrict) atmp.1.data)[3] = __var_1_constr * 4.0e+0;
for the test case as written below.
For the other versions, we actually do not convert the array descriptor.
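The actual reproducer referred to above is not quoted in this comment. As a rough illustration only (a guess based on the `__var_1_constr * 2.0e+0`, `* 3.0e+0`, ... stores in the dump, not the real test case), the kind of constructor that makes the front end emit one store per element looks like:

```fortran
program sketch
  implicit none
  integer :: i
  real :: dx            ! a run-time value; hoisted by the front end
                        ! as something like __var_1_constr
  real :: x(10000)
  dx = 0.5
  ! Implied-DO array constructor: with -ffrontend-optimize the front end
  ! can expand this into 10000 individual assignments to a temporary
  ! (atmp.1.data in the dump), which the middle end then has to chew on.
  x = [ (real(i)*dx, i = 1, 10000) ]
  print *, x(1), x(10000)
end program sketch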