https://gcc.gnu.org/bugzilla/show_bug.cgi?id=38785

--- Comment #50 from Jorn Wolfgang Rennecke <amylaar at gcc dot gnu.org> ---
It certainly is the case that the merit of an optimization often cannot be
evaluated until further optimization passes have run.  In fact, for an assembly
programmer, evaluating potential alternative code transformations, selecting
the most suitable one, or backtracking altogether, is a common modus
operandi.
Where PRE creates a lot of new phi nodes in the hope that there will
subsequently be a commensurate pay-off, that pay-off should be evaluated at a
later point down the chain of optimization passes, either on a per-function or
on a per-SESE-region basis.
In obvious cases, it might be enough to see a certain number of deletions of
code / phi nodes relative to the phi nodes previously created, or an overall
cost decrease for the function / SESE region, while in more complicated cases
(or just because you choose a higher optimization level), you want to actually
compare the code with and without the aggressive PRE optimization, or compare
various levels of aggressiveness of PRE optimizations.
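As a rough illustration of what the check for the "obvious cases" might look
like, consider the following predicate; all of the names, counters and the
threshold here are made up for the sake of the example and are not existing
GCC interfaces:

/* Hypothetical bookkeeping: PHIS_CREATED is the number of phi nodes PRE
   speculatively inserted for this function / SESE region, PHIS_DELETED the
   number of those that later passes managed to remove again, and COST_DELTA
   the estimated change in overall cost (negative means cheaper).  */
static bool
pre_speculation_paid_off (unsigned phis_created, unsigned phis_deleted,
                          int cost_delta)
{
  /* Nothing was speculated, so there is nothing to regret.  */
  if (phis_created == 0)
    return true;

  /* Obvious win: most of the speculatively created phis are gone again,
     or the region got cheaper overall.  The 3/4 ratio is arbitrary.  */
  return 4 * phis_deleted >= 3 * phis_created || cost_delta < 0;
}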
We have long limited GCC to following a static pass ordering and making
decisions one at a time, never to be reconsidered, but at best undone by a
subsequent pass, if that is still possible and deemed suitable at that later
point.
As long as we don't allow GCC to consider alternative transformations, and to
backtrack, it will forever be limited.
I wonder if people would consider using an operating-system-dependent
operation - namely fork - to get the ball rolling.  I am aware that we'd
eventually need a further pointer abstraction for cross-pass persistent memory
to support compiler instance duplication on systems that can't fork,
and with GTY and C++ copy constructors we should be half-way there, but I think
we should first explore what we can do with compiler instance duplication on
systems where we can have it essentially for free.
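To make the fork idea a bit more concrete, here is a minimal sketch of the
pattern I have in mind, using only POSIX fork / pipe / waitpid;
apply_strategy_and_estimate_cost () is a hypothetical placeholder for "redo
the speculative transformation at a given aggressiveness, run the remaining
passes on the copied instance, and return a cost estimate", not an existing
GCC entry point:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>

/* Hypothetical placeholder: transform the current function with the given
   PRE aggressiveness, run the remaining passes, and return an estimated
   cost - lower is better.  */
static long
apply_strategy_and_estimate_cost (int aggressiveness)
{
  return 100 - aggressiveness;	/* Stub body, for illustration only.  */
}

/* Try each candidate strategy in a forked child.  The child inherits a
   private copy of all compiler state, so no explicit undo is needed; it
   reports its cost estimate through a pipe and exits.  The parent, whose
   own state stays untouched, picks the cheapest strategy and can then
   commit to it for real.  */
static int
choose_best_strategy (const std::vector<int> &strategies)
{
  int best = strategies.front ();
  long best_cost = -1;

  for (int s : strategies)
    {
      int fd[2];
      if (pipe (fd) != 0)
        continue;

      pid_t pid = fork ();
      if (pid == 0)
        {
          /* Child: speculate freely on the copied compiler instance.  */
          close (fd[0]);
          long cost = apply_strategy_and_estimate_cost (s);
          ssize_t n = write (fd[1], &cost, sizeof cost);
          _exit (n == (ssize_t) sizeof cost ? 0 : 1);
        }

      /* Parent: collect the child's verdict.  */
      close (fd[1]);
      long cost;
      if (pid > 0
          && read (fd[0], &cost, sizeof cost) == (ssize_t) sizeof cost
          && (best_cost < 0 || cost < best_cost))
        {
          best_cost = cost;
          best = s;
        }
      close (fd[0]);
      if (pid > 0)
        waitpid (pid, NULL, 0);
    }

  return best;
}

The obvious price is that every candidate re-runs the remaining passes, so
this only pays off where the question cannot be answered cheaply and locally.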
