https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116398

--- Comment #18 from Richard Sandiford <rsandifo at gcc dot gnu.org> ---
Created attachment 60754
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=60754&action=edit
Proof of concept patch with hard-coded limit

I'd been reluctant to get involved in this for fear of creating friction or
being one cook too many, but: the problem in PR101523 was that, after each
successful 2->2 attempt, distribute_links would search further and further for
the new next combinable use of the i2 destination.  So rather than adding a
LUID-based limit to try_combine, how about just limiting the distribute_links
search to a certain number of instructions when i2 is unchanged?
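
For concreteness, here's a standalone toy model of the capped search.  It is
not the attached patch and doesn't use combine.cc's real data structures: the
insn list, reg_used_p and find_next_use_capped below are made-up stand-ins,
and the only detail taken from this PR is the 1200-insn budget.

    /* Toy model: walk forward from the insn whose link is being
       redistributed, but give up after LIMIT instructions instead of
       scanning to the end of the function.  */

    #include <stdbool.h>
    #include <stdio.h>

    struct insn
    {
      struct insn *next;
      unsigned int uses;	/* toy bitmask: bit R set if the insn uses reg R */
    };

    /* Stand-in for "INSN uses the i2 destination REGNO".  */
    static bool
    reg_used_p (const struct insn *insn, unsigned int regno)
    {
      return (insn->uses >> regno) & 1;
    }

    /* Find the next use of REGNO after FROM, scanning at most LIMIT insns.
       Return NULL if no use is found within the budget; the caller would
       then drop the link rather than keep searching.  */
    static struct insn *
    find_next_use_capped (struct insn *from, unsigned int regno,
			  unsigned int limit)
    {
      unsigned int scanned = 0;
      for (struct insn *insn = from->next; insn; insn = insn->next)
	{
	  if (reg_used_p (insn, regno))
	    return insn;
	  if (++scanned >= limit)
	    return NULL;	/* Budget exhausted: stop searching.  */
	}
      return NULL;
    }

    int
    main (void)
    {
      struct insn i3 = { NULL, 1u << 2 };	/* uses reg 2 */
      struct insn filler = { &i3, 0 };
      struct insn i2 = { &filler, 0 };		/* notionally sets reg 2 */

      /* With a generous budget the use in i3 is found...  */
      struct insn *use = find_next_use_capped (&i2, 2, 1200);
      printf ("%s\n", use == &i3 ? "found" : "not found");

      /* ...but a budget of 1 gives up before reaching it.  */
      use = find_next_use_capped (&i2, 2, 1);
      printf ("%s\n", use == &i3 ? "found" : "not found");
      return 0;
    }

The point is only that, once the budget runs out, the caller gives up on
finding the new use rather than scanning all the way to the end of the
function.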

The attached proof-of-concept hack does that, using the upper limit of 1200
that Jakub mentioned in comment 10.  It also includes a variant of
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523#c53 .  I tried it on
g:839bc42772ba7af66af3bd16efed4a69511312ae~ with the original testcase in
PR101523 and it reduced combine to ~3% of compilation time.  Still more than
0% of course, but nevertheless much less than before.

Does this seem like a plausible approach?  I'm going to be away for the next
couple of weeks so wouldn't be able to take it further for a while.
