https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116398

--- Comment #22 from Segher Boessenkool <segher at gcc dot gnu.org> ---
(In reply to Richard Sandiford from comment #18)
> I'd been reluctant to get involved in this for fear of creating friction or
> being a cook too many,

No, your input is much appreciated!

> but: the problem in PR101523 was that, after each
> successful 2->2 attempt, distribute_links would search further and further
> for the new next combinable use of the i2 destination.

And the way it searches is linear in time, yeah.  Making that better would
require keeping some extra data structure up to date throughout combine,
which is not likely to be a great success.  Or do you see some other
possibility?
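
For context, the expensive walk has roughly this shape (a simplified
paraphrase of the distribute_links scan, not the exact code; the real loop
also stops at basic block boundaries and handles more cases):

    /* Walk forward from the dead link's insn looking for the next
       insn that mentions the register.  Simplified sketch.  */
    for (insn = NEXT_INSN (start); insn; insn = NEXT_INSN (insn))
      if (NONDEBUG_INSN_P (insn)
          && reg_overlap_mentioned_p (reg, PATTERN (insn)))
        break;  /* Found the next use; move the LOG_LINK here.  */

Each successful 2->2 combination can push the next use further away, so
repeated walks like this go quadratic over the whole pass.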

> So rather than
> adding a LUID-based limit to try_combine, how about just limiting the
> distribute_links to a certain number of instructions when i2 is unchanged?

That does not sound too bad.  Heck, even limiting it *always* could be done!
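
A minimal sketch of what such a limit could look like (the constant name,
its placement, and the give-up behaviour are illustrative only, not the
actual patch):

    /* Hypothetical cap on the forward scan; 1200 matches the figure
       Richard mentions below.  */
    #define MAX_DISTRIBUTE_SEARCH 1200

    int budget = MAX_DISTRIBUTE_SEARCH;
    for (insn = NEXT_INSN (start); insn && budget-- > 0;
         insn = NEXT_INSN (insn))
      if (NONDEBUG_INSN_P (insn)
          && reg_overlap_mentioned_p (reg, PATTERN (insn)))
        break;

    /* If the budget runs out, drop the link instead of scanning on:
       we might miss a combination, but compile time stays bounded.  */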

> The attached proof-of-concept hack does that using the upper limit of 1200
> that Jakub mentioned in comment 10.  It also includes a variant of
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523#c53 .  I tried it on
> g:839bc42772ba7af66af3bd16efed4a69511312ae~ for the original testcase in
> PR101523 and it reduced combine to ~3% of compilation time.  Still more
> than 0% of course, but nevertheless much less than before.

3% on a testcase that was meant to expose something closer to 100% isn't so
bad ;-)

> Does this seem like a plausible approach?  I'm going to be away for the next
> couple of weeks so wouldn't be able to take it further for a while.

Will you still have time to post the patch today?  If not, I'll pick it up.

Thanks!
