https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113080

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Last reconfirmed|                            |2023-12-19
     Ever confirmed|0                           |1
           Assignee|unassigned at gcc dot gnu.org      |rguenth at gcc dot gnu.org
             Status|UNCONFIRMED                 |ASSIGNED

--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
Confirmed.  We're considering the final value replacement of the 't' reduction
expensive.  The expression is

((b_lsm.10_8 + a_lsm.9_9) + t_10(D)) + (b_lsm.10_8 + a_lsm.9_9) * 99

and it contains a shared subtree, (b_lsm.10_8 + a_lsm.9_9), which we would
have to duplicate when materializing it as GIMPLE (gimplification fails to
immediately CSE it), so the expansion counts that subtree twice.
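For illustration, when materialized as GIMPLE without immediate CSE the shared
addition is emitted twice; the SSA names below are made up:

  _1 = b_lsm.10_8 + a_lsm.9_9;
  _2 = _1 + t_10(D);
  _3 = b_lsm.10_8 + a_lsm.9_9;   // duplicate of _1
  _4 = _3 * 99;
  _5 = _2 + _4;

Immediate CSE would reuse _1 instead of emitting _3, leaving four statements
instead of five.  The check that classifies the expression as expensive is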

bool
expression_expensive_p (tree expr, bool *cond_overflow_p)
{
  hash_map<tree, uint64_t> cache;
  uint64_t expanded_size = 0;
  *cond_overflow_p = false;
  return (expression_expensive_p (expr, cond_overflow_p, cache, expanded_size)
          || expanded_size > cache.elements ());
}

where expanded_size ends up being 5 but cache.elements () is 4 (the expanded
size grows past the number of distinct cached trees because of the unsharing
of the shared subtree).
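
To make the counting concrete, here is a minimal standalone sketch (not GCC
code), under the assumption that the cache holds each distinct operation once
while the expanded size counts every occurrence, so the shared addition is
counted twice:

#include <cstdint>
#include <cstdio>
#include <unordered_set>

/* A toy expression node: op is "+", "*" or a null pointer for a leaf.  */
struct expr
{
  const char *op;
  const expr *lhs;
  const expr *rhs;
};

/* Size of the fully expanded expression: shared subtrees are counted
   once per occurrence, like the duplicated materialization above.  */
static uint64_t
expanded_ops (const expr *e)
{
  if (!e->op)
    return 0;
  return 1 + expanded_ops (e->lhs) + expanded_ops (e->rhs);
}

/* Record each distinct operation node once, like a visited-node cache.  */
static void
distinct_ops (const expr *e, std::unordered_set<const expr *> &cache)
{
  if (!e->op || !cache.insert (e).second)
    return;
  distinct_ops (e->lhs, cache);
  distinct_ops (e->rhs, cache);
}

int
main ()
{
  expr a = { }, b = { }, t = { }, c99 = { };   /* leaves */
  expr ab = { "+", &b, &a };       /* the shared b_lsm + a_lsm subtree */
  expr abt = { "+", &ab, &t };     /* (b + a) + t */
  expr mul = { "*", &ab, &c99 };   /* (b + a) * 99, reusing the same node */
  expr top = { "+", &abt, &mul };  /* ((b + a) + t) + (b + a) * 99 */

  std::unordered_set<const expr *> cache;
  distinct_ops (&top, cache);

  /* Prints "expanded 5, cached 4": the shared addition is expanded twice
     but cached only once, so expanded_size > cache.elements ().  */
  std::printf ("expanded %llu, cached %zu\n",
               (unsigned long long) expanded_ops (&top), cache.size ());
  return 0;
}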

Allowing a little bit of unsharing fixes this.  Even better would, of course,
be an unsharing mechanism that "saves" the shared parts to a gimple sequence
(we need to unshare because gimplification is destructive and the SCEV result
contains trees that may be part of the SCEV cache, which we must not alter).
