https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83518

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
           Assignee|unassigned at gcc dot gnu.org      |rguenth at gcc dot gnu.org

--- Comment #7 from Richard Biener <rguenth at gcc dot gnu.org> ---
Store-merging now merges

  arr[0] = 3;
  arr[1] = 2;
  arr[2] = 1;
  arr[3] = 5;
  vect__2.9_44 = MEM <vector(4) int> [(int *)&arr];

into

  MEM <unsigned long> [(int *)&arr] = 8589934595;
  MEM <unsigned long> [(int *)&arr + 8B] = 21474836481;
  vect__2.9_44 = MEM <vector(4) int> [(int *)&arr];

but that wouldn't help VN either.  We can brute-force this for vector
and complex loads: just look up each component separately and combine
the results.  That is expensive, though, since lookup order doesn't
necessarily match stmt order, so we cannot avoid redundant walks.
Handling this from within vn_reference_lookup_3 would be possible if
we recursively walk from there, filling up pieces.  It's still going
to be somewhat awkward to support in full generality, I guess
(gathering bitfield writes, for example).
