https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106904

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
     Ever confirmed|0                           |1
   Last reconfirmed|                            |2022-12-07
           Assignee|unassigned at gcc dot gnu.org      |rguenth at gcc dot gnu.org
             Status|UNCONFIRMED                 |ASSIGNED

--- Comment #5 from Richard Biener <rguenth at gcc dot gnu.org> ---
Note we diagnose

MEM <unsigned char[8]> [(char * {ref-all})vectp.4_10] = MEM <unsigned char[8]>
[(char * {ref-all})&wp];

where vectp.4_10 == &ps_5(D)->wp.hwnd;

that happens because SLP vectorization produces

  vectp.4_10 = &ps_5(D)->wp.hwnd;
  vect__1.5_11 = MEM[(int *)vectp.4_10];
  vectp.4_12 = vectp.4_10 + 4;
  vectp.4_14 = vectp.4_10 + 8;
  vect__1.7_15 = MEM[(int *)vectp.4_14];

and we then CSE the memcpy address in the following code to vectp.4_10:

  _3 = &ps_5(D)->wp;
  __builtin_memcpy (_3, &wp, 8);
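For reference, a minimal C sketch of the kind of source that can produce the
GIMPLE above (the struct names and layout here are hypothetical, modelled on
the member names in the dump; the actual testcase is the one attached to the
PR):

```c
#include <string.h>

/* Hypothetical layout: adjacent 4-byte members that SLP loads as
   vector accesses through &ps->wp.hwnd.  */
struct msg { int hwnd; int code; };
struct S { struct msg wp; };

struct msg wp;  /* global source operand of the memcpy */

void
f (struct S *ps)
{
  /* The destination &ps->wp has the same address as &ps->wp.hwnd, so
     after vectorization CSE replaces it with the vector pointer,
     confusing the access diagnostics.  */
  memcpy (&ps->wp, &wp, sizeof wp);
}
```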

The access diagnostics have the issue that they mis-interpret addresses
as more than just pointer arithmetic.  Part of this could eventually be
avoided by not introducing any non-invariant ADDR_EXPRs and instead using
POINTER_PLUS_EXPR where possible (as in the above case).  Alternatively
we could strip zero-offset components at these points.
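The zero-offset case is the one where the ADDR_EXPR carries no pointer
arithmetic at all, so stripping the component references loses nothing.  A
small C check of that invariant (struct names hypothetical, as above):

```c
#include <stddef.h>

struct msg { int hwnd; int code; };
struct S { struct msg wp; };

/* A COMPONENT_REF at offset zero adds no pointer arithmetic:
   &ps->wp and &ps->wp.hwnd denote the same address as ps itself,
   which is why such components could be stripped (or the address
   lowered to a POINTER_PLUS_EXPR with offset 0).  */
int
is_zero_offset_component (struct S *ps)
{
  return offsetof (struct S, wp) == 0
         && (void *) &ps->wp == (void *) ps
         && (void *) &ps->wp.hwnd == (void *) ps;
}
```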