https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122750

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
     Ever confirmed|0                           |1
   Last reconfirmed|                            |2025-11-18
             Status|UNCONFIRMED                 |NEW

--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
I see

  vect__3.8_24 = .MASK_LOAD (_29, 8B, loop_mask_25, { 0, ... });
  vect__4.9_23 = (vector([4,4]) int) vect__3.8_24;
  vect__5.11_20 = vect_vec_iv_.10_22 * vect__4.9_23;
  vect__6.12_19 = vect__5.11_20 * { 10, ... };
  vect_x_12.15_14 = VIEW_CONVERT_EXPR<vector([4,4]) unsigned int>(vect__6.12_19);
  vect_x_12.15_8 = VIEW_CONVERT_EXPR<vector([4,4]) unsigned int>(vect_x_16.13_15);
  vect_x_12.15_7 = .COND_ADD (loop_mask_25, vect_x_12.15_8, vect_x_12.15_14, vect_x_12.15_8);
  vect_x_12.14_31 = VIEW_CONVERT_EXPR<vector([4,4]) int>(vect_x_12.15_7);

so while we perform the reduction in an unsigned type (good!), we indeed
re-associated the multiplication.

Even .original has

    x = ((int) *(buf + (sizetype) i) * i) * 10 + x;

this is

/* Reassociate (X * CST) * Y to (X * Y) * CST.  This does not introduce
   signed overflow for CST != 0 && CST != -1.  */
(simplify
 (mult:c (mult:s@3 @0 INTEGER_CST@1) @2)
 (if (TREE_CODE (@2) != INTEGER_CST
      && single_use (@3)
      && !integer_zerop (@1) && !integer_minus_onep (@1))
  (mult (mult @0 @2) @1)))

at work, and I think it's correct and usually a good canonicalization,
but difficult to undo, I guess.

Any fix should probably handle explicit 10 * (i * buf[i]) as well.
SLSR was supposed to eventually handle such things.
