https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110503

--- Comment #3 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
(In reply to Andrew Pinski from comment #2)
> 
> Oh and had:
>   # RANGE [irange] int [-128, 127]
>   _10 = (intD.6) _9;
>   # RANGE [irange] int [0, 1] NONZERO 0x1
>   _11 = 1 % _10;
> 
> I wonder if we could optimize `1 % b` into just `b != 1` (since 1 % 0 is
> undefined) which will further reduce things here.

That does not change the size of the loop, though, and we are still left with:
  size:   1 _10 = _9 == 1;
  size:   0 _11 = (unsigned int) _10;
  size:   1 _12 = -_11;
  size:   2 if (_12 > 2)

But at least now we only need to optimize the above down to `if (_9 == 1)`.
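The equivalence can be sanity-checked in plain C. Here `chain` mirrors the four GIMPLE statements above (the C variable names simply echo the SSA names; this is a sketch, not the compiler's representation), and `folded` is the desired result:

```c
#include <assert.h>

/* Mirror of the statement chain from the dump, assuming a signed
   char input as suggested by _9's RANGE [-128, 127].  */
int chain(signed char v9)
{
  int v10 = v9 == 1;                  /* _10 = _9 == 1;            */
  unsigned int v11 = (unsigned) v10;  /* _11 = (unsigned int) _10; */
  unsigned int v12 = -v11;            /* _12 = -_11; 0 or all-ones */
  return v12 > 2;                     /* if (_12 > 2)              */
}

/* The fold we want: the whole chain is just the comparison.  */
int folded(signed char v9)
{
  return v9 == 1;
}
```

Since `v11` is 0 or 1, `-v11` is 0 or `UINT_MAX`, so `v12 > 2` fires exactly when `v9 == 1`.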

Something like:
(simplify
 (gt (negate zero_one_valued_p@0) INTEGER_CST@1)
 (if (TYPE_UNSIGNED (TREE_TYPE (@1)) && wi::to_wide (@1) != -1)
  (ne @0 { build_zero_cst (TREE_TYPE (@0)); })
  (if (!TYPE_UNSIGNED (TREE_TYPE (@1)) && wi::to_wide (@1) == -1)
   (eq @0 { build_zero_cst (TREE_TYPE (@0)); }))))
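To pin down which constants make the fold valid, here is a quick C sanity check of `-x > C` for a zero/one-valued x (the function names are invented for illustration): in an unsigned type the comparison is equivalent to `x != 0` for every C except the all-ones maximum, while in a signed type it collapses to `x == 0` only when C is -1 (it is always false for C >= 0 and always true for C <= -2):

```c
#include <limits.h>

/* x is assumed to be zero_one_valued: only 0 or 1.  */
int lhs_unsigned(unsigned x, unsigned c) { return -x > c; }  /* -x is 0 or UINT_MAX */
int lhs_signed(int x, int c)             { return -x > c; }  /* -x is 0 or -1       */
```

Exhaustively over x in {0, 1}: `lhs_unsigned(x, c)` matches `x != 0` whenever `c != UINT_MAX`, and `lhs_signed(x, -1)` matches `x == 0`.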

But I get the feeling this should be done in VRP instead of match.pd ...
