https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107541
--- Comment #4 from Aldy Hernandez <aldyh at gcc dot gnu.org> ---
This is an issue with the TRUNC_DIV_EXPR range-op entry optimizing divisions by
powers of 2 into right shifts: we're right shifting the nonzero-bits mask by the
shift amount. But truncating division rounds toward zero while an arithmetic
right shift rounds toward negative infinity, so for negative dividends the
shifted mask can clear low bits that the true quotient still needs.
operator_div::fold_range():
  ...
  ...
  tree t;
  if (rh.singleton_p (&t))
    {
      wide_int wi = wi::to_wide (t);
      int shift = wi::exact_log2 (wi);
      if (shift != -1)
	{
	  wide_int nz = lh.get_nonzero_bits ();
	  nz = wi::rshift (nz, shift, TYPE_SIGN (type));
	  r.set_nonzero_bits (nz);
	}
    }
The operands are:
[irange] int [-256, -255] NONZERO 0xffffff01
[irange] int [8, 8] NONZERO 0x8
Result before optimization:
[irange] int [-32, -31] NONZERO 0xffffffe1
Result after the optimization:
[irange] int [-32, -31] NONZERO 0xffffffe0
I'll take a look.