https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96930
Jakub Jelinek <jakub at gcc dot gnu.org> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org
--- Comment #2 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
The testcase is already optimized into return a >> b;, and GCC has done that
since at least GCC 4.4.
So it is unclear why this has been reported and what difference you found.
That said, given:
unsigned
foo (unsigned a, unsigned b)
{
return a / (unsigned long long) (1U << b);
}
unsigned
bar (unsigned a, unsigned b)
{
return a / (1U << b);
}
unsigned
baz (unsigned a, unsigned b)
{
unsigned long long c = 1U << b;
return a / c;
}
I see that while we optimize foo and bar into a >> b, through the
simplification documented as:
/* (A / (1 << B)) -> (A >> B).
Only for unsigned A. For signed A, this would not preserve rounding
toward zero.
For example: (-1 / ( 1 << B)) != -1 >> B.
Also handle widening conversions, like:
(A / (unsigned long long) (1U << B)) -> (A >> B)
or
(A / (unsigned long long) (1 << B)) -> (A >> B).
If the left shift is signed, it can be done only if the upper bits
of A starting from shift's type sign bit are zero, as
(unsigned long long) (1 << 31) is -2147483648ULL, not 2147483648ULL,
so it is valid only if A >> 31 is zero. */
but for baz we actually perform the shift in the wider mode unnecessarily,
because both operands are zero-extended from 32 bits.
Given:
unsigned
qux (unsigned a, unsigned b)
{
unsigned long long c = a;
unsigned long long d = b;
return c / d;
}
unsigned
corge (unsigned a, unsigned b)
{
return ((unsigned long long) a) / (unsigned long long) b;
}
we optimize only corge into return a / b; and not qux, so some fold-const
optimization is not performed on GIMPLE.