https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80281
Jakub Jelinek <jakub at gcc dot gnu.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |jakub at gcc dot gnu.org
--- Comment #3 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
The bug was merely latent before that.
Let's change the testcase to:
int
main ()
{
  volatile int a = 0;
  long long b = 2147483648LL;
  int c = a % 2;
  int x = ((int) -b + c) % -2147483647;
  if (x != -1)
    __builtin_abort ();
  return 0;
}
so that it works also on ILP32 without triggering UB.
Already in the *.original dump the expression looks wrong:
int x = (c - (int) b) % 2147483647;
which is incorrect: originally the negation was performed in the wider type,
where it didn't trigger UB, but in the narrower type it does.
In the original testcase we get -2147483648LL converted to int -2147483648, plus 0.
But in what we have in *.original, 2147483648LL is converted to -2147483648
and we compute 0 - (-2147483648), which is UB.
So, if we want to transform (int) -b + c into c - (int) b, we actually need to
do the subtraction in unsigned int type and convert back (until we have
separate wrapping signed arithmetic opcodes in GIMPLE, if ever).
Thus it should be int x = (int) ((unsigned) c - (unsigned) b) % -2147483647;