Hi!

The following testcase triggers undefined behavior (UB) in simplify_binary_operation_1: trueop1 is 2 and it is shifted left by 63. Later we shift the result back down (arithmetically) by 63 and compare against the original value, optimizing only if they match, i.e. only if trueop1 can be safely shifted up. When it can't, we don't want to trigger UB, so the following patch performs the left shift in the unsigned type, where it is well defined, followed by the implementation-defined conversion to the signed type that we rely on everywhere anyway. The mask still needs to be signed so that the right shift is arithmetic.
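To make that reasoning concrete, here is a minimal standalone sketch of the check (not the GCC sources: long long stands in for HOST_WIDE_INT, can_shift_up_safely is a name made up for the illustration, and it assumes GCC's usual arithmetic right shift and modulo conversion to signed types):

/* Standalone illustration only; see the caveats above.  */
#include <stdio.h>

static int
can_shift_up_safely (long long val, int count)
{
  /* Do the left shift in the unsigned type, where it is well defined,
     then convert back to the signed type (implementation-defined, but
     GCC relies on modulo semantics everywhere).  mask stays signed so
     that the right shift below is arithmetic.  */
  long long mask = (long long) ((unsigned long long) val << count);

  /* Safe only if shifting back down recovers the original value.  */
  return mask >> count == val;
}

int
main (void)
{
  printf ("%d\n", can_shift_up_safely (2LL, 63)); /* 0: the PR80100 case */
  printf ("%d\n", can_shift_up_safely (2LL, 3));  /* 1: 16 >> 3 == 2 */
  return 0;
}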
Bootstrapped/regtested on x86_64-linux and i686-linux, committed to trunk
as obvious.

2017-04-11  Jakub Jelinek  <ja...@redhat.com>

        PR middle-end/80100
        * simplify-rtx.c (simplify_binary_operation_1) <case IOR>: Perform
        left shift in unsigned HOST_WIDE_INT type.

        * gcc.dg/pr80100.c: New test.

--- gcc/simplify-rtx.c.jj	2017-04-11 16:09:22.003071899 +0200
+++ gcc/simplify-rtx.c	2017-04-11 16:01:44.350830295 +0200
@@ -2741,8 +2741,8 @@ simplify_binary_operation_1 (enum rtx_co
 	  && CONST_INT_P (XEXP (op0, 1))
 	  && INTVAL (XEXP (op0, 1)) < HOST_BITS_PER_WIDE_INT)
 	{
-	  int count = INTVAL (XEXP (op0, 1));
-	  HOST_WIDE_INT mask = INTVAL (trueop1) << count;
+	  int count = INTVAL (XEXP (op0, 1));
+	  HOST_WIDE_INT mask = UINTVAL (trueop1) << count;

 	  if (mask >> count == INTVAL (trueop1)
 	      && trunc_int_for_mode (mask, mode) == mask
--- gcc/testsuite/gcc.dg/pr80100.c.jj	2017-04-11 16:22:42.706047192 +0200
+++ gcc/testsuite/gcc.dg/pr80100.c	2017-04-11 16:22:29.000000000 +0200
@@ -0,0 +1,9 @@
+/* PR middle-end/80100 */
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+
+long int
+foo (long int x)
+{
+  return 2L | ((x - 1L) >> (__SIZEOF_LONG__ * __CHAR_BIT__ - 1));
+}

	Jakub