On Thu, 26 Nov 2020, Jakub Jelinek wrote:
> Hi!
>
> My recent wide_int_binop changes caused an ICE on this testcase.
> The problem is that a shift whose amount has the MSB set now fails
> to optimize into a constant (IMHO we should treat all out-of-bounds
> shifts the same later), but there is precedent for that already -
> e.g. division by zero fails to optimize into a constant too.  I
> think it is better if path isolation
> checks for these UBs and does something the user chooses (__builtin_trap vs.
> __builtin_unreachable, and either a deferred warning about the UB or
> nothing).
> This patch just doesn't optimize if int_const_binop failed.
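For reference, a minimal illustration of that parallel (my own
example, not part of the patch): neither statement folds to a
constant, and both invoke UB that path isolation could later turn
into __builtin_trap or __builtin_unreachable as the user chooses:

  int div0 (void) { return 1 / 0; }    /* division by zero: kept, not folded */
  int oob (int x) { return x << -3; }  /* OOB shift: now likewise kept */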
Btw, while looking at this I noticed that int_const_binop loses the
sign of arg2 when folding a signed shift by an unsigned shift amount,
and thus these out-of-bounds shifts get interpreted as signed
by wide_int_binop even though they are not.  That said,
  case RSHIFT_EXPR:
    if (wi::neg_p (arg2))
      return false;
will, for wide_int, always return true for values with the MSB set
(we implicitly pass SIGNED as the signop argument here).  That
will now yield "inconsistent" behavior for X >> 1000 vs.
X >> -1u, for example.  The old behavior wasn't really better
in principle, but it was consistent in the final result.  A fix would
be to pass in an explicit sign for the second argument.
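E.g. an untested sketch, with arg2_sgn a hypothetical new parameter
that the caller would compute from TYPE_SIGN (TREE_TYPE (arg2))
before the operands are stripped down to wide_ints:

  case RSHIFT_EXPR:
    /* With the sign passed explicitly, an unsigned shift amount with
       the MSB set is no longer mistaken for a negative one and can
       instead be rejected by the out-of-bounds check against the
       precision.  */
    if (wi::neg_p (arg2, arg2_sgn))
      return false;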
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
OK.
Richard.
> 2020-11-26 Jakub Jelinek <[email protected]>
>
> PR tree-optimization/97979
> * match.pd ((X {&,^,|} C2) << C1 into (X << C1) {&,^,|} (C2 << C1)):
> Only optimize if int_const_binop returned non-NULL.
>
> * gcc.dg/pr97979.c: New test.
> * gcc.c-torture/compile/pr97979.c: New test.
>
> --- gcc/match.pd.jj 2020-11-24 09:02:25.000000000 +0100
> +++ gcc/match.pd 2020-11-25 16:03:48.298339546 +0100
> @@ -3119,7 +3119,8 @@ (define_operator_list COND_TERNARY
> (shift (convert?:s (bit_op:s @0 INTEGER_CST@2)) INTEGER_CST@1)
> (if (tree_nop_conversion_p (type, TREE_TYPE (@0)))
> (with { tree mask = int_const_binop (shift, fold_convert (type, @2),
> @1); }
> - (bit_op (shift (convert @0) @1) { mask; }))))))
> + (if (mask)
> + (bit_op (shift (convert @0) @1) { mask; })))))))
>
> /* ~(~X >> Y) -> X >> Y (for arithmetic shift). */
> (simplify
> --- gcc/testsuite/gcc.dg/pr97979.c.jj 2020-11-25 16:20:05.115357587 +0100
> +++ gcc/testsuite/gcc.dg/pr97979.c 2020-11-25 16:19:55.799462344 +0100
> @@ -0,0 +1,13 @@
> +/* PR tree-optimization/97979 */
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fno-tree-ccp" } */
> +
> +short a = 0;
> +int b = 0;
> +
> +void
> +foo (void)
> +{
> + unsigned short d = b;
> +  a = d >> -2U; /* { dg-warning "right shift count >= width of type" } */
> +}
> --- gcc/testsuite/gcc.c-torture/compile/pr97979.c.jj	2020-11-25 16:06:04.912803686 +0100
> +++ gcc/testsuite/gcc.c-torture/compile/pr97979.c	2020-11-25 16:06:39.953409746 +0100
> @@ -0,0 +1,7 @@
> +/* PR tree-optimization/97979 */
> +
> +int
> +foo (int x)
> +{
> + return (x & 0x123) << -3;
> +}
>
> Jakub
>
>
--
Richard Biener <[email protected]>
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imendörffer