Hi!

The following testcases are miscompiled (with many GIMPLE optimizations disabled) because combine sees a GE comparison of a 1-bit sign_extract (i.e. something with a [-1, 0] value range) against (const_int -1), which is always true, and optimizes it into an NE comparison of a 1-bit zero_extract ([0, 1] value range) against (const_int 0).

The reason is that simplify_compare_const first (correctly) simplifies the comparison to GE (ashift:SI something (const_int 31)) (const_int -2147483648), and then an optimization for the case where the second operand is a power of 2 triggers.  That optimization is fine for powers of 2 which aren't the signed minimum of the mode, or for NE, EQ, GEU or LTU against the signed minimum of the mode, but for GE or LT optimizing it into NE (or EQ) against const0_rtx is wrong; those cases are always true or always false, respectively (and the function doesn't have a standardized way to tell callers the comparison is now unconditional).
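As a quick illustration (mine, not part of the patch; the function names are made up), here is a reduced C analogue of why the power-of-2 rewrite is unsound for the signed minimum of the mode; y models the (ashift:SI something (const_int 31)) operand, so it can only be 0 or INT_MIN:

/* Illustration only, not from the PR.  */
#include <limits.h>

int
ge_signed_min (int x)
{
  int y = (x & 1) ? INT_MIN : 0;	/* 0 or INT_MIN, like the ashift */
  return y >= INT_MIN;			/* always 1 */
}

int
after_bogus_rewrite (int x)
{
  int y = (x & 1) ? INT_MIN : 0;
  return (y & INT_MIN) != 0;		/* 0 when y == 0, so not equivalent */
}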
The following patch just disables the optimization in that case.

Bootstrapped/regtested on x86_64-linux and i686-linux, preapproved by Segher in the PR, committed to trunk so far.

2024-05-15  Jakub Jelinek  <ja...@redhat.com>

	PR rtl-optimization/114902
	PR rtl-optimization/115092
	* combine.cc (simplify_compare_const): Don't optimize
	GE op0 SIGNED_MIN or LT op0 SIGNED_MIN into
	NE op0 const0_rtx or EQ op0 const0_rtx.

	* gcc.dg/pr114902.c: New test.
	* gcc.dg/pr115092.c: New test.

--- gcc/combine.cc.jj	2024-05-07 18:10:10.415874636 +0200
+++ gcc/combine.cc	2024-05-15 13:33:26.555081215 +0200
@@ -11852,8 +11852,10 @@ simplify_compare_const (enum rtx_code co
      `and'ed with that bit), we can replace this with a comparison
      with zero.  */
   if (const_op
-      && (code == EQ || code == NE || code == GE || code == GEU
-	  || code == LT || code == LTU)
+      && (code == EQ || code == NE || code == GEU || code == LTU
+	  /* This optimization is incorrect for signed >= INT_MIN or
+	     < INT_MIN, those are always true or always false.  */
+	  || ((code == GE || code == LT) && const_op > 0))
       && is_a <scalar_int_mode> (mode, &int_mode)
       && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
       && pow2p_hwi (const_op & GET_MODE_MASK (int_mode))
--- gcc/testsuite/gcc.dg/pr114902.c.jj	2024-05-15 14:01:20.826717914 +0200
+++ gcc/testsuite/gcc.dg/pr114902.c	2024-05-15 14:00:39.603268571 +0200
@@ -0,0 +1,23 @@
+/* PR rtl-optimization/114902 */
+/* { dg-do run } */
+/* { dg-options "-O1 -fno-tree-fre -fno-tree-forwprop -fno-tree-ccp -fno-tree-dominator-opts" } */
+
+__attribute__((noipa))
+int foo (int x)
+{
+  int a = ~x;
+  int t = a & 1;
+  int e = -t;
+  int b = e >= -1;
+  if (b)
+    return 0;
+  __builtin_trap ();
+}
+
+int
+main ()
+{
+  foo (-1);
+  foo (0);
+  foo (1);
+}
--- gcc/testsuite/gcc.dg/pr115092.c.jj	2024-05-15 13:46:27.634649150 +0200
+++ gcc/testsuite/gcc.dg/pr115092.c	2024-05-15 13:46:12.052857268 +0200
@@ -0,0 +1,16 @@
+/* PR rtl-optimization/115092 */
+/* { dg-do run } */
+/* { dg-options "-O1 -fgcse -ftree-pre -fno-tree-dominator-opts -fno-tree-fre -fno-guess-branch-probability" } */
+
+int a, b, c = 1, d, e;
+
+int
+main ()
+{
+  int f, g = a;
+  b = -2;
+  f = -(1 >> ((c && b) & ~a));
+  if (f <= b)
+    d = g / e;
+  return 0;
+}

	Jakub