On Mon, May 05, 2025 at 09:02:15AM +0200, Jakub Jelinek wrote:
> The other option would be
> +	      if (ll_bitsize != lr_bitsize)
> +		return 0;
> 	      if (!lr_and_mask.get_precision ())
> 		lr_and_mask = sign;
> 	      else
> 		lr_and_mask &= sign;
> and similarly in the other hunk.
Here is the second option in patch form, so far briefly tested on the
testcase.

2025-05-05  Jakub Jelinek  <ja...@redhat.com>

	PR tree-optimization/120074
	* gimple-fold.cc (fold_truth_andor_for_ifcombine): For lsignbit
	&& l_xor case, punt if ll_bitsize != lr_bitsize.  Similarly for
	rsignbit && r_xor case, punt if rl_bitsize != rr_bitsize.
	Formatting fix.

	* gcc.dg/pr120074.c: New test.

--- gcc/gimple-fold.cc.jj	2025-04-21 17:04:48.000000000 +0200
+++ gcc/gimple-fold.cc	2025-05-05 09:36:14.208753999 +0200
@@ -8334,6 +8334,8 @@ fold_truth_andor_for_ifcombine (enum tre
 	    ll_and_mask &= sign;
 	  if (l_xor)
 	    {
+	      if (ll_bitsize != lr_bitsize)
+		return 0;
 	      if (!lr_and_mask.get_precision ())
 		lr_and_mask = sign;
 	      else
@@ -8355,6 +8357,8 @@ fold_truth_andor_for_ifcombine (enum tre
 	    rl_and_mask &= sign;
 	  if (r_xor)
 	    {
+	      if (rl_bitsize != rr_bitsize)
+		return 0;
 	      if (!rr_and_mask.get_precision ())
 		rr_and_mask = sign;
 	      else
@@ -8762,7 +8766,7 @@ fold_truth_andor_for_ifcombine (enum tre
       wide_int lr_mask, rr_mask;
       if (lr_and_mask.get_precision ())
 	lr_mask = wi::lshift (wide_int::from (lr_and_mask, rnprec, UNSIGNED),
-			       xlr_bitpos);
+			      xlr_bitpos);
       else
 	lr_mask = wi::shifted_mask (xlr_bitpos, lr_bitsize, false, rnprec);
       if (rr_and_mask.get_precision ())
--- gcc/testsuite/gcc.dg/pr120074.c.jj	2025-05-03 13:55:45.374319266 +0200
+++ gcc/testsuite/gcc.dg/pr120074.c	2025-05-03 13:54:53.264995823 +0200
@@ -0,0 +1,20 @@
+/* PR tree-optimization/120074 */
+/* { dg-do compile } */
+/* { dg-options "-O1 -fno-tree-copy-prop -fno-tree-forwprop -fno-tree-ccp" } */
+
+int foo (int);
+short a;
+int b;
+
+int
+bar (int d, int e)
+{
+  return d < 0 || d > __INT_MAX__ >> e;
+}
+
+int
+main ()
+{
+  int f = bar ((b ^ a) & 3, __SIZEOF_INT__ * __CHAR_BIT__ - 2);
+  foo (f);
+}

	Jakub