On Mon, 5 May 2025, Jakub Jelinek wrote:

> Hi!
> 
> The following testcase ICEs because of a wide_int precision mismatch:
> in particular, lr_and_mask has 32-bit precision while sign has 16-bit
> precision.
> 
> decode_field_reference ensures that {ll,lr,rl,rr}_and_mask has
> {ll,lr,rl,rr}_bitsize precision, so the
>         ll_and_mask |= sign;
> and
>         rl_and_mask |= sign;
> and
>         ll_and_mask &= sign;
> and
>         rl_and_mask &= sign;
> cases should work correctly, since sign has {ll,rl}_bitsize precision
> in those cases.  The problem is that nothing until much later guarantees
> that ll_bitsize == lr_bitsize or rl_bitsize == rr_bitsize.
> In the testcase there is
> ((b ^ a) & 3) < 0
> where a is 16-bit and b is 32-bit, so the lsignbit handling applies,
> and because of the xor the xor operand is moved to *r_and_mask,
> leaving ll_and_mask a 16-bit 3 and lr_and_mask a 32-bit 3.
> 
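> A minimal sketch of that state in wide_int terms (GCC-internal code,
> assuming the wide-int.h API; the values are taken from the testcase
> below):
> 
>   wide_int ll_and_mask = wi::shwi (3, 16); /* 3 at short's precision.  */
>   wide_int lr_and_mask = wi::shwi (3, 32); /* 3 at int's precision.  */
>   /* Sign bit of the 16-bit field, i.e. 0x8000 at 16-bit precision.  */
>   wide_int sign = wi::mask (15, true, 16);
>   ll_and_mask &= sign; /* OK, both operands are 16-bit.  */
>   lr_and_mask &= sign; /* 32-bit &= 16-bit: checking assert fails, ICE.  */
> 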
> Now, either b in the above case would be an INTEGER_CST, in which case,
> if rr_arg were also an INTEGER_CST, we'd take the l_const && r_const
> path and try to handle it, or we'd run into (though much later)
>       if (ll_bitsize != lr_bitsize || rl_bitsize != rr_bitsize
> ...
>         return 0;
> 
> The following patch fixes it by converting sign to the differing
> precision using wide_int::from.
> 
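> For the testcase that means zero-extending the 16-bit sign mask to the
> 32-bit precision of lr_and_mask before the &=; a hedged sketch of the
> wide_int::from semantics with UNSIGNED (xsign is just an illustrative
> name):
> 
>   wide_int sign = wi::mask (15, true, 16); /* 0x8000, 16-bit precision.  */
>   /* Zero-extend to 32-bit precision, giving 0x00008000.  */
>   wide_int xsign = wide_int::from (sign, 32, UNSIGNED);
>   lr_and_mask &= xsign; /* Precisions now match, no ICE.  */
> 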
> The other option would be
> +       if (ll_bitsize != lr_bitsize)
> +         return 0;
>         if (!lr_and_mask.get_precision ())
>           lr_and_mask = sign;
>         else
>           lr_and_mask &= sign;
> and similarly in the other hunk.
> 
> And yet another option would be to compute the sign
>       wide_int sign = wi::mask (ll_bitsize - 1, true, ll_bitsize);
>       /* If ll_arg is zero-extended and we're testing the sign bit, we know
>          what the result should be.  Shifting the sign bit out of sign will get
>          us to mask the entire field out, yielding zero, i.e., the sign bit of
>          the zero-extended value.  We know the masked value is being compared
>          with zero, so the compare will get us the result we're looking
>          for: TRUE if EQ_EXPR, FALSE if NE_EXPR.  */
>       if (lsignbit > ll_bitsize && ll_unsignedp)
>         sign <<= 1;
> once again for the lr_and_mask and rr_and_mask cases, using lr_bitsize
> and rr_bitsize respectively.
> 
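> Mechanically, for the lr_and_mask case that would look something like
> the following (a sketch only, with lr_sign an illustrative local;
> whether this picks the semantically right sign bit is exactly the open
> question below):
> 
>   wide_int lr_sign = wi::mask (lr_bitsize - 1, true, lr_bitsize);
>   if (lsignbit > lr_bitsize && lr_unsignedp)
>     lr_sign <<= 1;
>   lr_and_mask &= lr_sign;
> 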
> As we just return 0 anyway unless l_const && r_const, the choice doesn't
> really matter when l_const and r_const are false, but for the constant
> cases it does matter and I'm not sure what is right.  So the second
> option might be safest.
> 
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk
> or shall I go with the second option?

I'd go with the safer option unless Alex can argue the other one is
correct.

Richard.

> 2025-05-05  Jakub Jelinek  <ja...@redhat.com>
> 
>       PR tree-optimization/120074
>       * gimple-fold.cc (fold_truth_andor_for_ifcombine): Use wide_int::from
>       when trying to mask ?r_and_mask with sign in case it has different
>       precision.  Formatting fix.
> 
>       * gcc.dg/pr120074.c: New test.
> 
> --- gcc/gimple-fold.cc.jj     2025-04-21 17:04:48.000000000 +0200
> +++ gcc/gimple-fold.cc        2025-05-03 12:43:29.643908582 +0200
> @@ -8337,7 +8337,8 @@ fold_truth_andor_for_ifcombine (enum tre
>         if (!lr_and_mask.get_precision ())
>           lr_and_mask = sign;
>         else
> -         lr_and_mask &= sign;
> +         lr_and_mask &= wide_int::from (sign, lr_and_mask.get_precision (),
> +                                        UNSIGNED);
>         if (l_const.get_precision ())
>           l_const &= wide_int::from (lr_and_mask,
>                                      l_const.get_precision (), UNSIGNED);
> @@ -8358,7 +8359,8 @@ fold_truth_andor_for_ifcombine (enum tre
>         if (!rr_and_mask.get_precision ())
>           rr_and_mask = sign;
>         else
> -         rr_and_mask &= sign;
> +         rr_and_mask &= wide_int::from (sign, rr_and_mask.get_precision (),
> +                                        UNSIGNED);
>         if (r_const.get_precision ())
>           r_const &= wide_int::from (rr_and_mask,
>                                      r_const.get_precision (), UNSIGNED);
> @@ -8762,7 +8764,7 @@ fold_truth_andor_for_ifcombine (enum tre
>        wide_int lr_mask, rr_mask;
>        if (lr_and_mask.get_precision ())
>       lr_mask = wi::lshift (wide_int::from (lr_and_mask, rnprec, UNSIGNED),
> -                       xlr_bitpos);
> +                           xlr_bitpos);
>        else
>       lr_mask = wi::shifted_mask (xlr_bitpos, lr_bitsize, false, rnprec);
>        if (rr_and_mask.get_precision ())
> --- gcc/testsuite/gcc.dg/pr120074.c.jj        2025-05-03 13:55:45.374319266 +0200
> +++ gcc/testsuite/gcc.dg/pr120074.c   2025-05-03 13:54:53.264995823 +0200
> @@ -0,0 +1,20 @@
> +/* PR tree-optimization/120074 */
> +/* { dg-do compile } */
> +/* { dg-options "-O1 -fno-tree-copy-prop -fno-tree-forwprop -fno-tree-ccp" } */
> +
> +int foo (int);
> +short a;
> +int b;
> +
> +int
> +bar (int d, int e)
> +{
> +  return d < 0 || d > __INT_MAX__ >> e;
> +}
> +
> +int
> +main ()
> +{
> +  int f = bar ((b ^ a) & 3, __SIZEOF_INT__ * __CHAR_BIT__ - 2);
> +  foo (f);
> +}
> 
>       Jakub
> 
> 

-- 
Richard Biener <rguent...@suse.de>
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)
