https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112807

--- Comment #1 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Ah, the problem is that lower_addsub_overflow was written for lowering of
large/huge _BitInt operations, i.e. for .{ADD,SUB}_OVERFLOW where one of the 2
operands is (on x86_64 at least) 129 bits or wider, or the result is a complex
type with a 129+ bit element type.
That is the case here, because the first operand is _BitInt(256).  But the
result is just 32-bit, VRP tells us the first argument is in the [0, 0xffffffff]
range, which needs 32 unsigned bits, and the second argument is in the [-2, 1]
range, so we don't actually cast the second argument to a large/huge _BitInt
type and it fails miserably.
Now, we could fix that either by tweaking the
  tree type0 = TREE_TYPE (arg0);
  tree type1 = TREE_TYPE (arg1);
  if (TYPE_PRECISION (type0) < prec3)
    {
      type0 = build_bitint_type (prec3, TYPE_UNSIGNED (type0));
      if (TREE_CODE (arg0) == INTEGER_CST)
        arg0 = fold_convert (type0, arg0);
    }
  if (TYPE_PRECISION (type1) < prec3)
    {
      type1 = build_bitint_type (prec3, TYPE_UNSIGNED (type1));
      if (TREE_CODE (arg1) == INTEGER_CST)
        arg1 = fold_convert (type1, arg1);
    }
such that if bitint_precision_kind (prec3) < bitint_prec_large we actually use
the smallest possible bitint_prec_large precision; or, during the preparation
phase, check whether the .{ADD,SUB}_OVERFLOW has a small/medium return type and
both operands have range_for_prec absolute values that are also small/medium,
and if so turn it into a small/medium .{ADD,SUB}_OVERFLOW and expand just the
casts.