https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88274

--- Comment #8 from rguenther at suse dot de <rguenther at suse dot de> ---
On Fri, 30 Nov 2018, jakub at gcc dot gnu.org wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88274
> 
> --- Comment #7 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
> The regression started with r265241 BTW.
> 
> Here is a reassoc patch I wrote that also fixes this ICE:
> --- gcc/tree-ssa-reassoc.c.jj   2018-10-23 10:13:25.278875175 +0200
> +++ gcc/tree-ssa-reassoc.c      2018-11-30 11:13:37.232393154 +0100
> @@ -2537,8 +2537,23 @@ optimize_range_tests_xor (enum tree_code
>    if (!tree_int_cst_equal (lowxor, highxor))
>      return false;
> 
> +  exp = rangei->exp;
> +  scalar_int_mode mode = as_a <scalar_int_mode> (TYPE_MODE (type));
> +  int prec = GET_MODE_PRECISION (mode);
> +  if (TYPE_PRECISION (type) < prec
> +      || (wi::to_wide (TYPE_MIN_VALUE (type))
> +         != wi::min_value (prec, TYPE_SIGN (type)))
> +      || (wi::to_wide (TYPE_MAX_VALUE (type))
> +         != wi::max_value (prec, TYPE_SIGN (type))))
> +    {
> +      type = build_nonstandard_integer_type (prec, TYPE_UNSIGNED (type));
> +      exp = fold_convert (type, exp);
> +      lowxor = fold_convert (type, lowxor);
> +      lowi = fold_convert (type, lowi);
> +      highi = fold_convert (type, highi);
> +    }
>    tem = fold_build1 (BIT_NOT_EXPR, type, lowxor);
> -  exp = fold_build2 (BIT_AND_EXPR, type, rangei->exp, tem);
> +  exp = fold_build2 (BIT_AND_EXPR, type, exp, tem);
>    lowj = fold_build2 (BIT_AND_EXPR, type, lowi, tem);
>    highj = fold_build2 (BIT_AND_EXPR, type, highi, tem);
>    if (update_range_test (rangei, rangej, NULL, 1, opcode, ops, exp,
> @@ -2581,7 +2596,16 @@ optimize_range_tests_diff (enum tree_cod
>    if (!integer_pow2p (tem1))
>      return false;
> 
> -  type = unsigned_type_for (type);
> +  scalar_int_mode mode = as_a <scalar_int_mode> (TYPE_MODE (type));
> +  int prec = GET_MODE_PRECISION (mode);
> +  if (TYPE_PRECISION (type) < prec
> +      || (wi::to_wide (TYPE_MIN_VALUE (type))
> +         != wi::min_value (prec, TYPE_SIGN (type)))
> +      || (wi::to_wide (TYPE_MAX_VALUE (type))
> +         != wi::max_value (prec, TYPE_SIGN (type))))
> +    type = build_nonstandard_integer_type (prec, 1);
> +  else
> +    type = unsigned_type_for (type);
>    tem1 = fold_convert (type, tem1);
>    tem2 = fold_convert (type, tem2);
>    lowi = fold_convert (type, lowi);
> 
> Do we want that too, or is the exact type in which we compute these
> uninteresting?  Note, unfortunately in this case the enum type has
> TYPE_PRECISION 32, just TYPE_MAX_VALUE of 15, and we consider such conversions
> useless, so I'm not really sure how VRP can work reliably with that.

I wonder that as well, but it has been doing that (to some limited extent)
since forever.  The middle-end really only cares about TYPE_PRECISION
everywhere (except in VRP...).

So I'd happily substitute wi::min/max_value for TYPE_MIN/MAX_VALUE
in vrp_val_min/max ... with the expected "regressions" for
-fstrict-enums (and maybe Ada).

For your patch I think you don't need the mode-precision vs.
type-precision check -- or is that simply there to avoid generating
suboptimal code?  But yes, we shouldn't generate new arithmetic
in types with non-matching TYPE_MIN/MAX_VALUE.  Ada, for
example, does this in the underlying type and uses VIEW_CONVERT_EXPRs
to convert between those (IIRC).

The current state really asks for miscompilations (with -fstrict-enums).
