> Currently for example in fold_sign_changed_comparison we produce
> integer constants that are not inside the range of values of their
> type, denoted by [TYPE_MIN_VALUE (t), TYPE_MAX_VALUE (t)].  For
> example consider a type with range [10, 20] and the comparison
> created by the Ada frontend:
>
>   if ((signed char)t == -128)
>
> t being of that type [10, 20] with TYPE_PRECISION 8, the same as the
> constant -128.  So fold_sign_changed_comparison comes along and
> decides to strip the conversion and convert the constant to type T,
> which looks like ...  What do we want to do about that?  Do we want
> to do anything about it?  If we don't want to do anything about it,
> why care about an exact TREE_TYPE of integer constants if the only
> thing that matters is signedness and type precision?
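If I read that right, the net effect of the fold is roughly this (an
illustrative fragment against the tree API; "subtype" and "arg1" are
just names I'm using here, arg1 being the signed char constant -128):

  tree subtype = TREE_TYPE (t);            /* the [10, 20] subtype, TYPE_PRECISION 8 */
  tree cst = fold_convert (subtype, arg1); /* reconvert the -128 constant to it      */
  /* cst is now an INTEGER_CST whose TREE_TYPE is subtype, but whose value
     lies below TYPE_MIN_VALUE (subtype), i.e. a "value" that type is not
     supposed to have.  */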
I don't think gcc should be converting anything to a type like t's
unless it can prove that the thing it's converting is in the range of
t's type.  So it presumably should:

  (1) try to prove that -128 is not in the range of t's type; if that
      succeeds, fold the comparison to false; otherwise
  (2) try to prove that -128 is in the range of t's type; if that
      succeeds, convert it; otherwise
  (3) do nothing.

(There is a rough sketch of this check at the end of this mail.)

That said, this whole thing is a can of worms.  Suppose the compiler
wants to calculate t+1.  Of course you do something like this:

  int_const_binop (PLUS_EXPR, t, build_int_cst (TREE_TYPE (t), 1), 0);

But if 1 is not in the range of t's type, you just created an invalid
value!

Personally I think the right thing to do is to eliminate these types
altogether somewhere early on, replacing them with their base types
(which don't have funky ranges) and inserting appropriate ASSERT_EXPRs
instead.  Probably types like t's should never be seen outside the Ada
frontend at all.

Ciao,

Duncan.
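P.S. Here is the kind of check I have in mind, written as a helper that
fold_sign_changed_comparison could call before converting the constant.
This is only a sketch: the helper name is made up, and I'm assuming the
usual fold-const environment (tree_int_cst_lt, boolean_false_node and
fold_convert are the existing helpers I mean).

  /* Try to convert the INTEGER_CST CST for use in an equality comparison
     against a variable of TYPE.  Return the converted constant if CST is
     provably a value of TYPE, boolean_false_node if it provably is not
     (so the comparison can never be true), or NULL_TREE if we cannot
     tell.  Illustrative only.  */

  static tree
  convert_cst_checking_range (tree type, tree cst)
  {
    tree lo = TYPE_MIN_VALUE (type);
    tree hi = TYPE_MAX_VALUE (type);

    if (TREE_CODE (cst) != INTEGER_CST)
      return NULL_TREE;

    /* We can only reason about the range if both bounds are constants.  */
    if (lo == NULL_TREE || hi == NULL_TREE
        || TREE_CODE (lo) != INTEGER_CST
        || TREE_CODE (hi) != INTEGER_CST)
      return NULL_TREE;

    /* (1) CST lies outside [lo, hi]: the comparison is always false.  */
    if (tree_int_cst_lt (cst, lo) || tree_int_cst_lt (hi, cst))
      return boolean_false_node;

    /* (2) CST lies inside [lo, hi]: converting it cannot create a value
       outside the range of TYPE.  */
    return fold_convert (type, cst);
  }

The caller would fold the whole comparison to false when it gets
boolean_false_node, build the new comparison with the converted constant
otherwise, and leave the expression alone on NULL_TREE.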