> Maybe we should detect overflow as if the input and output were signed
> while computing an unsigned result.  As far as I can see int_const_binop_1
> does detect overflow as if operations were signed (it passes 'false' as
> uns to all double-int operations rather than TYPE_UNSIGNED).
> For example sub_with_overflow simply does
>
>   neg_double (b.low, b.high, &ret.low, &ret.high);
>   add_double (low, high, ret.low, ret.high, &ret.low, &ret.high);
>   *overflow = OVERFLOW_SUM_SIGN (ret.high, b.high, high);
>
> which I believe is wrong.  Shouldn't it be
>
>   neg_double (b.low, b.high, &ret.low, &ret.high);
>   HOST_WIDE_INT tem = ret.high;
>   add_double (low, high, ret.low, ret.high, &ret.low, &ret.high);
>   *overflow = OVERFLOW_SUM_SIGN (ret.high, tem, high);
>
> ? Because we are computing a + (-b) and thus OVERFLOW_SUM_SIGN
> expects the sign of a and -b, not a and b to verify against the
> sign of ret.
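A side note to make the sign argument above concrete: reduced to plain
HOST_WIDE_INTs (toy code, not the real double-int routines, and assuming the
usual "operands agree in sign, result disagrees" definition of the macro with
a straight (op1, op2, sum) argument order), a = LONG_MAX and b = -1 show the
difference:

  #include <limits.h>
  #include <stdio.h>

  typedef long HOST_WIDE_INT;

  /* Overflow in op1 + op2 = sum: the operands agree in sign and the
     result does not.  */
  #define OVERFLOW_SUM_SIGN(a, b, sum) ((~((a) ^ (b)) & ((a) ^ (sum))) < 0)

  int
  main (void)
  {
    HOST_WIDE_INT a = LONG_MAX, b = -1;
    HOST_WIDE_INT neg_b = -b;
    /* a - b computed as a + (-b), wrapping like add_double does.  */
    HOST_WIDE_INT ret
      = (HOST_WIDE_INT) ((unsigned long) a + (unsigned long) neg_b);

    /* Checking against the sign of -b flags the overflow...  */
    printf ("with -b: %d\n", OVERFLOW_SUM_SIGN (a, neg_b, ret));
    /* ...checking against the sign of b does not.  */
    printf ("with  b: %d\n", OVERFLOW_SUM_SIGN (a, b, ret));
    return 0;
  }

The first check flags the overflow of a + (-b); the second, fed the sign of b
instead of -b, stays silent.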
But int_const_binop_1 is called from int_const_binop, so why would we want
to introduce any overflow for unsigned types other than sizetypes?

> I'm sceptical.  Where do you compute the size expression for variable-sized
> arrays?  I suppose with the testcase in the initial patch I can then inspect
> myself what actually happens?

Sure, but we already went through this in the PR.  It's because of the
formula used for the length of variable-sized arrays, which needs to handle
the case of superflat arrays.

-- 
Eric Botcazou
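P.S. For the record, "superflat" means that the upper bound is below the
lower bound minus 1, so the usual High - Low + 1 length goes negative.  A toy
sketch (my own illustration here, not the formula gigi actually uses) of why
the computation is delicate once it is done in an unsigned sizetype:

  #include <stdio.h>

  /* Length of an array with bounds low .. high: high - low + 1 when
     high >= low, 0 otherwise.  The guard keeps the wraparound of the
     unsigned subtraction from leaking into the result.  */
  static unsigned long
  array_length (long low, long high)
  {
    return high >= low ? (unsigned long) high - (unsigned long) low + 1 : 0;
  }

  int
  main (void)
  {
    long low = 1, high = -3;  /* superflat: high < low - 1 */
    unsigned long naive = (unsigned long) high - (unsigned long) low + 1;

    printf ("unguarded length: %lu\n", naive);                    /* huge */
    printf ("guarded length:   %lu\n", array_length (low, high)); /* 0 */
    return 0;
  }

That unguarded intermediate is the kind of wraparound the length formula has
to live with when it is evaluated in sizetype.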