On Tue, 28 Jun 2005, Joe Buck wrote:
There is no such assumption. Rather, we assume that overflow does not occur, and make no promise about what happens when it does. Then, for the case where overflow does not occur, we get fast code. For many cases where overflow occurs with a 32-bit int, our optimized program behaves the same as if we had a wider int. In fact, the program will work as if we had 33-bit ints. Far from producing a useless result, the optimized program has consistent behavior over a broader range. To see this, consider what the program does with a=MAX_INT, b=MAX_INT-1. My optimized version always calls blah(b+1), which is what a 33-bit int machine would do. It does not trap.
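As a minimal illustration of that argument (a hypothetical sketch, not the exact code discussed in this thread), a compiler that assumes signed overflow never happens may fold a comparison like the one below to a constant, which is exactly the answer a machine with wider ints would give, and it never traps:

    #include <limits.h>
    #include <stdio.h>

    /* Hypothetical example: under the "signed overflow does not occur"
       assumption, a compiler may fold this test to 1.  That is what a
       33-bit (or wider) int machine would compute for every 32-bit
       value of a, and no trap can occur. */
    static int next_is_greater(int a)
    {
        return a + 1 > a;
    }

    int main(void)
    {
        /* With wrapping 32-bit arithmetic this would print 0 for INT_MAX;
           the optimized program prints 1, matching wider-int behavior. */
        printf("%d\n", next_is_greater(INT_MAX));
        return 0;
    }

Whether the fold actually happens depends on the compiler and optimization level, but when it does, the result is consistent with a wider int, which is the point being made above.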
This point about 33-bit machines is interesting because it raises an optimisation scenario that hasn't been mentioned so far. Consider doing 32-bit integer arithmetic on 64-bit machines that only support 64-bit arithmetic instructions. On such machines you have to insert sign-extensions or zero-extensions after 64-bit operations to get wrap-around semantics (unless you can prove that the operation will not overflow the bottom 32 bits, or that the value will not be used in a way that exposes the fact that you're using 64-bit arithmetic).

But -- if I have understood correctly -- if the 32-bit values are signed integers, a C compiler for such a machine could legitimately omit the sign-extension, because signed overflow is undefined. For unsigned 32-bit values, on the other hand, the C standard requires wrap-around modulo 2^32, so you must zero-extend afterwards. I hadn't realised that.

This has been an enlightening thread :)

Nick
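To make the two cases above concrete, here is a minimal C sketch of the lowering such a 64-bit-only machine forces on a compiler. The function names and the use of 64-bit integer types to stand in for 64-bit registers are illustrative assumptions, not taken from any particular compiler:

    #include <stdint.h>

    /* Model: C 'int' / 'unsigned int' values live in 64-bit registers,
       correctly extended on entry.  (Illustrative model only.) */

    /* Signed 32-bit add: signed overflow is undefined in C, so the
       compiler may leave the full 64-bit sum in the register and skip
       the sign-extension back down to 32 bits. */
    static int64_t add_int32(int64_t a, int64_t b)
    {
        return a + b;                  /* no re-extension required */
    }

    /* Unsigned 32-bit add: unsigned arithmetic must wrap modulo 2^32,
       so the 64-bit sum has to be zero-extended (masked) afterwards. */
    static uint64_t add_uint32(uint64_t a, uint64_t b)
    {
        return (a + b) & 0xFFFFFFFFu;  /* reduce mod 2^32 */
    }

In a real back end the extensions would be explicit instructions after the 64-bit add; the point is only that the signed case may omit them (until the value is used in a width-exposing way), while the unsigned case may not.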