>>>> Joe Buck wrote:
>>>> Here's a simple example.
>>>>
>>>> int blah(int);
>>>>
>>>> int func(int a, int b) {
>>>>     if (b >= 0) {
>>>>         int c = a + b;
>>>>         int count = 0;
>>>>         for (int i = a; i <= c; i++)
>>>>             count++;
>>>>         blah(count);
>>>>     }
>>>> }
>>>
>>> Mark Mitchell wrote:
>>> I just didn't imagine that these kinds of opportunities came up very
>>> often. (Perhaps that's because I routinely write code that can't be
>>> compiled well, and so don't think about this situation. In particular,
>>> I often use unsigned types when the underlying quantity really is
>>> always non-negative, and I'm saddened to learn that doing that would
>>> result in inferior code.)
>>
>> However, it's not clear that an "optimization" which alters side effects
>> which have subsequent dependents is ever desirable (unless of course the
>> goal is to produce the same likely useless result as fast as some other
>> implementation may, but without any other redeeming benefits).
>
> On Tue, Jun 28, 2005 at 09:32:53PM -0400, Paul Schlie wrote:
>> As the example clearly shows, by assuming that signed overflow traps, when
>> it may not, such an optimization actually alters the behavior of the code,
>
> There is no such assumption. Rather, we assume that overflow does not
> occur, and make no assumption about what happens on overflow. Then, for
> the case where overflow does not occur, we get fast code. For many cases
> where overflow occurs with a 32-bit int, our optimized program behaves
> the same as if we had a wider int. In fact, the program will work as if
> we had 33-bit ints. Far from producing a useless result, the optimized
> program has consistent behavior over a broader range. To see this,
> consider what the program does with a=INT_MAX, b=INT_MAX-1. My optimized
> version always calls blah(b+1), which is what a 33-bit int machine would
> do. It does not trap.
>
> Since you made an incorrect analysis, you draw incorrect conclusions.
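To pin down what is being claimed before responding, here is a minimal sketch of
the transformation under debate. The function names, the stub blah(), and the
demo main are invented here for illustration; this is not GCC's actual output:

#include <limits.h>
#include <stdio.h>

/* Stand-in for the real blah(), so the sketch is runnable. */
static int blah(int count) {
    printf("blah(%d)\n", count);
    return count;
}

/* The loop as written: counts the ints from a through c = a + b. */
static void func_as_written(int a, int b) {
    if (b >= 0) {
        int c = a + b;            /* may overflow: undefined in ISO C */
        int count = 0;
        for (int i = a; i <= c; i++)
            count++;
        blah(count);
    }
}

/* What the optimizer is entitled to emit: if a + b is assumed never to
 * overflow, the loop runs exactly c - a + 1 == b + 1 times, so the
 * whole body collapses to one call. */
static void func_collapsed(int a, int b) {
    if (b >= 0)
        blah(b + 1);
}

int main(void) {
    /* The divergence case from the quoted message.  On a 32-bit
     * two's-complement target that wraps, c becomes -3, the loop never
     * runs, and the literal code calls blah(0); the collapsed form
     * calls blah(INT_MAX), i.e. what a 33-bit int machine would do.
     * (Strictly, the first call is undefined behavior in ISO C, so a
     * real optimizing compiler may treat both functions identically.) */
    func_as_written(INT_MAX, INT_MAX - 1);
    func_collapsed(INT_MAX, INT_MAX - 1);
    return 0;
}

On a typical wrapping 32-bit target the as-written version prints blah(0) for
that input while the collapsed version prints blah(2147483647); under the
no-overflow assumption both are "correct", which is precisely the point of
contention below.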
- Fair enough; however, it seems to me that assuming overflow does not occur
and assuming overflows are trapped are logically equivalent?

- But regardless: given that a and b are defined as arbitrary integer
arguments, unless the value range of a+b is known to be less than INT_MAX,
presuming otherwise may yield different behavior on targets which wrap signed
overflow (which is basically all of them). So unless by some magic the
compiler can guess that the author of the code didn't actually desire the
behavior the compiler previously produced, the optimization will only produce
an undesired result more quickly, to likely no one's benefit, although
admittedly neither behavior is portable.

(And I confess I don't understand the 33-bit int concept, as a 32-bit int
target will still wrap or trap b+1 on overflow when it is computed; and for
targets which wrap signed overflow it seems irrelevant, as the optimized
result may not be consistent with the code's previously compiled results
regardless, nor is it more correct, and is therefore most likely less useful.)

Overall, I guess I still simply believe that the first rule of optimization is
to preserve existing semantics unless explicitly authorized otherwise, and
then only if accompanied by corresponding warnings for all potentially
behavior-altering assumptions applied.
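For what it's worth, here is a hedged sketch of how the intended semantics
could instead be made explicit in the source, so that neither the optimizer's
assumption nor the target's wrap/trap behavior silently picks the result.
The name func_explicit and its fallback policy are invented for illustration,
and blah() is only declared, mirroring the thread's original example:

#include <limits.h>

int blah(int);

/* Make the overflow case explicit in the source.  With b >= 0, a + b
 * can only overflow upward, and "a > INT_MAX - b" detects that without
 * itself overflowing. */
void func_explicit(int a, int b) {
    if (b >= 0) {
        if (a > INT_MAX - b) {
            /* a + b would overflow: whatever happens here is now the
             * author's documented decision, not the optimizer's. */
            return;
        }
        /* No overflow is possible past this point, so the literal
         * counting loop and the collapsed b + 1 form provably agree. */
        blah(b + 1);
    }
}

Alternatively, GCC's existing -fwrapv flag explicitly requests wrapping signed
arithmetic for the whole compilation, which is one concrete way a programmer
can "authorize" a specific overflow semantics rather than leaving it undefined.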