> Mark Mitchell wrote: >> Joe Buck wrote: >> ... >> I don't think we should give the user any such promise, and if we do >> give such a promise, we will never catch icc. The main problem is that >> we will no longer be able to optimize many loops. > > It's entirely possible that I was naive in assuming that this wouldn't have a > big optimization impact. Reiterating my response to Daniel, if it is in fact > the case that this is a major loss for optimization, then I would have to > retract my claim. > >> Here's a simple example. >> >> int blah(int); >> >> int func(int a, int b) { >> if (b >= 0) { >> int c = a + b; >> int count = 0; >> for (int i = a; i <= c; i++) >> count++; >> blah(count); >> } >> } > > Yes, I understand. > > I just didn't imagine that these kinds of opportunities came up very often. > (Perhaps that's because I routinely write code that can't be compiled well, > and so don't think about this situation. In particular, I often use unsigned > types when the underlying quantity really is always non-negative, and I'm > saddened to learn that doing that would result in inferior code.)
However, it's not clear that an "optimization" which alters side effects that have subsequent dependents is ever desirable (unless, of course, the goal is to produce the same likely useless result as fast as some other implementation may, without any other redeeming benefit).

As the example shows, by assuming that signed overflow traps when it may not, such an optimization actually alters the behavior of the code: the result is neither consistent with the code's previous behavior, nor any more correct (technically, the optimization presumes it's perfectly fine to produce garbage faster, and that is just what it does, likely to no one's benefit). For a target which does trap integer overflow, however, the same optimization is both consistent with the target's behavior elsewhere and yields a more efficient program. Correspondingly, optimizations which presume wrapped overflow will be both consistent and more efficient on targets which factually do wrap signed overflow.

Therefore it should be clear that optimizations based on implementation- and/or target-specific behavior must be based on the target's factual behavior if they are to reliably yield consistent and efficient code. The simple fact is that some optimizations are not applicable to some targets if a program's un-optimized semantics are to be preserved, which is most often what is wanted, even though those semantics are by definition not portable.

(I suspect that many of the loop-optimization opportunities which rely on knowing the value range of an iteration variable may be recovered by VRP; it may not help in all circumstances, but hopefully it helps in many.)