On Mon, 2005-10-31 at 17:11 -0800, Mark Mitchell wrote:
> > Certainly if we can't prove f always returns a nonzero value, then a
> > warning should be issued.  If we do prove f always returns a nonzero
> > value, then I think it becomes unclear if we should generate a warning.
>
> I don't think it's unclear; I think it should be a warning. :-) The
> fact that f always returns a nonzero value may be a function of
> something about the current environment; I think people want this
> warning to tell them something about the code, semi-independent of the
> current environment.  It may be that I've got different ideas about how
> this ought to work from some others in the community.

We clearly disagree then.  Through my 15+ years of working with GCC I've
seen far more complaints about false positives than missing instances of
this warning.
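For concreteness, the kind of code in question looks something like this
(a made-up sketch, not the original test case):

  extern int f (void);

  int
  g (void)
  {
    int x;

    if (f ())
      x = 1;

    return x;  /* uninitialized on any path where f() returned zero */
  }

Whether warning on that return is a real catch or just noise depends
entirely on what we can prove about f, which is exactly where the false
positive complaints come from.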
That could be a function of users complaining about what they can see
(the false positive is noisy) vs what they can not see (a missing warning
is by its nature silent).

Even with that caveat in mind, I feel that generating new false positives
to get those cases where we're missing a warning for a variable which was
optimized away or where problem paths through the CFG were proven
unexecutable would be a mistake -- unless the user has explicitly asked
for the additional warnings.

> We have to figure out where this warning ought to lie on this continuum
> in order to figure how to implement it.

I think the answer here is that there is no single answer that works best
for everyone.  I hate to say that because I don't like the
always-problematic "switch creep" in GCC, but I feel that in this case
giving the user control over whether or not they want these additional
warnings is warranted.

> I think most of our optimizer
> people think the right point on the continuum lies pretty close to the
> valgrind situation.  In particular, the question they want to answer is
> "is there a code path through this function, when compiled on this
> architecture with these flags, etc., for which we might actually use an
> uninitialized value?"  Using that question, if function f always returns
> non-zero, then we should not warn.  Different levels of optimization
> give different levels of approximation to the abstractly right answer
> there; one would hope that as you crank up the optimization level, the
> answers get better, as you prove more things about the program.
>
> However, I think the right question is the more lint-like "Is there a
> code path through this function, when considered in isolation, and
> without being too clever, under which an uninitialized value is used?"
> Going back to the original example, if f always returns non-zero on all
> architectures, and always will, then why bother writing the conditional?
> The programmer who wrote that imagined that, at some point, f might
> return zero, and then the code would be buggy.  I would consider that a
> "latent bug", and in my lint-like mindset, I want to fix it now.

And it's my belief that we need to handle both classes of users.  Note
that it doesn't take anything terribly complicated to run into cases
where a variable appears to be uninitialized early, but is easily proven
always initialized; a sketch of one such case appears below.

> > > The late pass could then also warn for those variables which were
> > > marked by the first pass as maybe uninitialized, but which were
> > > not marked by the second pass as maybe uninitialized -- these
> > > are precisely those where optimizations either eliminated problem
> > > paths through the CFG or DCE eliminated the uninitialized uses.
>
> Why doesn't that boil down to just warning about all the variables
> marked by the first pass, either right at the time of the first pass, or
> later during the second pass?

It boils down to being able to issue distinct warnings for the two cases.
There's a difference between noting that a variable might have been used
uninitialized and noting that a variable might have been used
uninitialized, but its problem uses were removed by optimizations or the
paths containing them were proven unexecutable.
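To make the distinction concrete, consider this made-up sketch (again,
the names are hypothetical, not from the original example):

  extern int f (void);
  extern void use (int);

  void
  h (void)
  {
    int cond = f ();
    int x;

    if (cond)
      x = 1;

    /* ... other work ... */

    if (cond)
      use (x);  /* only reached when x was set above */
  }

An early pass sees a path on which x reaches the use uninitialized and
marks it.  After jump threading, the cond-false path no longer reaches
that use at all, so the late pass finds nothing -- precisely the case
where a distinct "flagged early, proven safe by optimization" warning
would be useful.

Jeff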