On Tue, 2005-11-01 at 22:10 -0500, Kaveh R. Ghazi wrote:

> I prefer consistency in warnings, regardless of optimization level.

I disagree, and I think we have a significant contingent of users who
would disagree -- if optimizations allow us to avoid false-positive
warnings, then we should use that information to avoid those
false-positive warnings.
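To make that concrete, here is a minimal sketch of the kind of false
positive I have in mind (the function and its get_value helper are
purely illustrative, not taken from the testsuite or the PR database):

/* Illustrative sketch only.  Whether `x' is ever read uninitialized
   depends on the two `flag' tests taking the same branch.  A purely
   syntactic early check has to warn here, while the optimizers can
   often prove (e.g. via jump threading) that the read is guarded by
   the same condition as the store and avoid the false positive.  */
extern int get_value (void);

int
foo (int flag)
{
  int x;

  if (flag)
    x = get_value ();

  /* ... unrelated work ... */

  if (flag)
    return x;   /* only reached when x was assigned above */

  return 0;
}

Initializing `x' at its declaration would of course silence the
warning, but that is exactly the kind of make-work I talk about below.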
I would suggest you look at our testsuite and our PR database and see
how many PRs we've got about false-positive warnings.  Achieving
consistency will merely increase the false positives and as a result
make the warning less useful IMHO.  Then look at how many checkins we
did to work around false-positive warnings in GCC itself -- they were
significant and brought us little real value.  Had our warning code
been up to snuff, that's work we could have avoided and instead done
something more productive.

> We already say warning flags should not affect codegen and therefore
> optimizations performed.  IMHO the reverse should also hold,
> optimization level should not affect warnings generated.

Where in the world does that come from?  The former does not imply the
latter.

> False positives for -Wuninitialized are easily corrected by
> initializing at declaration.

But for some people, that's just a make-work project; it's also, in a
way, pushing our ideas about software development onto the end users.
I.e., *we* may think that adding the initialization is an easy
correction, but others may violently disagree.

> But lacking consistency can be annoying when a newly detected stray
> false positive kills -Werror compilations for infrequently tested
> configuration options, not because the code changed but because
> different optimizations were performed.

Yes, the lack of consistency is annoying, but the case you're talking
about is IMHO far less annoying than having to go fix all those false
positives that our optimizers are currently avoiding.  Plus, the set of
newly detected false positives should be small, very small if we do our
job right.  I.e., if we're triggering some new false positive, then
that means that an optimizer somewhere hasn't done its job.

> Think oddball configs in gcc bootstraps, or the occasional -O3
> bootstraps on any config yielding a new false positive.  Leaving
> aside cpp conditional code paths, I want to know the universe (or as
> close as I can) of possible false positives with my one and only
> common bootstrap and fix them all right away, and be done with
> warning repairs.

Yes, and for those who want to know that universe of all the potential
false positives, we can provide a switch to do that.

> If the initialization is redundant, it won't matter to codegen.
> I.e. if the optimizer is smart enough to eliminate the uninitialized
> path, then IMHO it should be smart enough (is already smart enough?)
> to eliminate the dead store at the declaration.  Thus there shouldn't
> be any pessimization penalty for silencing the warning.  In fact I'll
> go as far as saying I don't think 4 is ever a useful warning,
> especially from a -Werror perspective.

Actually, it plays directly into the set of warnings Mark was
discussing -- it allows developers to identify coding errors that are
leading to incorrect removal of code, much like we do with our warnings
about conditionals which always evaluate true or false.
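For instance, here is a sketch (again purely illustrative, not one of
Mark's examples) of a coding error where the useful signal is precisely
that code got removed:

/* Illustrative sketch only.  The test below can never be true because
   `len' is unsigned, so the error check is silently deleted as dead
   code.  The "comparison is always false" style of warning is what
   tells the developer about the underlying mistake, and a late
   uninitialized warning can play the same role when the code that
   gets removed is an initialization path.  */
int
check_size (unsigned int len)
{
  if (len < 0)      /* always false for an unsigned value */
    return -1;      /* dead code once the test is folded away */

  return (int) len;
}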
> So if I understand #3 correctly where early==on and late==off for the
> new flag, then my preferred order is 3a, 2.

Sorry, I can't consent to that.  Adding new false positives for the
existing -Wuninitialized option is IMHO a gigantic mistake.

> I think 3b or 4a yield inherently inconsistent results by definition
> and are therefore to be avoided.

3b gives you the *option* to get consistent results.  4a & 4b give you
the *option* to distinguish between the two.

> I'm not sure what 4b means.  When this early&late switch is off, does
> -Wuninitialized degenerate to 2 (early-only) or 3b (late-only) in
> your mind?  If 2 then IMHO it's not horrible but not useful, if 3b
> then I don't like it.
>
> --Kaveh
> --
> Kaveh R. Ghazi                  [EMAIL PROTECTED]