On 07/24/2015 03:55 PM, Manuel López-Ibáñez wrote:
I think it also depends on the particular warning and on the balance
of true positives vs. false positives in typical code bases.
In this case, returning NULL on any code path from a returns_nonnull
function, even if the path is currently unreachable under some macro
configuration, seems like a bad idea. Of course, I'm open to being
proven wrong :-)
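For illustration, here is a minimal sketch of the pattern being discussed (the function name and configuration macro are mine, not from the thread): a returns_nonnull function whose NULL return sits on a path that a macro configuration makes unreachable.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical configuration macro: when defined, allocation failure
   aborts, and the NULL return below becomes unreachable. */
#define ABORT_ON_OOM 1

__attribute__((returns_nonnull))
void *xmalloc(size_t n)
{
    void *p = malloc(n ? n : 1);
    if (p == NULL) {
#ifdef ABORT_ON_OOM
        abort();        /* the only path that could return NULL dies here */
#else
        return NULL;    /* violates returns_nonnull in this configuration */
#endif
    }
    return p;
}
```

Whether warning on the NULL return in the ABORT_ON_OOM configuration counts as a true or a false positive is exactly the disagreement in this thread.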
Again, this is a matter of personal preference and I think we come from
two totally different mindsets when it comes to this class of warnings.
Moreover, this warning works in exactly the same cases as
__attribute__((nonnull)) does for function arguments, and so far those
haven't been a source of false positives.
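As a concrete sketch of the argument-side attribute (the helper name is mine), a front end can flag a literal NULL at a call site without any data-flow analysis:

```c
#include <string.h>

/* Hypothetical helper: the attribute promises s is never NULL. */
__attribute__((nonnull))
size_t my_len(const char *s)
{
    return strlen(s);
}

/* my_len(NULL);      -- an explicit NULL argument like this is caught
                         in the FE (-Wnonnull); a NULL that only appears
                         after constant propagation needs the middle end. */
```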
I'm sure I could write one ;-) And given that an FE-based version of this
will only catch explicit NULLs in argument lists, I consider it so weak as
to be virtually useless.
Well, it is catching exactly all the cases that you were testing for
in your original -Wnull-attribute patch ;-)
Which is an argument that we should have more complex testcases --
there's certainly real world code where this kind of stuff is an issue.
I'm very much in favour of this, but only for things that cannot
reliably be warned about from the FE. Clang has shown that it is
possible to make FE warnings much more accurate and still compile
faster than GCC, by performing some kind of fast CCP (and VRP?) in the
FE (or by running the CCP and VRP passes earlier, even without
optimization):
And my assertion is that for things like we're discussing, you need both
the analysis and the optimizations: to expose the non-trivial cases and to
prune away those which are false positives. I consider exposing the
non-trivial cases and pruning away the false positives the most important
aspects of this kind of work.
Based on my experience, I'm not convinced that moving warnings to the
middle-end is a good idea. The middle-end does a very poor job of
keeping sensible locations when doing transformations, and it will not
only introduce false positives, it will also remove true positives.
The diagnostics often refer to the wrong variable, or to code that is not
what the user originally wrote, which makes it very hard to understand
the problem. One only has to read all the reports we have about
-Warray-bounds, -Wuninitialized, -Wstrict-overflow and other
middle-end warnings.
Again, I think we come at this from fundamentally different viewpoints.
I'm willing to compromise on locations, and I think we have different
notions of what a true positive is: if it's on an unreachable path,
then I consider warning about it a false positive.
Those views are based on decades of writing these analyses, using the
results, and then being horrified by the things that are missed because a
particular warning is so weak that it's nearly useless (-Warray-bounds
comes to mind).
For example, Clang is currently able to warn about the following case
without any optimization, while GCC cannot at any optimization level:
int f(bool b) {
  int n;
  if (b)
    n = 1;
  return n;
}
And this is a known failing in GCC. Interestingly enough, I believe my
proposed revamping of how we handle uninitialized warnings would catch
this, IIRC.
Another example is -Warray-bounds, for which the warnings from Clang
are not only more precise:
Our implementation of -Warray-bounds sucks because it only warns when we
have a range that collapses down to a singularity that feeds into an
array reference. It does absolutely no may-be-out-of-bounds analysis.
For that reason I consider -Warray-bounds as it stands today
fundamentally broken.
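A hypothetical pair of accesses illustrating that distinction (the array and function names are mine): the first index collapses to a known out-of-range constant that VRP can flag, while the second is merely *possibly* out of bounds and goes undiagnosed.

```c
static int a[4];

int definite(void)
{
    int i = 7;
    return a[i];    /* i is a singular, provably out-of-range value:
                       the one shape of bug -Warray-bounds can catch */
}

int maybe(int i)
{
    if (i >= 0)
        return a[i];    /* i could be >= 4, but with no
                           may-be-out-of-bounds analysis this
                           access is never diagnosed */
    return 0;
}
```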
but they work even without -O2:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=35587
I don't really care about that. Others do, I don't. But this is
something folks can fix if the consensus is that we want these warnings
with the optimizer off. It's a trade-off, just like just about
everything in this space.
jeff