https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119104

--- Comment #5 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Alejandro Colomar from comment #4)
> (In reply to Jakub Jelinek from comment #3)
> > The analyzer will hopefully be improved for GCC 16; only minimal
> > support was added so that the analyzer tests didn't regress.
> > The normal -Wnonnull warning actually uses range information already, so if
> > the range suggests that the size can't be zero and NULL is passed, a
> > warning is emitted.
> 
> Then, I think memcpy(3) et al. should not use this new attribute until it's
> stable and has no important regressions, such as this one.  It will be fine
> to use the new attribute once it proves to be safe.

They should.  It is more important not to force UB on cases where there is no
harm (e.g. a runtime memcpy (NULL, NULL, 0) and similar) than to get some extra
warnings in rare cases.

> I agree with not wanting to trigger UB within memcpy(3).
> 
> But I don't think the approach taken was the right one.  I think it should
> have been fixed in implementations first, and then --when we know nothing
> has regressed--, standardize it.

Runtime sanitization is not a mere safety net; it detects cases which static
analyzers can't detect even in theory.  Without LTO, static analyzers see just
a single TU; even with LTO they see just a single binary or library, not the
whole program, and for compile-time and memory reasons they can't analyze all
possible paths from main anyway, only a couple of callers at a time.  And the
common case with pointers or integers is that analyzers just don't know whether
they could be NULL or not (or could be 0 or not).  They warn about obvious
cases where it is proven that a value would likely be NULL on some path, but
analyzers can't just warn on each int foo (int *p) { return *p; } that p could
be NULL when they really don't know that it could be; that would produce so
many false positives that nobody would use them.
