https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60165

--- Comment #16 from Manuel López-Ibáñez <manu at gcc dot gnu.org> ---
(In reply to Vincent Lefèvre from comment #15)
> Well, detecting uninitialized variables is equivalent to generating better
> code. See the following functions. If you want to be able to remove the i ==
> 0 test in the first one (making generated code better), you'll solve the
> warning problem in the second one.

Not really: the pass that generates better code may run after the pass that
warns. And the heuristics that generate better code often hide warnings; see PR18501.
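
For instance, here is a contrived sketch of that effect (not the actual
PR18501 test case): folding can delete the only uninitialized read before
the warning pass ever sees it.

int f(void)
{
  int c;          /* never initialized: a real bug in the source */
  int r = c - c;  /* the uninitialized read is here... */
  return r;       /* ...but if this is folded to `return 0` first,
                     the read of c vanishes and no warning is given */
}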

Neither generating better code nor warning is an exact method; both rely on
heuristics. The warnings are the more fragile of the two, because the goal of
the compiler is to generate better code, not to produce accurate warnings, so
information that would be useful for warning is not maintained throughout the
pipeline.

Then there are heuristics such as not warning for f(&c), on the assumption
that f will initialize c. These are implemented because they are right more
often than they are wrong, but they are sometimes wrong. When it can be
established that the assumption was wrong (after inlining, for example), the
warning is given.
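
A sketch of that situation, with hypothetical names: once the callee's body
is visible and inlined, the f(&c) assumption can be proven wrong.

static void g(int *p, int flag)
{
  if (flag)
    *p = 1;   /* initializes *p only when flag is nonzero */
}

int f(void)
{
  int c;
  g(&c, 0);   /* before inlining: assumed to initialize c, no warning */
  return c;   /* after inlining with flag == 0 propagated, the store
                 is gone and the uninitialized read can be reported */
}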

You seem to be assuming that

void g1(int *p);
int f1()
{
  int c;
  g1(&c);
  return c; /* warn maybe-uninit */
}

int g2();
int f2()
{
  int c = g2();
  return c; /* warn maybe-uninit */
}

int h(int p)
{
  int c;
  if (p)
    c = 1;
  return c; /* warn maybe-uninit */
}

are equivalent in terms of being maybe-uninitialized, and that all of them
should either be warned about or not. But in practice, a warning for f1() or
f2() is most often a false positive, while a warning for h() almost always
points at a real bug.
