https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89501

--- Comment #10 from Linus Torvalds <torva...@linux-foundation.org> ---
(In reply to ncm from comment #9)
> What I don't understand is why it doesn't optimize away the check on
> (somecondition), since it is assuming the code in the dependent block always
> runs.

No, it very much doesn't assume that. The 'somecondition' really is dynamic.

What happens is simply that because gcc sees only a single assignment to the
variable, and that assignment is then limited by the subsequent value test to a
single value, gcc will just say "ok, any other place where that variable is
used, just use the known single value".

And it does that whether the 'if (somecondition)' path was taken or not.

It's a perfectly valid optimization. In fact, it's a lovely optimization, I'm
not at all complaining about the code generation.

It's just that as part of that (quite reasonable) optimization it also
optimized away the whole "oops, there wasn't really a valid initialization in
case the if-branch wasn't taken" case.

Obviously that's undefined behavior, and the optimization is valid regardless,
but the lack of warning means that we didn't see that we had technically
undefined behavior that the compiler has silently just fixed up for us.

I think the cause of this all is quite reasonable and understandable, and I
also see why gcc really does want to throw away the undefined case entirely
(because otherwise you can get into the reverse situation where you warn
unnecessarily, because gcc isn't smart enough to see that some undefined case
will never ever actually happen). 

Plus I assume it simplifies things a lot to just not even have to track the
undefined case at all. You can just track "ok, down this path we have a known
value for this SSA, and we don't need to keep track of any inconvenient phi
nodes etc".
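For illustration, a hand-written sketch in roughly GCC's GIMPLE/SSA dump notation (not actual compiler output; `val_0(D)` is how gcc marks an undefined "default" definition) of what that tracking would otherwise look like:

```
  # val_2 = PHI <val_1(if-taken), val_0(D)(if-skipped)>
  if (val_2 != 5)
    goto <bail>;
  # On the fall-through edge val_2 is provably 5, so every later use
  # becomes the constant 5 and the phi -- including its undefined
  # operand val_0(D) -- never needs to be tracked at all:
  return val_2 * 2;    /* becomes: return 10; */
```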
