https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99797

--- Comment #9 from Martin Uecker <muecker at gwdg dot de> ---

The behavior of GCC is dangerous, as the example in comment #1 shows. You cannot
reason at all about the generated code. It is not just that the uninitialized
value causes some random choice; it creates situations where seemingly
impossible things can happen. Assume this propagates into another security-relevant
function which, when analyzed independently, appears completely safe,
i.e. maintains some important property by carefully checking its inputs. But
just having an uninitialized read somewhere else compromises the integrity of
the whole program.

Of course, if this is UB, then this is technically allowed from the standard's
point of view.  But what the standard allows is one question. What a good
compiler should do in the case of undefined behavior is a completely different one.

The "optimize based on the assumption that UB can not happen" philosophy
amplifies even minor programming errors into something dangerous. This, of
course, also applies to other UB (in varying degrees). For signed overflow we
have -fsanitize=signed-integer-overflow which can help detect and mitigate such
errors, e.g. by trapping at run-time. And also this is allowed by UB. 

In the case of UB the choice of what to do lies with the compiler, but I think it
is a bug if this choice is unreasonable and does not serve its users well.
