Paul Eggert wrote:
> No initiatives are needed, at least for C.  Using uninitialized storage
> is undefined behavior in the current C standard and this has been true
> ever since C was standardized.  I imagine C++ is similar.
> ...
> But in cases like these GCC actually "knows" that variables are
> uninitialized and it sometimes optimizes based on this knowledge.
> For example, for:
>
>   _Bool f (void) { char *p; return !p; }
>
> gcc -O2 (GCC 11.1.1 20210531 (Red Hat 11.1.1-3)) "knows" that P is
> uninitialized and generates code equivalent to that of:
>
>   _Bool f (void) { return 1; }
>
> That is, GCC optimizes away the access to p's value, which GCC can do
> because the behavior is undefined.
Ouch, I seriously underrated this warning. Thanks for correcting me.
Indeed, GCC has done this optimization since version 4.3.

> > If GCC ever infers that it is "certainly uninitialized", we could
> > defeat that through a use of 'volatile', such as
>
> Yes, some use of volatile should do the trick for GCC (which is what my
> patch did).  However, one would still have problems with a debugging
> implementation, e.g., if GCC ever supports an -fsanitize=uninitialized
> option that catches use of uninitialized storage.

Yes, this test probably fails under 'valgrind'. I don't know a fail-safe
workaround: when the purpose is to verify that a memory region which has
gone out of scope has been cleared, the test necessarily has to access
that memory region as if it were uninitialized.

Bruno