http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49820
--- Comment #7 from Eric Botcazou <ebotcazou at gcc dot gnu.org> 2011-07-24 06:56:13 UTC ---

> I think there is a disconnect between ISO/IEC and their desire to produce
> portable code, secure programming, and practical implementations. Confer:
>
> ME: "I want to check the flags register on x86/x64 to determine overflow
> after an ADD or SUB operation."
> ISO/IEC: "What's overflow? Our abstract machines do not overflow. And a
> FLAGS register is not portable, so we're not making any provisions for it."

That's right for the C language, but not for other ISO/IEC languages, e.g. Ada.

> Interestingly, GCC seems to add its own twist: it wants to produce
> optimized code.

GCC doesn't want anything; rather, it can be argued that you asked it to
optimize your code this way.  One of the optimizations activated at the -O2
level is:

`-fstrict-overflow'
     Allow the compiler to assume strict signed overflow rules, depending
     on the language being compiled.  For C (and C++) this means that
     overflow when doing arithmetic with signed numbers is undefined,
     which means that the compiler may assume that it will not happen.
     This permits various optimizations.  For example, the compiler will
     assume that an expression like `i + 10 > i' will always be true for
     signed `i'.  This assumption is only valid if signed overflow is
     undefined, as the expression is false if `i + 10' overflows when
     using twos complement arithmetic.  When this option is in effect any
     attempt to determine whether an operation on signed numbers will
     overflow must be written carefully to not actually involve overflow.

     This option also allows the compiler to assume strict pointer
     semantics: given a pointer to an object, if adding an offset to that
     pointer does not produce a pointer to the same object, the addition
     is undefined.  This permits the compiler to conclude that `p + u > p'
     is always true for a pointer `p' and unsigned integer `u'.  This
     assumption is only valid because pointer wraparound is undefined, as
     the expression is false if `p + u' overflows using twos complement
     arithmetic.

     See also the `-fwrapv' option.  Using `-fwrapv' means that integer
     signed overflow is fully defined: it wraps.  When `-fwrapv' is used,
     there is no difference between `-fstrict-overflow' and
     `-fno-strict-overflow' for integers.  With `-fwrapv' certain types
     of overflow are permitted.  For example, if the compiler gets an
     overflow when doing arithmetic on constants, the overflowed value
     can still be used with `-fwrapv', but not otherwise.

     The `-fstrict-overflow' option is enabled at levels `-O2', `-O3',
     `-Os'.

So, by passing -O2, you effectively passed -fstrict-overflow.  If you don't
care that much about performance, then use -O1 or add -fno-strict-overflow
or -fwrapv.  Finally, the compiler will warn about such simplifications if
you pass -Wstrict-overflow.

> In the end, it would be a lot of help (to a minority of folks) if GCC
> moved from its position of "all programs do not have undefined behavior"
> and provided some intrinsics (where applicable) to help folks with the
> problem:
> * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48580
> * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49467

There are more than a hundred distinct cases of undefined behavior in the C
language as standardized by ISO/IEC.  Coping with all of them by default
would lead to bigger binaries that run slower.  The C language was designed
to be efficient, and GCC is largely true to this vision in using all the
liberty granted by the language.
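To make the manual's warning concrete, here is a minimal sketch (not taken
from this report; the function names are illustrative) contrasting a check
that itself relies on overflow with one written carefully so that no
overflowing operation is ever evaluated:

     #include <limits.h>

     /* Broken: this evaluates a + b, so undefined behavior has already
        occurred by the time the comparison runs.  Under
        -fstrict-overflow, GCC may assume the overflow cannot happen and
        (for b > 0) fold the test to 0, never detecting anything.  */
     int add_overflows_broken(int a, int b)
     {
         return a + b < a;
     }

     /* Careful: compare against INT_MAX/INT_MIN *before* adding, so the
        check involves no overflow and is valid ISO C.  */
     int add_overflows(int a, int b)
     {
         if (b > 0)
             return a > INT_MAX - b;
         else
             return a < INT_MIN - b;
     }

With `-fwrapv' the first version happens to work as intended, but it then
relies on a GCC-specific guarantee rather than on the C standard.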
That has been GCC's consistent policy for more than a decade and is very unlikely to change in the near future. Instead, GCC provides either options (like -fwrapv, -fno-strict-overflow or -fno-strict-aliasing) or intrinsics for specific needs. These come, of course, at the expense of portability, so this solution has its own drawbacks.
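For instance, the `i + 10 > i' case from the manual can be observed directly
with a small test file (hypothetical, not attached to this report):

     /* demo.c */
     int always_true(int i)
     {
         /* With plain -O2 (which implies -fstrict-overflow), GCC may
            fold this to 'return 1', since signed overflow is assumed
            not to happen.  With -O2 -fwrapv the addition wraps, and the
            function returns 0 for i == INT_MAX.  */
         return i + 10 > i;
     }

Comparing the assembly from `gcc -O2 -S demo.c' and `gcc -O2 -fwrapv -S
demo.c' shows the comparison being folded away in the first case but not in
the second, and adding -Wstrict-overflow to the first command asks GCC to
report when it performs this kind of simplification.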