Andreas Bogk <[EMAIL PROTECTED]> writes:

> It is my fear that the existing behaviour of gcc when used without
> -fwrapv breaks a lot of code out there that was written with the
> implicit assumption that signed ints would overflow the way the
> underlying machine integers do. More importantly, some of the code
> that breaks is in checks against integer overflows, and thus relevant
> to security: breaking the checks means opening vulnerabilities. A
> number of IT security professionals shares my view of things here.
I am not interested in starting another lengthy discussion of this general issue. I just want to report that I have a working patch which generates a warning every time gcc transforms code based on the assumption that signed overflow is undefined, except for cases where signed loop indexes are assumed not to wrap around. I plan to start submitting this patch soon.

My current intention, subject of course to the opinions of other maintainers, is to implement a -fstrict-overflow option, along the lines of -fstrict-aliasing. This will be enabled by default at -O2, as is the case for -fstrict-aliasing. -fno-strict-overflow will not be the same as -fwrapv, but it will inhibit optimizations which are only valid if signed overflow is undefined. The new -Wstrict-overflow option will issue a warning for each case where gcc assumes that signed overflow is undefined.

To be clear, this -Wstrict-overflow option generates a lot of false positives. That is because there are a lot of cases where gcc will optimize code in ways that only change the result if the numbers involved are close to INT_MAX or INT_MIN. For example, when compiling the gcc/*.c files, I get 62 warnings.

I would appreciate any comments about this plan.

Some comments about those 62 warnings. First, each one of those warnings is a case where gcc implemented an optimization which it could not have implemented when using -fwrapv.

Now, a couple of examples.

In bb-reorder.c at line 266 we find this:

  find_traces_1_round (REG_BR_PROB_BASE * branch_threshold[i] / 1000,
                       max_entry_frequency * exec_threshold[i] / 1000,
                       count_threshold, traces, n_traces, i, &heap,
                       number_of_rounds);

REG_BR_PROB_BASE is a #define macro with the value 10000. gcc converts (10000 * branch_threshold[i] / 1000) into (10 * branch_threshold[i]). This minor optimization is only possible because signed overflow is undefined. If branch_threshold[i] had the value (INT_MAX / 10 - 1), say, the optimization would cause the expression to have a different result (assuming standard two's-complement wrapping arithmetic). I see 17 warnings of this form.

In expmed.c we find, several times, the test (size - 1 < BITS_PER_WORD). gcc converts this to (size <= BITS_PER_WORD). This optimization is only possible because signed overflow is undefined. If size had the value INT_MIN, the optimization would cause the expression to have a different result. I see 23 warnings of this form.

In fact, all the warnings for gcc/*.c have the same general form: a mathematical simplification which is only permissible if the values do not wrap around. Two small stand-alone test cases showing how each of these two transformations changes the result under wrapping arithmetic are appended below my sign-off.

Ian
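
Here is a minimal sketch of the bb-reorder.c case, as a stand-alone program rather than actual gcc code. It assumes a 32-bit int and should be built with -fwrapv so that signed overflow really does wrap; REG_BR_PROB_BASE is simply re-defined locally with the same value of 10000, and the file name is arbitrary.

  /* Compile with: gcc -O2 -fwrapv wrap1.c; compare with plain -O2.  */
  #include <limits.h>
  #include <stdio.h>

  #define REG_BR_PROB_BASE 10000

  int
  main (void)
  {
    /* volatile forces a runtime read, so the expressions below are not
       folded to constants at compile time.  */
    volatile int x = INT_MAX / 10 - 1;

    /* Under -fwrapv the product REG_BR_PROB_BASE * x wraps around, so
       dividing it by 1000 no longer gives the same answer as simply
       multiplying x by 10.  */
    printf ("%d\n", REG_BR_PROB_BASE * x / 1000);
    printf ("%d\n", 10 * x);
    return 0;
  }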
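
And a sketch of the expmed.c case, in the same spirit: BITS_PER_WORD here is just an illustrative stand-in with the value 64, not the real target macro, and again the program is meant to be built with -fwrapv.

  /* Compile with: gcc -O2 -fwrapv wrap2.c; compare with plain -O2.  */
  #include <limits.h>
  #include <stdio.h>

  #define BITS_PER_WORD 64    /* stand-in value, for illustration only */

  int
  main (void)
  {
    volatile int size = INT_MIN;

    /* Under -fwrapv, size - 1 wraps around to INT_MAX, so the first
       test is false (prints 0) while the second is true (prints 1).  */
    printf ("%d\n", size - 1 < BITS_PER_WORD);
    printf ("%d\n", size <= BITS_PER_WORD);
    return 0;
  }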