https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78115
--- Comment #2 from Marc Glisse <glisse at gcc dot gnu.org> ---
For the first part: when we transform (X+C1)+C2 into X+(C1+C2), we check
that C1+C2 doesn't overflow. But when the combined constant would wrap to
INT_MIN (i.e. the mathematical sum is 2^31), we still have the option of
generating X - INT_MIN, without going through an unsigned type (see the
first sketch below).

For the second part, I am not sure either. Some guesses: X - INT_MIN
becomes X ^ INT_MIN. Since we know that every bit set in INT_MIN (just
the sign bit) is also set in X (X < 0 in that branch), this becomes
X & INT_MAX. In the other branch, where X >= 0, X & INT_MAX folds to X
(INT_MAX acts as a neutral element there), so the operation can be done
unconditionally. A bit far-fetched, maybe...
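
A minimal C sketch of the first case, assuming 32-bit int and picking the
concrete constants C1 = 1 and C2 = INT_MAX for illustration (my choice,
not necessarily the bug's constants); the arithmetic is done in unsigned
so the harness itself has no signed-overflow UB:

#include <limits.h>
#include <stdio.h>

/* (X + C1) + C2 with C1 = 1, C2 = INT_MAX; the back-conversion of an
   out-of-range unsigned value to int is the usual modulo wrap on gcc. */
static int reassoc_naive(int x)
{
    return (int)(((unsigned)x + 1u) + (unsigned)INT_MAX);
}

/* The proposed replacement X - INT_MIN: subtracting INT_MIN adds
   2^31 modulo 2^32, which is exactly 1 + INT_MAX. */
static int reassoc_sub(int x)
{
    return (int)((unsigned)x - (unsigned)INT_MIN);
}

int main(void)
{
    const int probes[] = { INT_MIN, -1, 0, 1, 42, INT_MAX };
    for (unsigned i = 0; i < sizeof probes / sizeof probes[0]; i++)
        printf("%11d -> %11d %11d\n", probes[i],
               reassoc_naive(probes[i]), reassoc_sub(probes[i]));
    return 0;
}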
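
And a sketch of the second chain, checking that the branchy form
x < 0 ? x - INT_MIN : x, the xor form, and the unconditional x & INT_MAX
all agree (function names are mine; the subtraction is again done in
unsigned so the harness has no UB, and the result always fits in int
because the sign bit ends up clear):

#include <limits.h>
#include <stdio.h>

static int branchy(int x)
{
    /* x - INT_MIN only ever executes when x < 0 here */
    return x < 0 ? (int)((unsigned)x - (unsigned)INT_MIN) : x;
}

static int via_xor(int x)
{
    /* step 1: subtracting INT_MIN just flips the sign bit */
    return x < 0 ? (x ^ INT_MIN) : x;
}

static int unconditional(int x)
{
    /* step 2: for x < 0 the sign bit is set, so the xor clears it,
       i.e. x & INT_MAX; for x >= 0, x & INT_MAX is already x */
    return x & INT_MAX;
}

int main(void)
{
    const int probes[] = { INT_MIN, INT_MIN + 1, -2, -1, 0, 1, INT_MAX };
    for (unsigned i = 0; i < sizeof probes / sizeof probes[0]; i++)
        printf("%11d -> %11d %11d %11d\n", probes[i],
               branchy(probes[i]), via_xor(probes[i]),
               unconditional(probes[i]));
    return 0;
}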