If we look at this testcase, we have a function like:

  int foo(unsigned short x)
  {
    unsigned short y;
    y = x > 32767 ? x - 32768 : 0;
    return y;
  }
x is promoted to a signed int by the front end, since the type of 32768 is signed int. So when we pass 65535 to foo (as in the testcase), we get some large negative number for (signed int)x, and then we subtract more from that number, which causes a signed overflow. Does this sound right? Should the testcase use -fwrapv, or change 32768 to 32768u? (This might also be wrong in SPEC and in gzip too; I have not looked yet.)

Thanks,
Andrew Pinski