On Tue, Mar 09, 2010 at 11:38:23PM +0100, Iustin Pop wrote:
> actually, from /usr/lib/limits.h, -0x80000000 is indeed the minimum
> value for signed int32. a brief look at the tests failing show that
> this is exactly what upstream tries to test, the minimum and maximum
> valid signed values. whether they do this correctly or not i don't
> know yet.
INT_MIN is (-INT_MAX)-1, i.e. it is not defined as a hexadecimal value, though when printed as hexadecimal it does come out as 0x80000000 (without the negation). adding a negation operator to it is what was raising my eyebrows. it could be that as long as everything is a constant that stuff is okay, but once you negate a non-constant value holding INT_MIN you are definitely in trouble, and the level of meta that C++/templating adds to this protobuf compiling stuff makes me think that not everything that appears constant is in fact constant.

fwiw, it looks like gcc has some interesting flags that might prove helpful in tracking down whether this is the problem or not:

  -Wstrict-overflow
  -Wtraditional
  -Wtraditional-conversion
  -Wtype-limits
  -Wconversion
  -Wsign-compare
  -Wsign-conversion
  -ftrapv

and i think this is the flag that optimizes out the comparison that was causing the problem in php:

  -fstrict-overflow (enabled by -O2)

also, this one looks interesting:

  -fwrapv
      This option instructs the compiler to assume that signed
      arithmetic overflow of addition, subtraction and multiplication
      wraps around using twos-complement representation. This flag
      enables some optimizations and disables others. This option is
      enabled by default for the Java front-end, as required by the
      Java language specification.

so it might be interesting to see if the first set of flags produces any useful warnings/errors, or if either of the last two flags gets the tests to pass. i'm starting with the former, but if all goes as planned i'll get access to a sheevaplug this weekend and will try the latter as well.

sean
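
p.s. here's a tiny standalone sketch of the kind of construct i mean. it's hypothetical and simplified (the function name and file name are made up, it's not the actual protobuf test code), just to show why negating a non-constant INT_MIN and these flags interact:

  /* hypothetical simplified example: negating a runtime value that
   * happens to hold INT_MIN is undefined behaviour, because
   * -INT_MIN (i.e. INT_MAX + 1) is not representable in a signed int. */
  #include <limits.h>
  #include <stdio.h>

  static int negated_is_negative(int v)
  {
      /* with -fstrict-overflow (implied by -O2) the compiler is allowed
       * to assume this negation never overflows and may rewrite
       * "-v < 0" as "v > 0", which gives a different answer when
       * v == INT_MIN; with -fwrapv the negation must wrap to
       * two's complement instead, and -ftrapv should make the overflow
       * visible at runtime rather than silently mis-optimized. */
      return -v < 0;
  }

  int main(void)
  {
      volatile int v = INT_MIN;  /* volatile keeps it from being constant-folded */

      printf("INT_MIN bit pattern:     0x%x\n", (unsigned int)INT_MIN);
      printf("-v < 0 for v == INT_MIN: %d\n", negated_is_negative(v));
      return 0;
  }

compiling something like this once with plain "gcc -O2 overflow.c" and once with "gcc -O2 -fwrapv overflow.c" (or -ftrapv) and comparing the output should show whether the optimizer is exploiting the undefined negation, which is roughly the experiment i want to run against the failing tests.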