On Fri, Mar 12, 2010 at 09:22:31PM +0100, sean finney wrote:
> On Tue, Mar 09, 2010 at 11:38:23PM +0100, Iustin Pop wrote:
> > Actually, from /usr/lib/limits.h, -0x80000000 is indeed the minimum
> > value for a signed int32. A brief look at the failing tests shows that
> > this is exactly what upstream tries to test: the minimum and maximum
> > valid signed values. Whether they do this correctly or not I don't
> > know yet.
>
> INT_MIN is (-INT_MAX)-1, i.e. it is not defined as a hexadecimal value,
> though when printed as hexadecimal it is also equivalent to 0x80000000
> (without the negation). Adding a negation operator to this is what was
> raising my eyebrows. It could be that as long as everything is a constant
> that stuff is okay, but once you negate a non-constant value holding
> INT_MIN you are definitely in trouble, and the level of meta that C++
> templating adds to this protobuf compilation makes me think that not
> everything that appears constant is in fact constant.
Honestly this is way above my skills :), but I don't think the above is
true. Constant or not, negation should work the same:

#include <stdio.h>
#include <limits.h>

int main()
{
    /* shuffle j around a bit so that it is not a plain constant */
    long long int j = LLONG_MAX;
    int check = 0;

    j -= 10;
    j += 5;
    j += 3;
    j += 2;

    /* j is back to LLONG_MAX, so (-j - 1) must equal LLONG_MIN */
    check = (-j - 1) == LLONG_MIN;
    printf("%d\n", check);
    return 0;
}

This small test program prints 1 with any combination of the flags below.
Also note that protobuf has unit tests for the 64-bit arithmetic routines,
and those don't fail…

>, it looks like gcc has some interesting flags that might prove
> helpful in tracking down if this is the problem or not:
>
>     -Wstrict-overflow
>     -Wtraditional
>     -Wtraditional-conversion
>     -Wtype-limits
>     -Wconversion
>     -Wsign-compare
>     -Wsign-conversion
>     -ftrapv
>
> and I think this is the flag that optimizes out the comparison that was
> causing the problem in php:
>
>     -fstrict-overflow (enabled in -O2)
>
> also, this one looks interesting:
>
>     -fwrapv
>         This option instructs the compiler to assume that signed
>         arithmetic overflow of addition, subtraction and multiplication
>         wraps around using twos-complement representation. This flag
>         enables some optimizations and disables others. This option is
>         enabled by default for the Java front-end, as required by the
>         Java language specification.

Hmm, this might make sense, except for my latest findings, which I
reported on the debian-arm list.

First, and very important: gcc 4.3 passes the tests, while gcc 4.4 (now
the default in sid) fails them. This, coupled with the fact that every
single other architecture works fine, tells me it's some kind of
regression in gcc 4.4 on armel rather than a question of following the
standard or not.

Second, see my email on the list
(http://lists.debian.org/debian-arm/2010/03/msg00073.html) about how a
trivial, unrelated change fixes the issue, which again makes me think
it's a compiler issue rather than a standards-conformance one.

I'll still try to run some tests with the above flags (two sketches
below), but it will take a while due to the slow QEMU speed. I'm too
spoiled by x86 multicore :)

thanks,
iustin
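PS: to make sean's concern concrete, here is a minimal sketch (my own
illustration, not code from protobuf) of the exact hazard he describes:
negating a non-constant value that holds INT_MIN. The volatile is only
there to keep the compiler from folding the negation at compile time.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* volatile forces the value to be read at run time, so the
       negation cannot be constant-folded away */
    volatile int x = INT_MIN;

    /* -INT_MIN does not fit in an int, so this negation is undefined
       behaviour in C; on two's-complement machines it typically wraps
       back to INT_MIN. Built with -ftrapv it should instead abort at
       run time, assuming the trapping builtins work as documented. */
    int y = -x;

    printf("%d\n", y);
    return 0;
}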
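PPS: and a sketch of what -fstrict-overflow vs. -fwrapv means in
practice, since that is the comparison-elimination issue mentioned for
php. Again, this only illustrates the documented semantics; I haven't
run it on armel yet.

#include <limits.h>
#include <stdio.h>

/* A classic overflow check. Under -fstrict-overflow (enabled at -O2)
   the compiler may assume signed overflow never happens and fold
   "x + 1 < x" to 0, deleting the check. Under -fwrapv the addition
   wraps in two's complement and the check behaves as written. */
static int increment_overflows(int x)
{
    return x + 1 < x;
}

int main(void)
{
    volatile int x = INT_MAX;   /* force a run-time value */

    printf("%d\n", increment_overflows(x));
    /* gcc -O2:         likely prints 0 (check optimized away) */
    /* gcc -O2 -fwrapv: prints 1 (wraparound is defined)       */
    return 0;
}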