[Bug c/97884] New: INT_MIN falsely expanded to 64 bit
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97884

            Bug ID: 97884
           Summary: INT_MIN falsely expanded to 64 bit
           Product: gcc
           Version: 10.2.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c
          Assignee: unassigned at gcc dot gnu.org
          Reporter: s.baur...@tu-berlin.de
  Target Milestone: ---

Created attachment 49583
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=49583&action=edit
example file

arm-none-eabi-gcc: when compiling

  printf("%i\n", -2147483648);
  printf("%i\n", (int)-2147483648);

INT_MIN in the first call gets recognized as a 64-bit argument and split
across r2 and r3; r1 remains untouched. In the second call, INT_MIN is
correctly put into r1:

   8:  e3a02102   mov r2, #-2147483648   ; 0x80000000
   c:  e3e03000   mvn r3, #0
  10:  e59f0010   ldr r0, [pc, #16]      ; 28
  14:  ebfe       bl  0
  18:  e3a01102   mov r1, #-2147483648   ; 0x80000000
  1c:  e59f0004   ldr r0, [pc, #4]       ; 28
  20:  ebfe       bl  0

The attached source file was compiled with

  arm-none-eabi-gcc -v -save-temps -c start.c -o start.o

gcc 10.2.0 was configured with

  --target=arm-none-eabi --prefix=$(PREFIX) --enable-interwork
  --enable-languages="c" --with-newlib --without-headers --disable-nls
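For reference, a minimal compilable sketch of the situation described above
(this is not the attached start.c; it assumes 32-bit int and 64-bit long
long, as on arm-none-eabi):

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      /* 2147483648 does not fit in a 32-bit int or long, so the constant
         gets type long long; negating it keeps long long, and under the
         AAPCS a 64-bit argument goes into an even/odd register pair
         (r2/r3), which is why r1 is skipped. */
      printf("%i\n", -2147483648);

      /* Each of these passes a genuine 32-bit int in r1. */
      printf("%i\n", (int)-2147483648);
      printf("%i\n", INT_MIN);            /* defined as (-2147483647 - 1) */
      printf("%i\n", -2147483647 - 1);
      return 0;
  }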
--- Comment #1 from s.baur...@tu-berlin.de ---
Created attachment 49584
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=49584&action=edit
preprocessed source
--- Comment #4 from s.baur...@tu-berlin.de ---
I am aware of the warning; I disagree with its content. INT_MIN is an int,
not a long long int. I understand why it is processed as a long long int
internally, but that should not be visible to the outside world, at least
imho.
--- Comment #7 from s.baur...@tu-berlin.de ---
I do understand that +2147483648 is not an int, and I am aware of how two's
complement works. It seems to me the reason for INT_MIN being defined as
'(-2147483647 - 1)' instead of the mathematically equivalent '-2147483648' is
that the parser tokenizes the absolute value of the literal separately from
its sign. I can also imagine why that eases parsing. But if the absolute
value and the sign are split, why not treat the absolute value as unsigned?
Or maybe do a check 'in the end' (I have no knowledge of the codebase
here...) whether the size of the literal can be reduced again?

The fact is that INT_MIN and '-2147483648' are both integers perfectly
representable in 32 bits. I understand why gcc treats the second one
differently (and clang does too) - I just think it's not right (or
expectable, for that matter). And if it is right, gcc should maybe warn about
a 32-bit literal being expanded to a larger type - not only in format
strings.

> The type of an integer constant is the first of the corresponding list in
> which its value can be represented.

These kinds of sentences make me think gcc's behaviour is wrong. The number
can be represented in 32 bits.
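To illustrate the quoted rule, a small sketch (assuming 32-bit int and long
and 64-bit long long, as on arm-none-eabi; the TYPE_NAME macro is only an
illustrative helper, not part of the report):

  #include <stdio.h>

  /* C11 _Generic selection, used here just to print the type of a
     constant expression. */
  #define TYPE_NAME(x) _Generic((x), \
      int:       "int",              \
      long:      "long",             \
      long long: "long long",        \
      default:   "other")

  int main(void)
  {
      puts(TYPE_NAME(2147483647));      /* "int"                             */
      puts(TYPE_NAME(2147483648));      /* "long long": too big for int/long */
      puts(TYPE_NAME(-2147483648));     /* still "long long": the constant is
                                           typed before unary minus applies  */
      puts(TYPE_NAME(-2147483647 - 1)); /* "int": every operand fits in int  */
      return 0;
  }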
--- Comment #10 from s.baur...@tu-berlin.de ---
(In reply to Jonathan Wakely from comment #9)
> (In reply to s.bauroth from comment #7)
> > > The type of an integer constant is the first of the corresponding list
> > > in which its value can be represented.
> > These kinds of sentences make me think gcc's behaviour is wrong. The
> > number can be represented in 32 bits.
> So the - sign is not part of the constant. The constant is 2147483648 and
> that doesn't fit in 32 bits. So it's a 64-bit type, and then that gets
> negated.

If the constant is not allowed to have a sign, why try to press it into a
signed type? I know it's the standard that does it - but does it make any
sense?

> That has been explained several times now.

And I have said multiple times that I understand the reasoning.

> "I don't understand C and I won't read the spec" is not a GCC bug.

Not going to comment.

I'm not questioning your reading of the standard, and I'm not saying gcc
breaks the standard. From a programmer's perspective it's not intuitive that
'INT_MIN', '-2147483647 - 1' and all the other forms (like 'int a =
-2147483648; printf("%i", a);') work, but '-2147483648' does not. And from a
technical perspective it is absolutely unnecessary. Despite what the standard
says about how to tokenize the number, it fits in 32 bits. It just does.
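As a side note, a sketch of why the 'int a = -2147483648;' form works
(assuming 32-bit int): the constant still has type long long, but the
initializer converts it to int, and since the value -2147483648 is
representable in int the conversion is exact.

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      int a = -2147483648;     /* conversion to int happens here, not at the
                                  printf call; the value fits, so it is exact */
      printf("%i\n", a);       /* fine: a is an int                           */
      printf("%i\n", INT_MIN); /* fine: expands to (-2147483647 - 1), an int  */
      return 0;
  }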