https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97884

--- Comment #10 from s.baur...@tu-berlin.de ---
(In reply to Jonathan Wakely from comment #9)
> (In reply to s.bauroth from comment #7)
> > > The type of an integer constant is the first of the corresponding list
> > > in which its value can be represented.
> > These kind of sentences make me think gcc's behaviour is wrong. The number
> > can be represented in 32 bits.

> So the - sign is not part of the constant. The constant is 2147483648 and
> that doesn't fit in 32 bits. So it's a 64-bit type, and then that gets
> negated.
If the constant is not allowed to have a sign, why try to press it into a signed
type? I know it's the standard that mandates this, but does it make any sense?

> That has been explained several times now.
And I said multiple times that I understand the reasoning.

> "I don't understand C and I won't read the spec" is not a GCC bug.
Not going to comment.

I'm not questioning your reading of the standard, and I'm not saying gcc breaks
the standard. From a programmer's perspective it's not intuitive that 'INT_MIN',
'-2147483647 - 1' and all the other forms (like 'int a = -2147483648;
printf("%i", a);') work, but '-2147483648' does not. And from a technical
perspective it is absolutely unnecessary. Despite what the standard says about
how to tokenize the number, it fits in 32 bits. It just does.
