http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49595

--- Comment #7 from brian m. carlson <sandals at crustytoothpaste dot net> 
2011-06-30 20:16:50 UTC ---
(In reply to comment #5)
> (In reply to comment #3)
> > suppose the only appropriate behavior is not to provide any integer types
> > larger
> > than intmax_t with -std=c99.
> 
> Use -pedantic-errors if you want to reject programs using __int128

I don't.  I want to compile in C99 mode (I use C99-scoped for-loop
declarations) on a variety of different compilers, taking advantage of 128-bit
integers whenever possible (for performance reasons).  The portable way to do
this with the preprocessor is to check whether UINTMAX_MAX is large enough.  In
fact, GCC currently provides no way to determine whether 128-bit integers are
supported at all on the current hardware, which leads to compilation failures
on, say, 32-bit SPARC.

The conformance issue, while important, is secondary.  I do expect GCC to
comply with the standard in standards mode, and I do expect any deviation from
the standards behavior in standards mode to be considered a bug worth fixing. 
But it isn't the primary goal here.

I also see no mention of intmax_t in the SysV ABI for x86-64.  Can you be more
specific about what would cause ABI breakage if intmax_t were to become 128-bit
on x86_64-pc-linux-gnu?  According to
<http://gcc.gnu.org/ml/gcc/2011-03/msg00095.html>, as long as libc behaves
appropriately, it shouldn't be a problem.
