https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120305

--- Comment #16 from Jonathan Wakely <redi at gcc dot gnu.org> ---
(In reply to aneris from comment #13)
> (In reply to Jonathan Wakely from comment #12)
> > tl;dr if you want to compile with strict -std=c++20 -pedantic settings, you
> > need to strictly follow the rules of the standard.
> 
> I understand, though, at the very least, couldn't this be made a warning for
> -pedantic? I think that would be immensely useful.

I don't see how. The library code is just C++ code; it doesn't know what your
intention was when you wrote it, or whether you wanted to use __int128 without
warnings. And I don't think there's any preprocessor macro that the library can
even check to see whether you used -pedantic.
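
The closest thing is __STRICT_ANSI__, which GCC defines for -ansi and the
strict -std=c++NN modes (as opposed to -std=gnu++NN), so a header can do
something like the sketch below; but nothing at all is defined for -pedantic
by itself:

  #ifdef __STRICT_ANSI__
  // compiled with -ansi or a strict -std=c++NN mode (no GNU extensions)
  #else
  // compiled with -std=gnu++NN or similar
  #endif
  // There is no __PEDANTIC__ or equivalent macro, so the library has no
  // way to tell whether -pedantic was passed.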

> On another note, this code does compile with Clang. Apparently godbolt
> defaults to libstdc++ for clang instead of libc++ and that caused all the
> confusion.

Yes, we've been asking them to make that more obvious for years:
https://github.com/compiler-explorer/compiler-explorer/issues/3682

> Here: https://godbolt.org/z/j6fYMdjeo
> 
> I did some digging and this is because std::is_integral_v<__int128_t> is
> unconditionally true in libc++, unlike in libstdc++ where it's only true
> when the GNU extensions are enabled.

Yes, which is not conforming to the C++20 (and earlier) standards. It is
allowed by C++23 though, and I plan to treat that as a DR for earlier
standards. That's the topic of PR 96710.
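
The observable difference boils down to this (on a target that has __int128
at all):

  #include <type_traits>

  // libc++: both of these pass unconditionally.
  // libstdc++: currently they pass only in the -std=gnu++NN modes; once
  // PR 96710 is resolved they should pass in the strict modes as well.
  static_assert(std::is_integral_v<__int128>);
  static_assert(std::is_integral_v<unsigned __int128>);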

> So I was wondering, is it really out of the question to promote __int128_t
> to an integer type by default like Clang? It'd make some code more
> compatible.

That's PR 96710. It was not allowed by the standard, but we fixed that:
https://cplusplus.github.io/LWG/issue3828
(I reported that issue myself, specifically because I want to resolve this
problem).

(In reply to aneris from comment #14)
> (In reply to aneris from comment #13)
> 
> > I did some digging and this is because std::is_integral_v<__int128_t> is
> > unconditionally true in libc++, unlike in libstdc++ where it's only true
> > when the GNU extensions are enabled.
> 
> Ok this is not it at all.
> 
> looks like libc++ defaults to a long long unsigned as the difference type
> for an iota_view with long unsigned.

Right, for libc++ std::integral<__int128> is always true, but it also looks
like they chose not to use the mechanisms introduced by
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1522r1.pdf (P1522R1)
for iota_view, with the result that range_difference_t<iota_view<uint64_t>>
cannot represent the size of some views.

https://godbolt.org/z/3M7sxh5qn
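
A reduced version of the problem (the static_assert is my own sanity check,
not anything the standard spells out in this form):

  #include <climits>
  #include <cstdint>
  #include <ranges>

  // Difference type of an iota_view over uint64_t values.
  using D = std::ranges::range_difference_t<
      std::ranges::iota_view<std::uint64_t>>;

  // To represent the size of a view covering the whole range of uint64_t,
  // D needs more than 64 bits, e.g. __int128 or an integer-class type as
  // introduced by P1522R1. I'd expect this to pass with libstdc++ and fail
  // with libc++, where D is only 64 bits wide.
  static_assert(sizeof(D) * CHAR_BIT > 64);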
