https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94631
--- Comment #5 from Rich Felker <bugdal at aerifal dot cx> ---

No, GCC's treatment also seems to mess up bit-fields smaller than int that are fully governed by the standard (no implementation-defined use of non-int types):

    struct foo { unsigned x:31; };
    struct foo bar = {0};

Here bar.x - 1 should yield UINT_MAX, but instead yields -1 (the same representation, but a different type), because GCC behaves as if there were a promotion from a phantom type unsigned:31 to int, rather than treating the expression as having type unsigned to begin with. This can of course be observed by comparing the result against 0.

It's subtle and dangerous because it may also trigger optimization around the undefined behavior of signed overflow, in cases where the correct behavior would be well-defined modular arithmetic.