The testsuite/gcc.c-torture/execute/pr34971.c test seems wrong to me.  The type 
of the expression x.b << 8 has size 8, and a size-8 integral type is a 64-bit 
type.  If the result is a 64-bit type, then its argument (x.b) was a 64-bit 
type.  In C++, we observed what the C language standard meant and ever so 
slightly clarified the language.  I don't think we viewed the semantics of C as 
any different from what we wrote at the time.  The problem is that in C, sizeof 
applied to a bit-field doesn't work, but sizeof (bitfield << 0) does.  For that 
to work, the size can't refer to the size in bits, so it must refer to some 
other integral type.  The obvious one to use would be the smallest one that 
fits the width, or the underlying type.  In C++, we felt the underlying type 
was the right one.  The C standard does say that the type of the bit-field is 
what we call the underlying type in C++.  Because of this, the test case is 
wrong, and the C front end's semantics are wrong.
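
To make the question concrete, here's a minimal sketch (not the actual 
pr34971.c source; the 40-bit field and the values are just illustrative 
assumptions):

  #include <stdio.h>

  struct S {
      unsigned long long b : 40;   /* 40-bit field, 64-bit underlying type */
  };

  int main (void)
  {
      struct S x;
      x.b = 0xff00000000ULL;       /* top byte of the 40-bit field set */

      /* sizeof (x.b) is a constraint violation in C, but sizeof on an
         expression involving x.b is fine.  If this prints 8, the shift
         below is done in the 64-bit underlying type.  */
      printf ("sizeof (x.b << 8) = %zu\n", sizeof (x.b << 8));

      /* Done in the 64-bit underlying type, the shift keeps the top byte
         (giving 0xff0000000000); done in a 40-bit type, it would discard
         it.  Which one you see is exactly the point of debate.  */
      printf ("x.b << 8 = 0x%llx\n", (unsigned long long) (x.b << 8));
      return 0;
  }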

Now, this might be contentious to some, so I wonder if a later C language DR 
or C language standard clarified this any?  Any language lawyer want to argue 
other semantics?

https://gcc.gnu.org/ml/gcc-patches/2014-11/msg00723.html is fallout from the 
wrong semantics in the C front end, and the entire issue evaporates once the 
code generation is fixed.

I did expect bit-fields to be well understood and implemented by this point and 
was kind of amused to see that isn't the case yet.  I'd be interested in survey 
results for all C compilers where long (or long long) is wider than int.  
clang, for example, does the right thing.
