Reading a signed bitfield value that needs to be extended to a larger type (for example, assigning a 24-bit bitfield value to an int) results in zero extension instead of sign extension when the code is compiled with g++ with optimizations enabled (-O1 or higher). Compiling the same code with gcc, or disabling optimizations, makes the problem disappear.
The following code reproduces the problem:

#include <stdio.h>

struct TEST_STRUCT
{
    int f_8 : 8;
    int f_24 : 24;
};

int main()
{
    struct TEST_STRUCT x;
    int a = -123;
    x.f_24 = a;
    printf("a=%d (%08X)\n", (int)a, (int)a);
    printf("x.f_24=%d (%08X)\n", (int)x.f_24, (int)x.f_24);
    if ((int)x.f_24 != (int)a)
        printf("test failed\n");
    else
        printf("test ok\n");
    return 0;
}

////////////////////////////////

Expected correct result:

a=-123 (FFFFFF85)
x.f_24=-123 (FFFFFF85)
test ok

Faulty result:

a=-123 (FFFFFF85)
x.f_24=16777093 (00FFFF85)
test failed

This is a regression, as gcc 3.4.6 did not have this bug. This problem may also be related to http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32346 and http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30332

--
           Summary: Invalid code generation for reading signed negative
                    bitfield value (g++ optimization)
           Product: gcc
           Version: 4.1.2
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: regression
        AssignedTo: unassigned at gcc dot gnu dot org
        ReportedBy: siarhei dot siamashka at gmail dot com
 GCC build triplet: i686-pc-linux-gnu
  GCC host triplet: i686-pc-linux-gnu
GCC target triplet: i686-pc-linux-gnu

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32687
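
A possible workaround sketch, not part of the original report: until the compiler bug is fixed, the 24-bit value can be sign-extended by hand after it is read from the bitfield. The helper name sign_extend_24 below is hypothetical, and the code assumes int is at least 32 bits wide:

#include <stdio.h>

struct TEST_STRUCT
{
    int f_8 : 8;
    int f_24 : 24;
};

/* Hypothetical helper: reconstructs the signed value from the low 24
   bits using only well-defined arithmetic, so it gives the correct
   result whether the compiler sign-extended or zero-extended the
   bitfield read. Assumes int is at least 32 bits wide. */
static int sign_extend_24(unsigned int v)
{
    v &= 0xFFFFFFu;                /* keep only the low 24 bits */
    if (v & 0x800000u)             /* 24-bit sign bit set? */
        return (int)v - 0x1000000; /* subtract 2^24 to recover the negative value */
    return (int)v;
}

int main()
{
    struct TEST_STRUCT x;
    x.f_24 = -123;
    /* prints -123 even when the plain read yields 16777093 */
    printf("sign_extend_24(x.f_24)=%d\n", sign_extend_24((unsigned int)x.f_24));
    return 0;
}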