http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48696
--- Comment #7 from Linus Torvalds <torva...@linux-foundation.org> 2011-04-20 15:30:17 UTC ---

(In reply to comment #2)
> I'm not sure where to best address this, rather than throwing in again
> the idea of lowering bitfield accesses early on trees.

So my gut feeling is that getting rid of the bitfield as early as possible, and turning all bitfield accesses into regular load/shift/mask/store operations, is always the right thing to do.

I also think that doing it with the size that the user specified is generally a good idea, ie I sincerely hope that gcc hasn't thrown away the "unsigned int" part of the type when it does the lowering of the bitfield op.

If gcc has forgotten the underlying type, and only looks at the bitfield size and offset, gcc will likely never do a good job at it unless gcc gets _really_ smart and looks at all the accesses around it and decides "I need to do these as 'int'" (ie in the example, the "unsigned" base type is as important as the "bits 0..5" range information).

So I suspect it's better to just do a totally mindless expansion of bitfield accesses early, and then use all the regular optimizations on them, rather than keeping them around as bitfields and trying to optimize at some higher level.

In an ironic twist, the real program that shows this optimization problem is "sparse" (the kernel source code checker), which can actually "linearize and optimize" the test-case itself, and in this case does this all better than gcc (using its "dump the linearized IR" test-program):

[torvalds@i5 ~]$ ./src/sparse/test-linearize test.c
test.c:7:5: warning: symbol 'show_bug' was not declared. Should it be static?
show_bug:
.L0x7f4cf7b93010:
	<entry-point>
	load.32     %r2 <- 0[%arg1]
	and.32      %r3 <- %r2, $-64
	store.32    %r3 -> 0[%arg1]
	lsr.32      %r7 <- %r3, $6
	cast.32     %r8 <- (16) %r7
	ret.32      %r8

Heh. Sparse may get a lot of other things wrong, but it got this particular case right.