https://gcc.gnu.org/bugzilla/show_bug.cgi?id=15826
--- Comment #16 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to Steven Bosscher from comment #6)
> The tree dump for the original test case now looks like this for me:
>
> ;; Function foo (foo)
>
> foo (p)
> {
>   <bb 2>:
>   return (unsigned int) ((BIT_FIELD_REF <*p, 8, 0> & 1) != 0);
>
> }
>
>
> ;; Function bar (bar)
>
> bar (p)
> {
>   <bb 2>:
>   return (unsigned int) p->bit;
>
> }
>
> The resulting assembler output is the same, but I imagine VRP should be
> able to fold away the "& 1" test. I don't know if the BIT_FIELD_REF
> itself should be optimized away, but I guess so. Consider the following
> test case:
>
> struct s
> {
>   unsigned int bit:1;
> };
>
> unsigned int
> foo (struct s *p)
> {
>   if (p->bit)
>     return (unsigned int) (p->bit);
>   else
>     return 0;
> }
>
> This gets "optimized" at the tree level to this ugly code:
>
> ;; Function foo (foo)
>
> foo (p)
> {
>   unsigned int D.1979;
>
>   <bb 2>:
>   if ((BIT_FIELD_REF <*p, 8, 0> & 1) != 0) goto <L0>; else goto <L4>;
>
>   <L4>:;
>   D.1979 = 0;
>   goto <bb 5> (<L2>);
>
>   <L0>:;
>   D.1979 = (unsigned int) p->bit;
>
>   <L2>:;
>   return D.1979;
>
> }
>
> In summary, I don't think we can close this bug just yet.

I don't think VRP can optimize anything here, as the BIT_FIELD_REF created by
optimize_bitfield_compare accesses the tail padding of struct s. IMHO this is
still a very premature optimization done by fold.