As mentioned in http://gcc.gnu.org/ml/gcc/2010-01/msg00033.html, the following
testcase is not optimized well on PowerPC with -O2 -m32, while it is optimized
fully on, say, x86_64 or PowerPC with -O2 -m64:
union U
{
  unsigned u;
  struct
  {
    unsigned b1:2;
    unsigned:8;
    unsigned b2:2;
    unsigned b3:2;
    unsigned:18;
  } b;
};

unsigned
test (void)
{
  union U u;
  u.u = 0;
  u.b.b1 = 2;
  u.b.b2 = 3;
  u.b.b3 = 1;
  return u.u;
}

The problem is that the bitfield stores aren't converted into and/or operations
during tree optimizations, and at the RTL level the expander emits them using
insv insns (i.e. a ZERO_EXTRACT on the lhs).  While for -m64 (and on targets
that don't have insv) CSE then manages to fold the and/or operations with
constant arguments down to a return of a constant, nothing optimizes the lhs
ZERO_EXTRACT.

The gcc ML mail mentioned above claims this is a regression from 2.95.  I've
just tried 3.4, which didn't optimize it either, but I don't have 2.95 around
to test this.


-- 
           Summary: [4.3/4.4/4.5 Regression] ZERO_EXTRACT on lhs never
                    optimized out
           Product: gcc
           Version: 4.5.0
            Status: UNCONFIRMED
          Severity: enhancement
          Priority: P3
         Component: rtl-optimization
        AssignedTo: unassigned at gcc dot gnu dot org
        ReportedBy: jakub at gcc dot gnu dot org
GCC target triplet: powerpc-linux


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42699