http://gcc.gnu.org/bugzilla/show_bug.cgi?id=15256

Richard Guenther <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
         AssignedTo|unassigned at gcc dot       |rguenth at gcc dot gnu.org
                   |gnu.org                     |

--- Comment #4 from Richard Guenther <rguenth at gcc dot gnu.org> 2011-05-06 13:12:37 UTC ---
I also see related missed optimizations when lowering regular bitfield
ops:

  <unnamed-unsigned:8> BF.15;
  <unnamed-unsigned:8> BF.14;
  <unnamed-unsigned:8> BF.13;
  <unnamed-unsigned:8> BF.12;
  <unnamed-unsigned:8> BF.11;
  <unnamed-unsigned:8> BF.10;
  <unnamed-unsigned:32> D.2191;
  <unnamed-unsigned:32> BF.9;

<bb 2>:
  BF.9_2 = MEM[(struct S *)this_1(D)];
  D.2191_3 = BF.9_2 & 4294967292;
  BF.9_4 = D.2191_3 | 1;
  MEM[(struct S *)this_1(D)] = BF.9_4;
  BF.10_5 = MEM[(struct S *)this_1(D)];
  BF.10_6 = BF.10_5 | 4;
  BF.11_8 = BF.10_6 & 247;
  BF.12_10 = BF.11_8 | 16;
  BF.13_12 = BF.12_10 & 223;
  BF.14_14 = BF.13_12 | 64;
  BF.15_16 = BF.14_14 & 127;
  MEM[(struct S *)this_1(D)] = BF.15_16;
  return;
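
For reference, source along the following lines could produce a dump of
this shape (hypothetical, reconstructed from the masks above; the field
names are made up):

  struct S {
    unsigned a : 2;  /* the 32-bit access: & 4294967292 | 1 */
    unsigned b : 1;  /* | 4   */
    unsigned c : 1;  /* & 247 */
    unsigned d : 1;  /* | 16  */
    unsigned e : 1;  /* & 223 */
    unsigned f : 1;  /* | 64  */
    unsigned g : 1;  /* & 127 */

    void set ()
    {
      a = 1;
      b = 1; c = 0; d = 1; e = 0; f = 1; g = 0;
    }
  };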

We should be able to optimize the |/& chain easily, either in forwprop or
in reassoc, by noting that we can associate the &s first and apply the |s
last.
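
One way to do the folding is to carry a single (AND mask, OR bits) pair
across the chain.  A minimal sketch, assuming nothing about the real
forwprop/reassoc code (the helper names are made up):

  struct fold { unsigned char mask, bits; };

  /* Fold x | b into the pair: only the OR bits change.  */
  static void fold_ior (struct fold *f, unsigned char b)
  { f->bits |= b; }

  /* Fold x & m into the pair: both the mask and the bits are narrowed.  */
  static void fold_and (struct fold *f, unsigned char m)
  { f->mask &= m; f->bits &= m; }

  /* The chain from the dump above: */
  struct fold f = { 0xff, 0x00 };
  fold_ior (&f, 4);  fold_and (&f, 247);
  fold_ior (&f, 16); fold_and (&f, 223);
  fold_ior (&f, 64); fold_and (&f, 127);
  /* Now f.mask == 0x57 and f.bits == 0x54, so the whole sequence
     reduces to the single store (BF.10_5 & 0x57) | 0x54.  */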
