http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18041

--- Comment #5 from Richard Guenther <rguenth at gcc dot gnu.org> 2011-05-10 11:38:50 UTC ---
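For reference, the dumps below would come from a testcase along these lines (reconstructed from the GIMPLE; struct B and the member names bit0/bit1 appear in the dumps, the function name and the unsigned int base type are guesses):

  struct B
  {
    unsigned int bit0 : 1;
    unsigned int bit1 : 1;
  };

  void
  foo (struct B *b)
  {
    b->bit0 = b->bit0 ^ b->bit1;
  }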
With a patch I have, we now optimize this at the tree level to

<bb 2>:
  D.2686_2 = b_1(D)->bit0;
  D.2688_4 = b_1(D)->bit1;
  D.2693_10 = D.2688_4 ^ D.2686_2;
  b_1(D)->bit0 = D.2693_10;
  return;

and, with bitfield lowering applied, to

<bb 2>:
  BF.0_2 = MEM[(struct B *)b_1(D)];
  D.2694_6 = BF.0_2 >> 1;
  D.2701_18 = D.2694_6 ^ BF.0_2;
  D.2696_12 = BF.0_2 & 4294967294;
  D.2697_13 = D.2701_18 & 1;
  BF.2_14 = D.2697_13 | D.2696_12;
  MEM[(struct B *)b_1(D)] = BF.2_14;
  return;

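In C terms, the lowered form is a single read-modify-write of the containing 32-bit word (the mask 4294967294 is 0xfffffffe, i.e. clear bit 0). A rough sketch, reusing struct B from above; memcpy stands in for the word-sized MEM access and the helper name is made up:

  #include <string.h>

  void
  foo_lowered (struct B *b)
  {
    unsigned int word;
    memcpy (&word, b, sizeof word);          /* BF.0_2 = MEM[(struct B *)b]        */
    unsigned int x    = (word >> 1) ^ word;  /* D.2694_6, D.2701_18: bit0^bit1     */
    unsigned int bit0 = x & 1u;              /* D.2697_13: the new bit0            */
    unsigned int rest = word & 0xfffffffeu;  /* D.2696_12: all the other bits      */
    word = bit0 | rest;                      /* BF.2_14: recombine                 */
    memcpy (b, &word, sizeof word);          /* MEM[(struct B *)b] = BF.2_14       */
  }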