http://gcc.gnu.org/bugzilla/show_bug.cgi?id=15596

Richard Guenther <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|WAITING                     |ASSIGNED
         AssignedTo|unassigned at gcc dot       |rguenth at gcc dot gnu.org
                   |gnu.org                     |

--- Comment #19 from Richard Guenther <rguenth at gcc dot gnu.org> 2011-05-06 13:18:31 UTC ---
Even with bitfield accesses lowered at the tree level we end up with

<bb 2>:
  D.1736_2 = (<unnamed-signed:20>) s_1(D);
  BF.0_3 = MEM[(struct bitstr *)&<retval>];
  D.1741_4 = BF.0_3 & -1048576;
  D.1742_5 = (<unnamed-unsigned:20>) D.1736_2;
  D.1743_6 = (int) D.1742_5;
  BF.0_7 = D.1741_4 | 1048576;
  BF.1_9 = BF.0_7 | D.1743_6;
  D.1737_13 = (signed char) l_12(D);
  D.1738_14 = (<unnamed-signed:1>) D.1737_13;
  D.1747_16 = BF.1_9 & -6291457;
  D.1748_17 = (<unnamed-unsigned:1>) D.1738_14;
  D.1749_18 = (int) D.1748_17;
  D.1750_19 = D.1749_18 << 22;
  BF.3_20 = D.1750_19 | D.1747_16;
  MEM[(struct bitstr *)&<retval>] = BF.3_20;
  return <retval>;

There are some optimization opportunities here if we can recognize the
truncating and extending conversions as bit manipulations.
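
A minimal sketch of the equivalence in question (not the PR's testcase; the
struct u20 and the two helpers are made up for illustration): storing a value
through an unsigned 20-bit bitfield and reading it back, i.e. the truncating
plus zero-extending conversion pair in the dump above, computes the same
result as a plain "x & 0xfffff", so the conversion chain could be folded into
a single AND that then combines with the surrounding bitfield masks.

#include <assert.h>

struct u20 {
  unsigned v : 20;              /* stands in for <unnamed-unsigned:20> */
};

static int
truncate_extend (int x)
{
  struct u20 t;
  t.v = (unsigned) x;           /* truncation to 20 bits */
  return (int) t.v;             /* zero-extension back to int */
}

static int
as_mask (int x)
{
  return x & 0xfffff;           /* candidate folded form: one bit operation */
}

int
main (void)
{
  int tests[] = { 0, 1, -1, 42, -42, 0x7ffff, 0x80000, 0x12345678 };
  for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++)
    assert (truncate_extend (tests[i]) == as_mask (tests[i]));
  return 0;
}

The asserts hold on a two's-complement target such as the ones GCC generates
the dump above for; the point is only that the (unsigned:20)/(int) pair is a
bit manipulation in disguise.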
