http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48973

--- Comment #8 from Jakub Jelinek <jakub at gcc dot gnu.org> 2011-05-12 10:31:42 UTC ---
(In reply to comment #6)
> "Fixed" with bitfield lowering where we expand from
> 
>   v.0_1 = v;
>   BF.1_3 = MEM[(struct S *)&s];
>   D.2700_4 = BF.1_3 & -2;
>   D.2702_6 = v.0_1 < 0;
>   BF.1_7 = D.2702_6 | D.2700_4;
>   MEM[(struct S *)&s] = BF.1_7;
>   D.2693_9 = (<unnamed-signed:1>) BF.1_7;
>   D.2694_10 = (unsigned int) D.2693_9;
>   if (D.2694_10 != 4294967295)
> 
> similar to what Jakub proposed to do manually.

Well, if bitfield lowering does this and nothing cleans it up, there is room
for improvement.  It would be sad if it couldn't be optimized already at the
tree level back to:
   v.0_1 = v;
   BF.1_3 = MEM[(struct S *)&s];
   D.2700_4 = BF.1_3 & -2;
   D.2702_6 = v.0_1 < 0;
   BF.1_7 = D.2702_6 | D.2700_4;
   MEM[(struct S *)&s] = BF.1_7;
   D.2693_9 = (<unnamed-signed:1>) D.2702_6;  // change here.  Assuming D.2702
                                              // is either > 1 precision, or
                                              // unsigned
   D.2694_10 = (unsigned int) D.2693_9;
   if (D.2694_10 != 4294967295)
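
For reference, a minimal C reduction of the sort that lowers to the GIMPLE above might look like the sketch below (the struct/field/variable names are my own guesses, not necessarily the PR testcase).  The proposed change is just forward-propagating D.2702_6: since D.2702_6 is 0 or 1 and the low bit of BF.1_7 is exactly D.2702_6, converting either one to the 1-bit signed field type gives the same value, so the re-extraction from the merged word is redundant.

  /* Hypothetical reduction (names are assumptions, not the PR testcase):
     store a comparison result into a signed 1-bit field and then test the
     just-stored value against -1.  */
  struct S { int f : 1; } s;
  int v;

  int
  foo (void)
  {
    /* (s.f = v < 0) has the bitfield's 1-bit signed type, so it is -1 or 0;
       cast to unsigned int it becomes 4294967295 or 0, matching the GIMPLE
       comparison above.  */
    return (unsigned int) (s.f = v < 0) != 4294967295u;
  }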
