http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55882



--- Comment #12 from Richard Biener <rguenth at gcc dot gnu.org> 2013-01-09 11:35:33 UTC ---

(In reply to comment #10)
> Eric, I am double-checking my patch and I believe all this 'bitpos' handling
> in set_mem_attributes_minus_bitpos (and/or MEM_OFFSET in general) looks
> suspicious.
> 
> /* For a MEM rtx, the offset from the start of MEM_EXPR.  */
> #define MEM_OFFSET(RTX) (get_mem_attrs (RTX)->offset)
> 
> I read from this that the XEXP (RTX, 0) points to MEM_EXPR + MEM_OFFSET
> (if MEM_OFFSET_KNOWN_P, and if !MEM_OFFSET_KNOWN_P we don't know how
> XEXP (RTX, 0) and MEM_EXPR relate).
> 
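In plain-C terms that reading would be (an analogy only, not GCC code): if
MEM_EXPR names the access p->f and MEM_OFFSET is d, the address recorded in
XEXP (mem, 0) is (char *) &p->f + d, so a negative MEM_OFFSET (the "+-6"
printed in the dump below is a MEM_OFFSET of -6) stands for an address below
the start of the named object.

#include <stdio.h>
#include <stddef.h>

struct s { int a; short f; };

/* What XEXP (mem, 0) would denote for a MEM whose MEM_EXPR is p->f and
   whose MEM_OFFSET is mem_offset, under the reading above.  */
static char *
mem_address (struct s *p, ptrdiff_t mem_offset)
{
  return (char *) &p->f + mem_offset;
}

int
main (void)
{
  struct s x;
  /* MEM_OFFSET 0 points at the field itself, -2 two bytes below it.  */
  printf ("%td\n", (char *) &x.f - mem_address (&x, -2));
  return 0;
}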

> Now, in expand_assignment we do
> 
>       tem = get_inner_reference (to, &bitsize, &bitpos, &offset, &mode1,
>                                  &unsignedp, &volatilep, true);
> 
>       if (TREE_CODE (to) == COMPONENT_REF
>           && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
>         get_bit_range (&bitregion_start, &bitregion_end, to, &bitpos, &offset);
> 
> ...
>           to_rtx = expand_expr (tem, NULL_RTX, VOIDmode, EXPAND_WRITE);
> ...
>   offset it with variable parts from 'offset'
> ...
>               set_mem_attributes_minus_bitpos (to_rtx, to, 0, bitpos);
> 

> but bitpos here is _not_ on top of 'to' but extracted from 'to' (and
> eventually modified by get_bit_range which may shift things to 'offset').
> 
> The only 'bitpos' that should be passed to set_mem_attributes_minus_bitpos
> is one that reflects - in the bitfield access case - that we actually
> access things at a different position.  But at this point we don't know
> what optimize_bitfield_assignment_op or store_field will eventually do.
> Also MEM_OFFSET is in bytes while I believe 'bitpos' can end up as
> an actual bit position, so
> 

>   /* If we modified OFFSET based on T, then subtract the outstanding
>      bit position offset.  Similarly, increase the size of the accessed
>      object to contain the negative offset.  */
>   if (apply_bitpos)
>     {
>       gcc_assert (attrs.offset_known_p);
>       attrs.offset -= apply_bitpos / BITS_PER_UNIT;
>       if (attrs.size_known_p)
>         attrs.size += apply_bitpos / BITS_PER_UNIT;
>     }
> 
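As a toy illustration of that bits-vs-bytes point (not GCC code, numbers made
up): the adjustment is applied in whole bytes, so a bit position of 48 becomes
a 6-byte adjustment, and a bit position that is not a multiple of
BITS_PER_UNIT is silently truncated by the division.

#include <stdio.h>

#define BITS_PER_UNIT 8

int
main (void)
{
  long offset = 0, size = 2;               /* made-up starting attributes  */
  long apply_bitpos = 48;                  /* a bit position, i.e. 6 bytes */

  offset -= apply_bitpos / BITS_PER_UNIT;  /* 0 - 6 -> -6 */
  size   += apply_bitpos / BITS_PER_UNIT;  /* 2 + 6 ->  8 */

  printf ("offset %ld size %ld\n", offset, size);
  return 0;
}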

> (whatever this comment means).  I believe my fix is correct following
> the MEM_OFFSET description and guessing at what the argument to
> set_mem_attributes_minus_bitpos means by looking at its use.  But I
> believe that expand_assignment should pass zero as bitpos to
> set_mem_attributes_minus_bitpos (making the only caller that calls this
> function with bitpos != 0 go away).
> 

> In this testcase we want to access something at offset 8 bytes, 0 bits, but
> the memory model tells us the bitregion to consider is
> everything from offset 6 to offset 14, so instead of the correct
> initial mem
> 
> (mem/j:HI (plus:SI (reg/f:SI 206)
>         (const_int 8 [0x8])) [0 dmfe[i_1].use_alt_rd_dqs S2 A32])
> 
> (with 4 byte alignment!) we create
> 
> (mem/j:BLK (plus:SI (reg/f:SI 206)
>         (const_int 6 [0x6])) [0 dmfe[i_1].use_alt_rd_dqs+-6 S8 A32])
> 
> where the alignment is bogus.
> 
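For reference, a layout along these lines reproduces those numbers (this is
not the actual testcase; the array and field names are taken from the RTL
dump, the rest is an assumed reconstruction for a typical 32-bit target): a
run of bit-fields covering bytes 6..13 of a 16-byte, 4-byte-aligned element,
with use_alt_rd_dqs living in the half-word at byte offset 8.

struct dmfe_elt
{
  int a;                             /* bytes 0-3, gives the element 4-byte alignment */
  unsigned short b;                  /* bytes 4-5, last non-bit-field before the run  */
  unsigned short c : 16;             /* bytes 6-7, the bit region starts at offset 6  */
  unsigned short use_alt_rd_dqs : 1; /* byte 8, the field being stored                */
  unsigned short d : 15;             /* rest of bytes 8-9                             */
  unsigned short e : 16;             /* bytes 10-11                                   */
  unsigned short f : 16;             /* bytes 12-13, the bit region ends at offset 14 */
  unsigned short tail;               /* bytes 14-15, terminates the bit region        */
};

struct dmfe_elt dmfe[4];

void
set_flag (int i)
{
  dmfe[i].use_alt_rd_dqs = 1;        /* the store expand_assignment is expanding */
}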

> Thus, given the above general MEM_OFFSET analysis I'd say the following
> (on top of my previous patch) should fix things more correctly:
> 

> Index: gcc/expr.c
> ===================================================================
> --- gcc/expr.c  (revision 195014)
> +++ gcc/expr.c  (working copy)
> @@ -4669,7 +4669,7 @@ expand_assignment (tree to, tree from, b
>        || TREE_CODE (TREE_TYPE (to)) == ARRAY_TYPE)
>      {
>        enum machine_mode mode1;
> -      HOST_WIDE_INT bitsize, bitpos;
> +      HOST_WIDE_INT bitsize, bitpos, bitpos_adj;
>        unsigned HOST_WIDE_INT bitregion_start = 0;
>        unsigned HOST_WIDE_INT bitregion_end = 0;
>        tree offset;
> @@ -4683,9 +4683,15 @@ expand_assignment (tree to, tree from, b
>        tem = get_inner_reference (to, &bitsize, &bitpos, &offset, &mode1,
>                                  &unsignedp, &volatilep, true);
> 
> +      bitpos_adj = 0;
>        if (TREE_CODE (to) == COMPONENT_REF
>           && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
> -       get_bit_range (&bitregion_start, &bitregion_end, to, &bitpos, &offset);
> +       {
> +         HOST_WIDE_INT orig_bitpos = bitpos;
> +         get_bit_range (&bitregion_start, &bitregion_end,
> +                        to, &bitpos, &offset);
> +         bitpos_adj = orig_bitpos - bitpos;
> +       }
> 
>        /* If we are going to use store_bit_field and extract_bit_field,
>          make sure to_rtx will be safe for multiple use.  */
> @@ -4839,7 +4845,7 @@ expand_assignment (tree to, tree from, b
>                  DECL_RTX of the parent struct.  Don't munge it.  */
>               to_rtx = shallow_copy_rtx (to_rtx);
> 
> -             set_mem_attributes_minus_bitpos (to_rtx, to, 0, bitpos);
> +             set_mem_attributes_minus_bitpos (to_rtx, to, 0, bitpos_adj);
> 
>               /* Deal with volatile and readonly fields.  The former is only
>                  done for MEM.  Also set MEM_KEEP_ALIAS_SET_P if needed.  */
> 

> that is, we always pass zero to set_mem_attributes_minus_bitpos unless
> the original offset was modified.
> 
> Hmm, but we really pass in a MEM that only covers tem + offset and not
> bitpos.  But MEM_EXPR does not reflect that - we pass in the original
> 'to', not something like *(&tem + offset).  Maybe in very distant times
> all the stripping of component refs from MEM_EXPR compensated for that?
> 
> What a mess ...

Forget all that - it seems to be correct with the fix from comment #11, apart
from the bit-align issue.
