Bernd Edlinger writes:
 > Hi,
 > 
 > are there any more comments on this?
 > 
 > I would like to apply the patch as is, unless we find
 > a way to get to a test case, maybe with a cross-compiler,
 > where the MODE_ALIGNMENT is different from MODE_BITSIZE.
 > 
 > Currently, I think that does not happen.

On m68k-linux GET_MODE_ALIGNMENT (SImode) == 16 while
GET_MODE_BITSIZE (SImode) == 32.
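
Concretely (illustrative numbers only, assuming the usual m68k-linux ABI
where 4-byte integers are only 2-byte aligned):

  struct { char c; int i; } s;  /* __alignof__ (int) == 2, sizeof (s) == 6 */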

I don't know what that means for your patch, just wanted
to inform you that such targets do exist.

/Mikael

 > 
 > Thanks
 > Bernd.
 > 
 > > Date: Tue, 10 Mar 2015 14:40:52 +0100
 > >
 > > Hi Richard and Eric,
 > >
 > > On Mon, 9 Mar 2015 15:30:31, Richard Biener wrote:
 > >>> Reg-tested on x86_64 successfully and ARM is still running.
 > >
 > > ARM completed without regressions meanwhile.
 > >
 > >>>
 > >>> Is it OK for trunk?
 > >>
 > >> Looks ok to me apart from
 > >>
 > >>    /* Check for cases of unaligned fields that must be split.  */
 > >> -  if (bitnum % BITS_PER_UNIT + bitsize > modesize
 > >> -      || (STRICT_ALIGNMENT
 > >> -          && bitnum % GET_MODE_ALIGNMENT (fieldmode) + bitsize > modesize))
 > >> +  if (bitnum % (STRICT_ALIGNMENT ? modesize : BITS_PER_UNIT)
 > >> +      + bitsize > modesize
 > >> +      || (STRICT_ALIGNMENT && MEM_ALIGN (op0) < modesize))
 > >>      return false;
 > >>
 > >> where I'd use GET_MODE_ALIGNMENT (fieldmode) rather than modesize
 > >> (in both places).
 > >>
 > >> Please leave Eric the chance to comment.
 > >>
 > >
 > > Just to clarify a few things here:
 > >
 > > I am trying to make the checks in strict_volatile_bitfield_p
 > > consistent with the strict volatile bit-field code path
 > > that follows if we return true here.
 > >
 > > I would summarize the current implementation of the
 > > strict volatile bit-field code as follows:
 > >
 > > For strict alignment, we access the structure as
 > > if it were an array of fieldmode.  A multiple of modesize
 > > is added to the base address, and one single read and/or
 > > write access must be sufficient to do the job.  The access
 > > must be OK with respect to the target's alignment restrictions.
 > > That does not change; what changed with the previous patch
 > > is a missed optimization with the EP_insv code pattern.
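 > >
 > > A minimal sketch of that path (the struct layout, SImode container and
 > > the use of -fstrict-volatile-bitfields are assumed here for illustration,
 > > not taken from the patch):
 > >
 > >   struct regs
 > >   {
 > >     volatile unsigned int ctrl : 8;   /* packed into SImode word 0 */
 > >     volatile unsigned int data : 24;  /* same SImode word 0        */
 > >     volatile unsigned int stat : 32;  /* SImode word 1             */
 > >   };
 > >
 > >   /* On a STRICT_ALIGNMENT target with -fstrict-volatile-bitfields,
 > >      reading r->data adds 0 * modesize to the base address, does one
 > >      aligned SImode load of word 0 and extracts the field from it.  */
 > >   unsigned int
 > >   get_data (struct regs *r)
 > >   {
 > >     return r->data;
 > >   }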
 > >
 > > For non-strict alignment, a multiple of modesize is added
 > > to the base address, but if the range [bitnum:bitnum+bitsize-1]
 > > spans two fieldmode words, which should only happen if we
 > > use packed structures, a byte offset is added to the base address.
 > > The byte offset is chosen as small as possible, so as not to reach
 > > beyond the bit-field region.  That is new.  This change is irrelevant
 > > for the use case of accessing a device register, but the generated
 > > code is more compact.
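 > >
 > > A minimal sketch of that packed case (layout and a 32-bit SImode are
 > > assumed here for illustration, not taken from the patch):
 > >
 > >   struct __attribute__ ((packed)) s
 > >   {
 > >     unsigned char pad;             /* the field below starts at byte 1     */
 > >     volatile unsigned int f : 32;  /* bits [8:39], spans two SImode words  */
 > >   };
 > >
 > >   /* On a non-strict-alignment target, instead of splitting the access,
 > >      a byte offset of 1 is added to the base address so that a single
 > >      (unaligned) SImode access covers the whole field.  */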
 > >
 > > Usually we have GET_MODE_ALIGNMENT (fieldmode) == modesize
 > > for all modes where SCALAR_INT_MODE_P (fieldmode) holds.
 > >
 > > The only exceptions are complex numbers, and targets where
 > > ADJUST_ALIGNMENT is used in the modes.def, right?
 > >
 > > The targets that do that are few, and the modes are mostly vector modes.
 > > So I did not find any target where the MODE_ALIGNMENT would make
 > > a difference here.  Therefore I think it is more or less a matter of taste.
 > > But please correct me if I am wrong.
 > >
 > > If there are cases where MODE_ALIGNMENT < MODE_BITSIZE,
 > > changing the second condition from MEM_ALIGN (op0) < modesize to
 > > MEM_ALIGN (op0) < GET_MODE_ALIGNMENT (fieldmode) would
 > > still be consistent, and probably be more correct.
 > >
 > > But I think changing the first condition would allow cases where this
 > > assertion in the patch no longer holds:
 > > gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (fieldmode));
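 > >
 > > For illustration, with hypothetical numbers (modesize == 32,
 > > GET_MODE_ALIGNMENT (fieldmode) == 16, bitnum == 24, bitsize == 16,
 > > STRICT_ALIGNMENT), taking the modulus over the mode alignment instead
 > > of modesize:
 > >
 > >   bitnum % modesize + bitsize                       = 24 + 16 = 40 > 32, reject
 > >   bitnum % GET_MODE_ALIGNMENT (fieldmode) + bitsize =  8 + 16 = 24 <= 32, accept
 > >
 > > In the second reading the field still crosses a modesize boundary, so
 > > after the base address is adjusted only by a multiple of modesize the
 > > assertion above would trigger.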
 > >
 > >
 > >
 > > Thanks
 > > Bernd.
 > >
 >                                        