https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67701

--- Comment #8 from rguenther at suse dot de <rguenther at suse dot de> ---
On Thu, 24 Sep 2015, ebotcazou at gcc dot gnu.org wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67701
> 
> --- Comment #7 from Eric Botcazou <ebotcazou at gcc dot gnu.org> ---
> > Yes, AFAIK this was some obscure situation with SSE on x86.  IIRC
> > code doing unaligned scalar accesses (which is ok on x86) but then
> > vectorized using peeling for alignment (which cannot succeed if the
> > element is not naturally aligned) and segfaulting for the emitted
> > aligned move instructions.
> 
> I see, thanks for the insight.
> 
> > Maybe these days the legacy has been cleaned up enough so we can
> > remove that conservative handling again...  I think it also causes
> > us to handle
> > 
> > char c[4];
> > 
> > int main()
> > {
> >   if (!((unsigned long)c & 3))
> >     return *(int *)c;
> >   return c[0];
> > }
> > 
> > too conservatively as we expand
> > 
> >   _5 = MEM[(int *)&c];
> > 
> > and thus lose the flow-sensitive info.
> 
> The problem is that, in order to fix a legitimate issue on x86, the change
> pessimizes the code for strict-alignment platforms, where the said issue
> doesn't exist since there are no unaligned accesses in the source code.  And
> of course it pessimizes only them, since x86 has unaligned loads/stores.  So,
> in the end, this is a net loss for strict-alignment platforms.

Agreed.  Looking at how to fix this in get_object_alignment_2, I wonder
whether it makes sense to unify that function with get_inner_reference.
The other choice would be to add a flag telling get_inner_reference to
stop at MEM_REFs/TARGET_MEM_REFs.
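
To make the second option concrete, a rough sketch of the flag idea
(hypothetical code, not a patch; the signature is heavily simplified,
the real get_inner_reference also returns bit size/position, offset,
mode, signedness and volatility through out-parameters):

/* Hypothetical sketch of the proposed flag; not actual GCC code.
   With STOP_AT_MEM_REF set, the walk keeps the MEM_REF base instead
   of folding MEM[&decl, off] back into the decl, so a caller like
   get_object_alignment_2 can still use the flow-sensitive alignment
   known for the base pointer.  */
static tree
get_inner_reference_sketch (tree exp, bool stop_at_mem_ref)
{
  while (handled_component_p (exp))
    /* ... accumulate bit offsets from COMPONENT_REF, ARRAY_REF, etc.
       as the real get_inner_reference does ... */
    exp = TREE_OPERAND (exp, 0);

  if (!stop_at_mem_ref
      && TREE_CODE (exp) == MEM_REF
      && TREE_CODE (TREE_OPERAND (exp, 0)) == ADDR_EXPR)
    /* Current behavior: look through MEM[&decl, off] to the decl,
       losing the base pointer and its known alignment.  */
    return TREE_OPERAND (TREE_OPERAND (exp, 0), 0);

  return exp;  /* with the flag set we stop at the MEM_REF */
}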

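As an aside, a standalone sketch of the historical x86 scenario quoted
above (made up for illustration, not the original testcase):

/* The int accesses through P are misaligned but work as scalar code
   on x86.  If the vectorizer peels scalar iterations to reach a
   vector-aligned address and then emits aligned vector moves, no
   amount of peeling can make P naturally aligned for int, so the
   aligned moves fault.  */
int
sum (char *buf, int n)
{
  int *p = (int *) (buf + 1);  /* misaligned for int on purpose */
  int s = 0;
  for (int i = 0; i < n; i++)
    s += p[i];                 /* unaligned scalar loads, OK on x86 */
  return s;
}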