https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61320

--- Comment #14 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to Richard Biener from comment #12)
> (In reply to Eric Botcazou from comment #11)
> > > So I am testing the patch right now and should be able to send it 
> > > tomorrow.
> > > However, the code should already mark the load with the actual
> > > alignment the access is being done with. Therefore it seems to me that
> > > something in the backend fails to split the unaligned load into several
> > > aligned loads.
> > 
> > But what would be the point of this round trip exactly?
> 
> I'd say
> 
> Index: tree-ssa-math-opts.c
> ===================================================================
> --- tree-ssa-math-opts.c        (revision 211170)
> +++ tree-ssa-math-opts.c        (working copy)
> @@ -2149,7 +2149,8 @@ bswap_replace (gimple stmt, gimple_stmt_
>        unsigned align;
>  
>        align = get_object_alignment (src);
> -      if (bswap && SLOW_UNALIGNED_ACCESS (TYPE_MODE (load_type), align))
> +      if (align < GET_MODE_ALIGNMENT (TYPE_MODE (load_type))
> +         && SLOW_UNALIGNED_ACCESS (TYPE_MODE (load_type), align))
>         return false;
>  
>        /*  Compute address to load from and cast according to the size
> 
> is obvious (and pre-approved).

obvious as a workaround, that is.
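
For reference, here is a minimal standalone sketch (not GCC code; the macro
names and alignment values below are made up for illustration) of what the
tightened guard expresses: the rewrite is rejected only when the source is
actually under-aligned for the load mode *and* the target reports slow
unaligned accesses, instead of whenever a bswap would be emitted on such a
target.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for GET_MODE_ALIGNMENT and SLOW_UNALIGNED_ACCESS;
   the real macros come from the GCC mode/target machinery.  */
#define MODE_ALIGNMENT_BITS 32          /* natural alignment of the load mode  */
#define TARGET_SLOW_UNALIGNED_ACCESS 1  /* target penalizes unaligned accesses */

/* Old guard: bail out whenever a bswap is emitted and unaligned access is
   slow, even if the source object is in fact sufficiently aligned.  */
static bool reject_old (bool bswap, unsigned align_bits)
{
  (void) align_bits;
  return bswap && TARGET_SLOW_UNALIGNED_ACCESS;
}

/* New guard: bail out only when the access is under-aligned for the load
   mode and the target says unaligned access is slow.  */
static bool reject_new (unsigned align_bits)
{
  return align_bits < MODE_ALIGNMENT_BITS && TARGET_SLOW_UNALIGNED_ACCESS;
}

int main (void)
{
  /* A 32-bit-aligned source: the old check still rejects the rewrite, the
     new one allows it; a byte-aligned source is rejected by both.  */
  printf ("align 32: old=%d new=%d\n", reject_old (true, 32), reject_new (32));
  printf ("align  8: old=%d new=%d\n", reject_old (true, 8), reject_new (8));
  return 0;
}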
