Replying to an old thread; for full context see
http://thread.gmane.org/gmane.linux.ports.arm.kernel/180226/focus=197914

On Tue, Nov 06, 2012 at 11:02:27AM +1300, Michael Hope wrote:
> On 6 November 2012 02:48, Rob Herring <robherri...@gmail.com> wrote:
> >
> > I tried adding -munaligned-accesses on a v6 build and still get byte
> > accesses rather than unaligned word accesses. So this does seem to be a
> > v7 only issue based on what gcc will currently produce. Copying Michael
> > Hope who can hopefully provide some insight on why v6 unaligned accesses
> > are not enabled.
> 
> This looks like a bug.  Unaligned access is enabled for armv6 but
> seems to only take effect for cores with Thumb-2.  Here's a test case
> both with unaligned field access and unaligned block copy:
> 
> struct foo
> {
>   char a;
>   int b;
>   struct
>   {
>     int x[3];
>   } c;
> } __attribute__((packed));
> 
> int bar(struct foo *p)
> {
>   return p->b;
> }
> 
> void baz(struct foo *p, struct foo *q)
> {
>   p->c = q->c;
> }
> 
> With -march=armv7-a you get the correct code:
> 
> bar:
>       ldr     r0, [r0, #1]    @ unaligned     @ 11    unaligned_loadsi/2      [length = 4]
>       bx      lr      @ 21    *arm_return     [length = 12]
> 
> baz:
>       str     r4, [sp, #-4]!  @ 25    *push_multi     [length = 4]
>       mov     r2, r0  @ 2     *arm_movsi_vfp/1        [length = 4]
>       ldr     r4, [r1, #5]!   @ unaligned     @ 9     unaligned_loadsi/2      [length = 4]
>       ldr     ip, [r1, #4]    @ unaligned     @ 10    unaligned_loadsi/2      [length = 4]
>       ldr     r1, [r1, #8]    @ unaligned     @ 11    unaligned_loadsi/2      [length = 4]
>       str     r4, [r2, #5]    @ unaligned     @ 12    unaligned_storesi/2     [length = 4]
>       str     ip, [r2, #9]    @ unaligned     @ 13    unaligned_storesi/2     [length = 4]
>       str     r1, [r2, #13]   @ unaligned     @ 14    unaligned_storesi/2     [length = 4]
>       ldmfd   sp!, {r4}
>       bx      lr
> 
> With -march=armv6 you get a byte-by-byte field access and a correct
> unaligned block copy:
> 
> bar:
>       ldrb    r1, [r0, #2]    @ zero_extendqisi2
>       ldrb    r3, [r0, #1]    @ zero_extendqisi2
>       ldrb    r2, [r0, #3]    @ zero_extendqisi2
>       ldrb    r0, [r0, #4]    @ zero_extendqisi2
>       orr     r3, r3, r1, asl #8
>       orr     r3, r3, r2, asl #16
>       orr     r0, r3, r0, asl #24
>       bx      lr
> 
> baz:
>       str     r4, [sp, #-4]!
>       mov     r2, r0
>       ldr     r4, [r1, #5]!   @ unaligned
>       ldr     ip, [r1, #4]    @ unaligned
>       ldr     r1, [r1, #8]    @ unaligned
>       str     r4, [r2, #5]    @ unaligned
>       str     ip, [r2, #9]    @ unaligned
>       str     r1, [r2, #13]   @ unaligned
>       ldmfd   sp!, {r4}
>       bx      lr
> 
> readelf -A shows that the compiler planned to use unaligned access in
> both cases.  My suspicion is that GCC is using the extv pattern to
> extract the field from memory, and that pattern is only enabled for
> Thumb-2 capable cores.
> 
> I've logged PR55218.  We'll discuss it at our next meeting.

Just tried with gcc-linaro-4.7-2013.01 (gcc-4.7.3 20130102 (prerelease));
the issue is still unfixed.  Do you have any idea how to fix it?
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55218

Thanks,
Johannes

_______________________________________________
linaro-toolchain mailing list
linaro-toolchain@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-toolchain
