https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91603
Bug ID: 91603
Summary: Unaligned access in expand_assignment
Product: gcc
Version: 10.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: middle-end
Assignee: unassigned at gcc dot gnu.org
Reporter: bernd.edlinger at hotmail dot de
Target Milestone: ---
Currently this test case ICEs due to the middle-end sanitization:
$ cat test.c
typedef __simd64_int32_t int32x2_t;
typedef __attribute__((aligned (1))) int32x2_t unalignedvec;

unalignedvec a = {11, 13};

void foo(unalignedvec *);

void test()
{
  unalignedvec x = a;
  foo (&x);
  a = x;
}
$ arm-linux-gnueabihf-gcc -O3 -S test.c
during RTL pass: expand
test.c: In function 'test':
test.c:10:16: internal compiler error: in gen_movv2si, at config/arm/vec-common.md:30
   10 |   unalignedvec x = a;
      |                ^
0x7bb33c gen_movv2si(rtx_def*, rtx_def*)
        ../../gcc-trunk/gcc/config/arm/vec-common.md:30
0xa4a807 insn_gen_fn::operator()(rtx_def*, rtx_def*) const
        ../../gcc-trunk/gcc/recog.h:318
0xa4a807 emit_move_insn_1(rtx_def*, rtx_def*)
        ../../gcc-trunk/gcc/expr.c:3694
0xa4ab94 emit_move_insn(rtx_def*, rtx_def*)
        ../../gcc-trunk/gcc/expr.c:3790
0xa522bf store_expr(tree_node*, rtx_def*, int, bool, bool)
        ../../gcc-trunk/gcc/expr.c:5855
0xa52bfd expand_assignment(tree_node*, tree_node*, bool)
        ../../gcc-trunk/gcc/expr.c:5441
0xa52bfd expand_assignment(tree_node*, tree_node*, bool)
        ../../gcc-trunk/gcc/expr.c:4982
0x934adf expand_gimple_stmt_1
        ../../gcc-trunk/gcc/cfgexpand.c:3777
0x934adf expand_gimple_stmt
        ../../gcc-trunk/gcc/cfgexpand.c:3875
0x93a451 expand_gimple_basic_block
        ../../gcc-trunk/gcc/cfgexpand.c:5915
0x93c1b6 execute
        ../../gcc-trunk/gcc/cfgexpand.c:6538
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.
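
For background: when applied to a typedef, the aligned attribute may also
decrease alignment, so the aligned (1) typedef lowers the alignment of the
V2SI vector type from 8 to 1, and both the load from a and the store back
to a are 1-byte-aligned vector accesses. A minimal sketch, separate from
the reported test case, that makes the alignment drop visible (assuming
the same arm-linux-gnueabihf-gcc configuration as above):

/* Sketch only: prints the alignment of the two typedefs.
   Expected output on arm-linux-gnueabihf: "8 1".  */
#include <stdio.h>

typedef __simd64_int32_t int32x2_t;
typedef __attribute__((aligned (1))) int32x2_t unalignedvec;

int main (void)
{
  printf ("%zu %zu\n", _Alignof (int32x2_t), _Alignof (unalignedvec));
  return 0;
}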
But it is actually a genuine wrong-code bug: with the assertions removed,
the following code is generated:
test:
        @ args = 0, pretend = 0, frame = 8
        @ frame_needed = 0, uses_anonymous_args = 0
        push    {r4, r6, r7, lr}
        movw    r4, #:lower16:.LANCHOR0
        movt    r4, #:upper16:.LANCHOR0
        sub     sp, sp, #8
        mov     r0, sp
        vldr    d16, [r4]        <= unaligned access (will trap)
        vstr    d16, [sp]
        bl      foo
        ldrd    r6, [sp]
        uxtb    ip, r6
        ubfx    r2, r6, #16, #8
        lsr     lr, r6, #24
        uxtb    r0, r7
        lsr     r1, r7, #24
        strb    ip, [r4]
        ubfx    r3, r7, #16, #8
        strb    r2, [r4, #2]
        ubfx    ip, r6, #8, #8
        ubfx    r2, r7, #8, #8
        strb    lr, [r4, #3]
        strb    ip, [r4, #1]
        strb    r0, [r4, #4]
        strb    r2, [r4, #5]
        strb    r3, [r4, #6]
        strb    r1, [r4, #7]
        add     sp, sp, #8
        @ sp needed
        pop     {r4, r6, r7, pc}
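
Note how the two sides of the copy differ: the store back to a is
correctly expanded to byte stores (strb), but the load is expanded to
vldr, which architecturally requires word alignment and will therefore
fault on the 1-byte-aligned .LANCHOR0 data. A hedged sketch of what the
load side would have to do instead, written as C source (the helper name
load_unaligned is mine, not from GCC):

#include <string.h>

typedef __simd64_int32_t int32x2_t;
typedef __attribute__((aligned (1))) int32x2_t unalignedvec;

/* Hypothetical helper: read a misaligned vector through a byte copy,
   which the compiler can expand to byte loads or to vld1.8 (legal at
   any alignment) instead of vldr.  */
static inline int32x2_t
load_unaligned (const unalignedvec *p)
{
  int32x2_t tmp;
  memcpy (&tmp, p, sizeof tmp);
  return tmp;
}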