https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67630
--- Comment #4 from Uroš Bizjak <ubizjak at gmail dot com> ---
(In reply to H.J. Lu from comment #2)
> Created attachment 36349 [details]
> A patch
@@ -867,10 +867,12 @@
     case MODE_V16SF:
     case MODE_V8SF:
     case MODE_V4SF:
-      if (TARGET_AVX
-          && (misaligned_operand (operands[0], <MODE>mode)
-              || misaligned_operand (operands[1], <MODE>mode)))
-        return "vmovups\t{%1, %0|%0, %1}";
+      /* We must handle SSE since ix86_emit_save_reg_using_mov
+         generates the normal *mov<mode>_internal pattern for
+         interrupt handlers.  */
+      if (misaligned_operand (operands[0], <MODE>mode)
+          || misaligned_operand (operands[1], <MODE>mode))
+        return "%vmovups\t{%1, %0|%0, %1}";
       else
         return "%vmovaps\t{%1, %0|%0, %1}";
You should use
  if ((TARGET_AVX || cfun->machine->is_interrupt)
      && (misaligned_operand (operands[0], <MODE>mode)
          || misaligned_operand (operands[1], <MODE>mode)))
    return "%vmovups\t{%1, %0|%0, %1}";
Your patch gives legacy SSE targets the ability to load/store unaligned
operands outside of interrupt handlers for no reason. Legacy SSE is different
from AVX: the latter also allows unaligned 16-byte memory operands in
arithmetic/logic VEX-prefixed insns. This is the reason we have to relax
alignment requirements on V4SFmode loads/stores for AVX targets.
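
To make the difference concrete, here is a minimal sketch (an assumed
example, not from this report) of how the two encodings treat unaligned
memory:

#include <immintrin.h>

/* Assumed example.  With -mavx, GCC can fold the unaligned load into
   a VEX-encoded arithmetic insn, e.g. "vaddps (%rdi), %xmm0, %xmm0",
   since VEX memory operands need not be aligned.  With legacy SSE the
   load must stay a separate movups, because a memory operand of a
   non-VEX addps faults unless it is 16-byte aligned.  */
__m128
add_unaligned (const float *p, __m128 x)
{
  return _mm_add_ps (x, _mm_loadu_ps (p));
}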
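
And for reference, a sketch (an assumed reproducer following the GCC x86
interrupt attribute documentation, not attached to this report) of the
interrupt-handler case the patch comment describes, where the XMM save
slots emitted by ix86_emit_save_reg_using_mov may be misaligned even
without AVX:

/* Assumed reproducer; compile with something like:
     gcc -O2 -msse2 -mno-avx -c handler.c
   The incoming interrupt stack is not guaranteed to be 16-byte
   aligned, so spilling XMM registers in the prologue goes through the
   normal *mov<mode>_internal pattern with a possibly misaligned
   memory operand.  */
struct interrupt_frame;

extern volatile float counter;

__attribute__ ((interrupt))
void
handler (struct interrupt_frame *frame)
{
  counter += 1.0f;	/* float code clobbers an XMM register */
}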