Richard Sandiford <[email protected]> writes:
> [...]
> /* Perform a shift right by CRC_SIZE as an extraction of lane 1. */
> machine_mode crc_vmode = aarch64_vq_mode (crc_mode).require ();
> a0 = (crc_size > data_size ? gen_reg_rtx (crc_mode) : operands[0]);
> emit_insn (gen_aarch64_get_lane (crc_vmode, a0,
>                                  gen_lowpart (crc_vmode, clmul_res),
>                                  aarch64_endian_lane_rtx (crc_vmode, 1)));
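To make the lane-1 trick concrete: for a 32-bit CRC, crc_mode is SImode and
crc_vmode is V4SImode, so lane 1 of the 128-bit CLMUL result is bits 32..63,
which is exactly a right shift by CRC_SIZE.  A minimal scalar sketch of that
equivalence (illustrative only, the function name below is made up and the
big-endian lane-numbering handled by aarch64_endian_lane_rtx is ignored):

  #include <stdint.h>

  /* Scalar model of the vec_select of lane 1 for the 32-bit CRC case:
     lane 1 of the value viewed as V4SI is bits 32..63, i.e. the CLMUL
     result shifted right by CRC_SIZE.  Relies on GCC's unsigned __int128.  */
  static uint32_t
  extract_lane1_v4si (unsigned __int128 clmul_res)
  {
    return (uint32_t) (clmul_res >> 32);
  }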
Sorry, I forgot to say that I'd locally patched:
diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 816f499e963..af7beecb735 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -4301,7 +4301,7 @@ (define_insn "*aarch64_get_lane_zero_extend<GPI:mode><VDQQH:mode>"
 ;; RTL uses GCC vector extension indices throughout so flip only for assembly.
 ;; Extracting lane zero is split into a simple move when it is between SIMD
 ;; registers or a store.
-(define_insn_and_split "aarch64_get_lane<mode>"
+(define_insn_and_split "@aarch64_get_lane<mode>"
   [(set (match_operand:<VEL> 0 "aarch64_simd_nonimmediate_operand" "=?r, w, Utv")
        (vec_select:<VEL>
          (match_operand:VALL_F16 1 "register_operand" "w, w, w")
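For anyone unfamiliar with the '@' convention: prefixing the pattern name asks
genemit to also produce a generator that takes the mode iterator as a run-time
argument, which is what the snippet quoted above calls.  Roughly (illustrative
declarations only, the real prototypes are generated by genemit):

  /* Per-mode generators exist whether or not the name has the '@' prefix,
     e.g. for the V4SI instantiation of aarch64_get_lane<mode>:  */
  extern rtx gen_aarch64_get_lanev4si (rtx dest, rtx vec, rtx lane);

  /* The '@' prefix additionally produces a generator parameterized by the
     mode iterator, matching the gen_aarch64_get_lane (crc_vmode, ...) call
     in the quoted code (rtx and machine_mode are GCC-internal types):  */
  extern rtx gen_aarch64_get_lane (machine_mode, rtx dest, rtx vec, rtx lane);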
Richard