For this testcase we ended up generating the invalid rtl:
(insn 10 9 11 2 (set (reg:VNx16BI 105)
        (and:VNx16BI (xor:VNx16BI (reg:VNx8BI 103)
                (reg:VNx16BI 104))
            (reg:VNx16BI 104))) "/tmp/bar.c":9:12 -1
     (nil))
The inner XOR combines a VNx8BI operand with a VNx16BI operand and
result, so the modes don't match.  Fixed by taking the VNx16BI lowpart
of the recursively-generated constant.  It's safe to do that here
because the governing predicate (gp, r104) masks out the extra
odd-indexed bits.
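To make the masking argument concrete, here's a minimal standalone
sketch (illustration only, not GCC code; the bit patterns and variable
names are made up).  It models one 16-bit slice of an SVE predicate
register: a .H predicate only uses the even-indexed bits, and the
predicated EOR's trailing AND with the PTRUE mask discards whatever
garbage sits in the odd-indexed bits of the lowpart:

#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  uint16_t mask = 0x5555;	/* PTRUE.H: every even-indexed bit set.  */
  uint16_t inv = 0xabcd;	/* Lowpart view: odd bits are garbage.  */

  /* The predicated EOR computes (INV ^ MASK) & MASK, so the final AND
     with the governing predicate zeroes all odd-indexed bits.  */
  uint16_t result = (inv ^ mask) & mask;

  /* Pre-clearing the garbage bits gives the same answer, which is why
     taking the VNx16BI lowpart is safe here.  */
  uint16_t clean = inv & mask;
  uint16_t result2 = (clean ^ mask) & mask;

  printf ("%d\n", result == result2);	/* Prints 1.  */
  return 0;
}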
Tested on aarch64-linux-gnu and aarch64_be-elf, pushed.
Richard
2020-04-16  Richard Sandiford  <[email protected]>

gcc/
	PR target/94606
	* config/aarch64/aarch64.c (aarch64_expand_sve_const_pred_eor): Take
	the VNx16BI lowpart of the recursively-generated constant.

gcc/testsuite/
	PR target/94606
	* gcc.dg/vect/pr94606.c: New test.
---
gcc/config/aarch64/aarch64.c | 1 +
gcc/testsuite/gcc.dg/vect/pr94606.c | 13 +++++++++++++
2 files changed, 14 insertions(+)
create mode 100644 gcc/testsuite/gcc.dg/vect/pr94606.c
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 4af562a81ea..d0a41c286cd 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -4742,6 +4742,7 @@ aarch64_expand_sve_const_pred_eor (rtx target, rtx_vector_builder &builder,
   /* EOR the result with an ELT_SIZE PTRUE.  */
   rtx mask = aarch64_ptrue_all (elt_size);
   mask = force_reg (VNx16BImode, mask);
+  inv = gen_lowpart (VNx16BImode, inv);
   target = aarch64_target_reg (target, VNx16BImode);
   emit_insn (gen_aarch64_pred_z (XOR, VNx16BImode, target, mask, inv, mask));
   return target;
diff --git a/gcc/testsuite/gcc.dg/vect/pr94606.c b/gcc/testsuite/gcc.dg/vect/pr94606.c
new file mode 100644
index 00000000000..f0e7c4cd0e8
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/pr94606.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-march=armv8.2-a+sve -msve-vector-bits=256" { target aarch64*-*-* } } */
+
+const short mask[] = { 0, 0, 0, 0, 0, 0, 0, 0,
+		       0, 0, 0, 1, 1, 1, 1, 1 };
+
+int
+foo (short *restrict x, short *restrict y)
+{
+  for (int i = 0; i < 16; ++i)
+    if (mask[i])
+      x[i] += y[i];
+}