On arm* and aarch64* targets, we can vectorise the second of the three main
loops using SLP, not just the third. As the new comment says, whether this
is supported depends on a very specific set of permutations, so it seemed
better to use direct target selectors.
Tested on aarch64-linux-gnu (with and without SVE), arm-linux-gnueabihf
and x86_64-linux-gnu. OK to install?
Richard
gcc/testsuite/
* gcc.dg/vect/slp-21.c: Expect 4 SLP instances to be vectorized
on arm* and aarch64* targets.
---
gcc/testsuite/gcc.dg/vect/slp-21.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/gcc/testsuite/gcc.dg/vect/slp-21.c b/gcc/testsuite/gcc.dg/vect/slp-21.c
index 1f8c82e8ba8..117d65c5ddb 100644
--- a/gcc/testsuite/gcc.dg/vect/slp-21.c
+++ b/gcc/testsuite/gcc.dg/vect/slp-21.c
@@ -201,6 +201,16 @@ int main (void)
/* { dg-final { scan-tree-dump-times "vectorized 4 loops" 1 "vect" { target { vect_strided4 || vect_extract_even_odd } } } } */
/* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" { target { ! { vect_strided4 || vect_extract_even_odd } } } } } */
-/* { dg-final { scan-tree-dump-times "vectorizing stmts using SLP" 2 "vect" { target vect_strided4 } } } */
+/* Some targets can vectorize the second of the three main loops using
+ hybrid SLP. For 128-bit vectors, the required 4->3 permutations are:
+
+ { 0, 1, 2, 4, 5, 6, 8, 9 }
+ { 2, 4, 5, 6, 8, 9, 10, 12 }
+ { 5, 6, 8, 9, 10, 12, 13, 14 }
+
+ Not all vect_perm targets support that, and it's a bit too specific to have
+ its own effective-target selector, so we just test targets directly. */
+/* { dg-final { scan-tree-dump-times "vectorizing stmts using SLP" 4 "vect" { target { aarch64*-*-* arm*-*-* } } } } */
+/* { dg-final { scan-tree-dump-times "vectorizing stmts using SLP" 2 "vect" { target { vect_strided4 && { ! { aarch64*-*-* arm*-*-* } } } } } } */
/* { dg-final { scan-tree-dump-times "vectorizing stmts using SLP" 0 "vect" { target { ! { vect_strided4 } } } } } */