When determining overrun we have to treat VMAT_CONTIGUOUS_REVERSE the
same as VMAT_CONTIGUOUS, since the generated load still reads a
contiguous chunk of memory and only reverses the lanes afterwards.
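
For illustration, a minimal sketch of the kind of access involved (a
made-up example, not the testcase from PR115741): a negative-step
contiguous read is classified as VMAT_CONTIGUOUS_REVERSE, but the
vector load fetches a full vector's worth of contiguous elements
before reversing them, so it can touch memory past the accessed
elements exactly like a forward VMAT_CONTIGUOUS load.

    /* Hypothetical example, not the PR testcase: the negative-step
       read of 'in' is a contiguous reverse access.  The vectorized
       load reads nunits contiguous elements and then reverses the
       lanes, so possible overrun has to be accounted for just as
       for a forward contiguous access.  */
    void
    foo (double *restrict out, double *restrict in, int n)
    {
      for (int i = 0; i < n; ++i)
        out[i] = in[n - 1 - i];
    }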

Bootstrapped on x86_64-unknown-linux-gnu, testing in progress.

        PR tree-optimization/115741
        * tree-vect-stmts.cc (get_group_load_store_type): Also
        handle VMAT_CONTIGUOUS_REVERSE when determining overrun.
---
 gcc/tree-vect-stmts.cc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index f279aeef0cb..b12b6ada029 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -2099,7 +2099,8 @@ get_group_load_store_type (vec_info *vinfo, stmt_vec_info stmt_info,
             If there is a combination of the access not covering the full
             vector and a gap recorded then we may need to peel twice.  */
          if (loop_vinfo
-             && *memory_access_type == VMAT_CONTIGUOUS
+             && (*memory_access_type == VMAT_CONTIGUOUS
+                 || *memory_access_type == VMAT_CONTIGUOUS_REVERSE)
              && SLP_TREE_LOAD_PERMUTATION (slp_node).exists ()
              && !multiple_p (group_size * LOOP_VINFO_VECT_FACTOR (loop_vinfo),
                              nunits))
-- 
2.35.3
