While vectorizable_store was already checking the alignment requirements
of the stores and falling back to elementwise accesses if they were not
honored, the vectorizable_load path wasn't doing this.  After the previous
change to disregard alignment checking for VMAT_STRIDED_SLP in
get_group_load_store_type this now tripped on Power.
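
For reference, a standalone, hypothetical sketch (not GCC code; the enum,
struct and function names below are invented for illustration) of the general
pattern the hunk below applies: use a single full-vector load only when the
target supports the access at the data-ref's misalignment, otherwise fall
back to elementwise accesses, as vectorizable_store already does.

```c++
#include <cstdio>

// Hypothetical stand-ins for the target's answer about a vector access.
enum class align_support { aligned, unaligned_supported, unsupported };

struct load_plan
{
  int nloads;  // number of loads emitted for the group
  int lnel;    // elements covered by each load
};

// Decide how to load a group of 'nunits' elements given the alignment
// support reported for a full-vector access.
static load_plan
choose_group_load (int nunits, align_support dr_align)
{
  if (dr_align == align_support::aligned
      || dr_align == align_support::unaligned_supported)
    // One full-vector load covers the whole group.
    return { 1, nunits };
  // Otherwise fall back to elementwise accesses.
  return { nunits, 1 };
}

int
main ()
{
  load_plan p = choose_group_load (4, align_support::unsupported);
  std::printf ("nloads=%d lnel=%d\n", p.nloads, p.lnel);
}
```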

Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.

        PR tree-optimization/117720
        * tree-vect-stmts.cc (vectorizable_load): For VMAT_STRIDED_SLP
        verify the chosen load type is OK with regard to alignment.
---
 gcc/tree-vect-stmts.cc | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 8700d1787b4..271c6da2a25 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -10650,9 +10650,19 @@ vectorizable_load (vec_info *vinfo,
             of it.  */
          if (n == const_nunits)
            {
-             nloads = 1;
-             lnel = const_nunits;
-             ltype = vectype;
+             int mis_align = dr_misalignment (first_dr_info, vectype);
+             dr_alignment_support dr_align
+               = vect_supportable_dr_alignment (vinfo, dr_info, vectype,
+                                                mis_align);
+             if (dr_align == dr_aligned
+                 || dr_align == dr_unaligned_supported)
+               {
+                 nloads = 1;
+                 lnel = const_nunits;
+                 ltype = vectype;
+                 alignment_support_scheme = dr_align;
+                 misalignment = mis_align;
+               }
            }
          /* Else use the biggest vector we can load the group without
             accessing excess elements.  */
-- 
2.43.0
