Hi!
> -----Original Message-----
> From: Yangfei (Felix)
> Sent: Monday, March 30, 2020 5:28 PM
> To: [email protected]
> Cc: '[email protected]' <[email protected]>
> Subject: [PATCH] ICE: in vectorizable_load, at tree-vect-stmts.c:9173
>
> Hi,
>
> New bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94398
>
> With -mstrict-align, aarch64_builtin_support_vector_misalignment returns
> false when the misalignment factor is unknown at compile time.
> Then vect_supportable_dr_alignment returns dr_unaligned_unsupported,
> which triggers the ICE. I have pasted the call trace in the bug report.
>
> vect_supportable_dr_alignment is expected to return either dr_aligned or
> dr_unaligned_supported for masked operations.
> But it seems that this function only handles the internal functions
> IFN_MASK_LOAD and IFN_MASK_STORE.
> We are emitting a mask gather load for this test case.
> As backends have their own vector misalignment support policies, I suppose
> this would be better handled in the shared auto-vectorization code.
>
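For context on the hook mentioned above: on aarch64 it rejects unaligned vector
accesses under -mstrict-align whenever the misalignment factor is unknown.  From
memory it reads roughly like this (paraphrased from gcc/config/aarch64/aarch64.c,
so it may not match the exact revision):

static bool
aarch64_builtin_support_vector_misalignment (machine_mode mode,
					     const_tree type, int misalignment,
					     bool is_packed)
{
  if (TARGET_SIMD && STRICT_ALIGNMENT)
    {
      /* Return if movmisalign pattern is not supported for this mode.  */
      if (!optab_handler (movmisalign_optab, mode))
	return false;

      /* Misalignment factor is unknown at compile time.  */
      if (misalignment == -1)
	return false;
    }
  return default_builtin_support_vector_misalignment (mode, type,
						       misalignment, is_packed);
}
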
I can only reply to the comment on the bug here, as my account is currently
locked by the bugzilla system.
The approach Richard (rsandifo) suggests also works for me, and I agree it
should be more consistent and better for compile time.
One question here: when would an IFN_MASK_LOAD/IFN_MASK_STORE be passed to
vect_supportable_dr_alignment?
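As far as I can see, the only masked accesses that function special-cases today
are the plain masked loads/stores; quoting roughly from
vect_supportable_dr_alignment in tree-vect-data-refs.c (from memory, so the
exact wording may differ):

  /* For now assume all conditional loads/stores support unaligned
     access without any special code.  */
  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
    if (gimple_call_internal_p (stmt)
	&& (gimple_call_internal_fn (stmt) == IFN_MASK_LOAD
	    || gimple_call_internal_fn (stmt) == IFN_MASK_STORE))
      return dr_unaligned_supported;

A gather such as IFN_MASK_GATHER_LOAD falls through to the normal alignment
checks, which is how we end up with dr_unaligned_unsupported here.
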
New patch:
diff --git a/gcc/tree-vect-stmts.c b/gcc/tree-vect-stmts.c
index 12beef6..2825023 100644
--- a/gcc/tree-vect-stmts.c
+++ b/gcc/tree-vect-stmts.c
@@ -8051,8 +8051,15 @@ vectorizable_store (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
auto_vec<tree> dr_chain (group_size);
oprnds.create (group_size);
- alignment_support_scheme
- = vect_supportable_dr_alignment (first_dr_info, false);
+ /* Strided accesses perform only component accesses, alignment
+ is irrelevant for them. */
+ if (STMT_VINFO_STRIDED_P (first_dr_info->stmt)
+ && !STMT_VINFO_GROUPED_ACCESS (first_dr_info->stmt))
+ alignment_support_scheme = dr_unaligned_supported;
+ else
+ alignment_support_scheme
+ = vect_supportable_dr_alignment (first_dr_info, false);
+
gcc_assert (alignment_support_scheme);
vec_loop_masks *loop_masks
= (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -9168,8 +9175,15 @@ vectorizable_load (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
ref_type = reference_alias_ptr_type (DR_REF (first_dr_info->dr));
}
- alignment_support_scheme
- = vect_supportable_dr_alignment (first_dr_info, false);
+ /* Strided accesses perform only component accesses, alignment
+ is irrelevant for them. */
+ if (STMT_VINFO_STRIDED_P (first_dr_info->stmt)
+ && !STMT_VINFO_GROUPED_ACCESS (first_dr_info->stmt))
+ alignment_support_scheme = dr_unaligned_supported;
+ else
+ alignment_support_scheme
+ = vect_supportable_dr_alignment (first_dr_info, false);
+
gcc_assert (alignment_support_scheme);
vec_loop_masks *loop_masks
= (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
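
For reference, a loop of roughly the following shape exercises this path (a
sketch only, not the reduced testcase attached to the PR).  Compiled with
something like -O3 -march=armv8.2-a+sve -mstrict-align, the conditional indexed
load is vectorized as a mask gather load:

double
f (double *restrict x, int *restrict idx, int *restrict cond, int n)
{
  double sum = 0.0;
  for (int i = 0; i < n; ++i)
    if (cond[i] > 0)
      sum += x[idx[i]];
  return sum;
}

Without the change a loop like this runs into the ICE in vectorizable_load;
with it, the strided access is treated as dr_unaligned_supported, since such
accesses only ever perform component accesses.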