On Thu, 16 Sep 2021, Jirui Wu wrote:

> Hi all,
>
> This patch lowers the vld1 and vst1 variants of the
> load and store Neon builtin functions to gimple.
>
> The changes in this patch cover:
> * Replaces calls to the vld1 and vst1 variants of the builtins
> * Uses MEM_REF gimple assignments to generate better code
> * Updates test cases to prevent over-optimization
>
> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
>
> Ok for master? If OK, can it be committed for me? I have no commit rights.
+      new_stmt = gimple_build_assign (gimple_call_lhs (stmt),
+                                      fold_build2 (MEM_REF,
+                                                   TREE_TYPE
+                                                   (gimple_call_lhs (stmt)),
+                                                   args[0], build_int_cst
+                                                   (TREE_TYPE (args[0]), 0)));

You are using TBAA info based on the formal argument type, which might
have had pointer conversions stripped. Instead you should use a type
based on the specification of the intrinsics (or the builtins).
Likewise for the type of the access itself (mind alignment info
there!). A rough sketch of what I mean is at the end of this mail.

Richard.

> Thanks,
> Jirui
>
> gcc/ChangeLog:
>
>         * config/aarch64/aarch64-builtins.c
>         (aarch64_general_gimple_fold_builtin):
>         Lower vld1 and vst1 variants of the Neon builtins.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.target/aarch64/fmla_intrinsic_1.c:
>         Prevent over-optimization.
>         * gcc.target/aarch64/fmls_intrinsic_1.c:
>         Prevent over-optimization.
>         * gcc.target/aarch64/fmul_intrinsic_1.c:
>         Prevent over-optimization.
>         * gcc.target/aarch64/mla_intrinsic_1.c:
>         Prevent over-optimization.
>         * gcc.target/aarch64/mls_intrinsic_1.c:
>         Prevent over-optimization.
>         * gcc.target/aarch64/mul_intrinsic_1.c:
>         Prevent over-optimization.
>         * gcc.target/aarch64/simd/vmul_elem_1.c:
>         Prevent over-optimization.
>         * gcc.target/aarch64/vclz.c:
>         Replace macro with function to prevent over-optimization.
>         * gcc.target/aarch64/vneg_s.c:
>         Replace macro with function to prevent over-optimization.

-- 
Richard Biener <rguent...@suse.de>
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imendörffer; HRB 36809 (AG Nuernberg)
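P.S. A minimal, untested sketch of the direction above, assuming
elt_type holds the element type taken from the intrinsic's
specification and simd_type the vector type of the access (both names
are illustrative, not from the patch):

      /* Build the pointer type from the intrinsic's specification
         rather than from the formal argument type, so the MEM_REF
         carries correct TBAA information.  */
      tree elt_ptr_type = build_pointer_type_for_mode (elt_type,
                                                       VOIDmode, true);
      tree zero = build_zero_cst (elt_ptr_type);
      /* vld1 only guarantees element alignment, not full vector
         alignment, so record that on the access type.  */
      tree access_type = build_aligned_type (simd_type,
                                             TYPE_ALIGN (elt_type));
      new_stmt = gimple_build_assign (gimple_call_lhs (stmt),
                                      fold_build2 (MEM_REF, access_type,
                                                   args[0], zero));

Passing true for can_alias_all when building the pointer type avoids
depending on whatever TBAA the stripped argument type would imply, and
build_aligned_type encodes that only element alignment is guaranteed
for the access.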