> -----Original Message-----
> From: Richard Biener <richard.guent...@gmail.com>
> Sent: Friday, May 10, 2024 2:07 PM
> To: Richard Biener <rguent...@suse.de>
> Cc: gcc-patches@gcc.gnu.org
> Subject: Re: [PATCH] Allow patterns in SLP reductions
> 
> On Fri, Mar 1, 2024 at 10:21 AM Richard Biener <rguent...@suse.de> wrote:
> >
> > The following removes the over-broad rejection of patterns for SLP
> > reductions which is done by removing them from LOOP_VINFO_REDUCTIONS
> > during pattern detection.  That's also insufficient in case the
> > pattern only appears on the reduction path.  Instead this implements
> > the proper correctness check in vectorizable_reduction and guides
> > SLP discovery to heuristically avoid forming later invalid groups.
> >
> > I also couldn't find any testcase that FAILs when allowing the SLP
> > reductions to form so I've added one.
> >
> > I came across this for single-lane SLP reductions with the all-SLP
> > work where we rely on patterns to properly vectorize COND_EXPR
> > reductions.
> >
> > Bootstrapped and tested on x86_64-unknown-linux-gnu, queued for stage1.
> 
> Re-bootstrapped/tested, r15-361-g52d4691294c847

Awesome!

Does this now allow us to write new reductions using patterns, e.g. widening
reductions?

Cheers,
Tamar
> 
> Richard.
> 
> > Richard.
> >
> >         * tree-vect-patterns.cc (vect_pattern_recog_1): Do not
> >         remove reductions involving patterns.
> >         * tree-vect-loop.cc (vectorizable_reduction): Reject SLP
> >         reduction groups with multiple lane-reducing reductions.
> >         * tree-vect-slp.cc (vect_analyze_slp_instance): When discovering
> >         SLP reduction groups avoid including lane-reducing ones.
> >
> >         * gcc.dg/vect/vect-reduc-sad-9.c: New testcase.
> > ---
> >  gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c | 68 ++++++++++++++++++++
> >  gcc/tree-vect-loop.cc                        | 15 +++++
> >  gcc/tree-vect-patterns.cc                    | 13 ----
> >  gcc/tree-vect-slp.cc                         | 26 +++++---
> >  4 files changed, 101 insertions(+), 21 deletions(-)
> >  create mode 100644 gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c
> >
> > diff --git a/gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c b/gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c
> > new file mode 100644
> > index 00000000000..3c6af4510f4
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c
> > @@ -0,0 +1,68 @@
> > +/* Disabling epilogues until we find a better way to deal with scans.  */
> > +/* { dg-additional-options "--param vect-epilogues-nomask=0" } */
> > +/* { dg-additional-options "-msse4.2" { target { x86_64-*-* i?86-*-* } } } */
> > +/* { dg-require-effective-target vect_usad_char } */
> > +
> > +#include <stdarg.h>
> > +#include "tree-vect.h"
> > +
> > +#define N 64
> > +
> > +unsigned char X[N] __attribute__ ((__aligned__(__BIGGEST_ALIGNMENT__)));
> > +unsigned char Y[N] __attribute__ ((__aligned__(__BIGGEST_ALIGNMENT__)));
> > +int abs (int);
> > +
> > +/* Sum of absolute differences between arrays of unsigned char types.
> > +   Detected as a sad pattern.
> > +   Vectorized on targets that support sad for unsigned chars.  */
> > +
> > +__attribute__ ((noinline)) int
> > +foo (int len, int *res2)
> > +{
> > +  int i;
> > +  int result = 0;
> > +  int result2 = 0;
> > +
> > +  for (i = 0; i < len; i++)
> > +    {
> > +      /* Make sure we are not using an SLP reduction for this.  */
> > +      result += abs (X[2*i] - Y[2*i]);
> > +      result2 += abs (X[2*i + 1] - Y[2*i + 1]);
> > +    }
> > +
> > +  *res2 = result2;
> > +  return result;
> > +}
> > +
> > +
> > +int
> > +main (void)
> > +{
> > +  int i;
> > +  int sad;
> > +
> > +  check_vect ();
> > +
> > +  for (i = 0; i < N/2; i++)
> > +    {
> > +      X[2*i] = i;
> > +      Y[2*i] = N/2 - i;
> > +      X[2*i+1] = i;
> > +      Y[2*i+1] = 0;
> > +      __asm__ volatile ("");
> > +    }
> > +
> > +
> > +  int sad2;
> > +  sad = foo (N/2, &sad2);
> > +  if (sad != (N/2)*(N/4))
> > +    abort ();
> > +  if (sad2 != (N/2-1)*(N/2)/2)
> > +    abort ();
> > +
> > +  return 0;
> > +}
> > +
> > +/* { dg-final { scan-tree-dump "vect_recog_sad_pattern: detected" "vect" } } */
> > +/* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
> > +
> > diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
> > index 35f1f8c7d42..13dcdba403a 100644
> > --- a/gcc/tree-vect-loop.cc
> > +++ b/gcc/tree-vect-loop.cc
> > @@ -7703,6 +7703,21 @@ vectorizable_reduction (loop_vec_info loop_vinfo,
> >        return false;
> >      }
> >
> > +  /* Lane-reducing ops can also never be used in a SLP reduction group
> > +     since we'll mix lanes belonging to different reductions.  But it's
> > +     OK to use them in a reduction chain or when the reduction group
> > +     has just one element.  */
> > +  if (lane_reduc_code_p
> > +      && slp_node
> > +      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info)
> > +      && SLP_TREE_LANES (slp_node) > 1)
> > +    {
> > +      if (dump_enabled_p ())
> > +       dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> > +                        "lane-reducing reduction in reduction group.\n");
> > +      return false;
> > +    }
> > +
> >    /* All uses but the last are expected to be defined in the loop.
> >       The last use is the reduction variable.  In case of nested cycle this
> >       assumption is not true: we use reduc_index to record the index of the
> > diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
> > index d562f57920f..fe1ffba8688 100644
> > --- a/gcc/tree-vect-patterns.cc
> > +++ b/gcc/tree-vect-patterns.cc
> > @@ -7172,7 +7172,6 @@ vect_pattern_recog_1 (vec_info *vinfo,
> >                       vect_recog_func *recog_func, stmt_vec_info stmt_info)
> >  {
> >    gimple *pattern_stmt;
> > -  loop_vec_info loop_vinfo;
> >    tree pattern_vectype;
> >
> >    /* If this statement has already been replaced with pattern statements,
> > @@ -7198,8 +7197,6 @@ vect_pattern_recog_1 (vec_info *vinfo,
> >        return;
> >      }
> >
> > -  loop_vinfo = dyn_cast <loop_vec_info> (vinfo);
> > -
> >    /* Found a vectorizable pattern.  */
> >    if (dump_enabled_p ())
> >      dump_printf_loc (MSG_NOTE, vect_location,
> > @@ -7208,16 +7205,6 @@ vect_pattern_recog_1 (vec_info *vinfo,
> >
> >    /* Mark the stmts that are involved in the pattern. */
> >    vect_mark_pattern_stmts (vinfo, stmt_info, pattern_stmt, pattern_vectype);
> > -
> > -  /* Patterns cannot be vectorized using SLP, because they change the order of
> > -     computation.  */
> > -  if (loop_vinfo)
> > -    {
> > -      unsigned ix, ix2;
> > -      stmt_vec_info *elem_ptr;
> > -      VEC_ORDERED_REMOVE_IF (LOOP_VINFO_REDUCTIONS (loop_vinfo), ix, ix2,
> > -                            elem_ptr, *elem_ptr == stmt_info);
> > -    }
> >  }
> >
> >
> > diff --git a/gcc/tree-vect-slp.cc b/gcc/tree-vect-slp.cc
> > index dabd8407aaf..d9961945c1c 100644
> > --- a/gcc/tree-vect-slp.cc
> > +++ b/gcc/tree-vect-slp.cc
> > @@ -3597,14 +3597,24 @@ vect_analyze_slp_instance (vec_info *vinfo,
> >         = as_a <loop_vec_info> (vinfo)->reductions;
> >        scalar_stmts.create (reductions.length ());
> >        for (i = 0; reductions.iterate (i, &next_info); i++)
> > -       if ((STMT_VINFO_RELEVANT_P (next_info)
> > -            || STMT_VINFO_LIVE_P (next_info))
> > -           /* ???  Make sure we didn't skip a conversion around a reduction
> > -              path.  In that case we'd have to reverse engineer that conversion
> > -              stmt following the chain using reduc_idx and from the PHI
> > -              using reduc_def.  */
> > -           && STMT_VINFO_DEF_TYPE (next_info) == vect_reduction_def)
> > -         scalar_stmts.quick_push (next_info);
> > +       {
> > +         gassign *g;
> > +         next_info = vect_stmt_to_vectorize (next_info);
> > +         if ((STMT_VINFO_RELEVANT_P (next_info)
> > +              || STMT_VINFO_LIVE_P (next_info))
> > +             /* ???  Make sure we didn't skip a conversion around a reduction
> > +                path.  In that case we'd have to reverse engineer that
> > +                conversion stmt following the chain using reduc_idx and from
> > +                the PHI using reduc_def.  */
> > +             && STMT_VINFO_DEF_TYPE (next_info) == vect_reduction_def
> > +             /* Do not discover SLP reductions for lane-reducing ops, that
> > +                will fail later.  */
> > +             && (!(g = dyn_cast <gassign *> (STMT_VINFO_STMT (next_info)))
> > +                 || (gimple_assign_rhs_code (g) != DOT_PROD_EXPR
> > +                     && gimple_assign_rhs_code (g) != WIDEN_SUM_EXPR
> > +                     && gimple_assign_rhs_code (g) != SAD_EXPR)))
> > +           scalar_stmts.quick_push (next_info);
> > +       }
> >        /* If less than two were relevant/live there's nothing to SLP.  */
> >        if (scalar_stmts.length () < 2)
> >         return false;
> > --
> > 2.35.3
