On Thu, 31 Jul 2025, Tamar Christina wrote:

> > -----Original Message-----
> > From: Richard Biener <rguent...@suse.de>
> > Sent: Thursday, July 31, 2025 12:27 PM
> > To: gcc-patches@gcc.gnu.org
> > Cc: Tamar Christina <tamar.christ...@arm.com>
> > Subject: [PATCH] Add checks for node in aarch64 vector cost modeling
> > 
> > After removing STMT_VINFO_MEMORY_ACCESS_TYPE we now ICE when costing
> > for scalar stmts required in the epilog since the cost model tries
> > to pattern-match gathers (an earlier patch tried to improve this
> > by introducing stmt groups, but that was on hold due to negative
> > feedback).  The following short-cuts those attempts when node is NULL,
> > as that then cannot be a vector stmt.  Another possibility would be
> > to gate on vect_body, or restructure everything.
> > 
> > Note we now ensure that node is NULL when m_costing_for_scalar is set.
> > 
> > Tested with check-gcc vect.exp with a cross.  OK?
> 
> OK.

Pushed and sorry for the breakage...

Richard.
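
P.S. For anyone skimming the hunks quoted below: the fix amounts to not
dereferencing the SLP node when there is none, which is the case when we
cost the scalar variant of a statement.  A minimal standalone analogy of
that guard (all names here are made up for illustration, not GCC
internals):

  #include <stddef.h>
  #include <stdio.h>

  /* Hypothetical stand-ins; the real types live in the vectorizer.  */
  enum access_kind { ACCESS_CONTIGUOUS, ACCESS_GATHER_SCATTER };
  struct slp_node { enum access_kind access; };

  /* NODE may be NULL when costing the scalar version of a statement,
     so guard the dereference like the hunks below do.  */
  static int
  scalar_load_elt_cost (const struct slp_node *node, int base, int gather_elt)
  {
    if (node
        && node->access == ACCESS_GATHER_SCATTER)
      return gather_elt;   /* cost of one element of a gather */
    return base;           /* plain scalar load */
  }

  int
  main (void)
  {
    struct slp_node gather = { ACCESS_GATHER_SCATTER };
    printf ("%d %d\n",
            scalar_load_elt_cost (&gather, 1, 4),   /* vector context: 4 */
            scalar_load_elt_cost (NULL, 1, 4));     /* scalar costing: 1 */
    return 0;
  }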

> Thanks,
> Tamar
> 
> > 
> > Thanks,
> > Richard.
> > 
> >     * config/aarch64/aarch64.cc (aarch64_detect_vector_stmt_subtype):
> >     Check for node before dereferencing.
> >     (aarch64_vector_costs::add_stmt_cost): Likewise.
> > ---
> >  gcc/config/aarch64/aarch64.cc | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> > index a761addc06c..ed37824b6a2 100644
> > --- a/gcc/config/aarch64/aarch64.cc
> > +++ b/gcc/config/aarch64/aarch64.cc
> > @@ -17465,6 +17465,7 @@ aarch64_detect_vector_stmt_subtype (vec_info *vinfo, vect_cost_for_stmt kind,
> >       for each element.  We therefore need to divide the full-instruction
> >       cost by the number of elements in the vector.  */
> >    if (kind == scalar_load
> > +      && node
> >        && sve_costs
> >        && SLP_TREE_MEMORY_ACCESS_TYPE (node) == VMAT_GATHER_SCATTER)
> >      {
> > @@ -17478,6 +17479,7 @@ aarch64_detect_vector_stmt_subtype (vec_info *vinfo, vect_cost_for_stmt kind,
> >    /* Detect cases in which a scalar_store is really storing one element
> >       in a scatter operation.  */
> >    if (kind == scalar_store
> > +      && node
> >        && sve_costs
> >        && SLP_TREE_MEMORY_ACCESS_TYPE (node) == VMAT_GATHER_SCATTER)
> >      return sve_costs->scatter_store_elt_cost;
> > @@ -18005,7 +18007,7 @@ aarch64_vector_costs::add_stmt_cost (int count, vect_cost_for_stmt kind,
> > 
> >        /* Check if we've seen an SVE gather/scatter operation and which size.  */
> >        if (kind == scalar_load
> > -     && !m_costing_for_scalar
> > +     && node
> >       && vectype
> >       && aarch64_sve_mode_p (TYPE_MODE (vectype))
> >       && SLP_TREE_MEMORY_ACCESS_TYPE (node) == VMAT_GATHER_SCATTER)
> > --
> > 2.43.0
> 

-- 
Richard Biener <rguent...@suse.de>
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)