On Fri, 14 Jun 2024, Richard Biener wrote:
> On Fri, 14 Jun 2024, Richard Sandiford wrote:
>
> > Richard Biener <[email protected]> writes:
> > > On Fri, 14 Jun 2024, Richard Sandiford wrote:
> > >
> > >> Richard Biener <[email protected]> writes:
> > >> > The following retires the vcond{,u,eq} optabs by no longer
> > >> > using them from the middle-end. Targets should instead implement
> > >> > the vcond_mask and vec_cmp{,u,eq} optabs. The PR this change
> > >> > refers to lists the possibly affected targets - those implementing
> > >> > these patterns - and in particular names mips, sparc and ia64 as
> > >> > targets that will most definitely regress, while others might
> > >> > simply remove their vcond{,u,eq} patterns.
> > >> >
> > >> > I'd appreciate testing; I do not expect fallout for x86 or
> > >> > arm/aarch64, and I know riscv doesn't implement any of the legacy
> > >> > optabs. But less well maintained vector targets might need
> > >> > adjustments.
> > >> >
> > >> > I want to get rid of those optabs for GCC 15. If I don't hear from
> > >> > you I will assume your target is fine.
> > >>
> > >> Great! Thanks for doing this.
> > >>
> > >> Is there a plan for how we should handle vector comparisons that
> > >> have to be done as the inverse of the negated condition? Should
> > >> targets simply not provide vec_cmp for such conditions and leave
> > >> the target-independent code to deal with the fallout? (For a
> > >> standalone comparison, it would invert the result. For a VEC_COND_EXPR
> > >> it would swap the true and false values.)
> > >
> > > I would expect the ISEL pass, which currently deals with finding
> > > valid combos of .VCMP{,U,EQ} and .VCOND_MASK, to handle this.
> > > So how do we deal with this right now? I expect RTL expansion will
> > > do the inverse trick, no?
> >
> > I think in practice (at least for the targets I've worked on),
> > the target's vec_cmp handles the inversion itself. Thus the
> > main optimisation done by targets' vcond patterns is to avoid
> > the inversion (and instead swap the true/false values) when the
> > "opposite" comparison is the native one.
>
> I see. I suppose whether or not vec_cmp handles a comparison is only
> known via a FAIL, so it's somewhat difficult to determine this at
> ISEL time.
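
To make the inversion vs. swap point concrete, here is a minimal
sketch using GCC's generic vector extension rather than the optab
interface itself (the helper names are made up), for a target whose
only native compare is the "opposite" <=:

  typedef int v4si __attribute__((vector_size (16)));

  /* vec_cmp-style: lanes are all-ones/all-zeros, so a > b is just
     the bitwise inverse of the native a <= b.  */
  v4si cmp_gt_via_inverted_le (v4si a, v4si b)
  {
    return ~(a <= b);
  }

  /* vcond-style: (a > b) ? t : f can be done as (a <= b) ? f : t,
     i.e. by swapping the arms instead of inverting the mask.  */
  v4si sel_gt_via_swapped_le (v4si a, v4si b, v4si t, v4si f)
  {
    v4si le = a <= b;
    return (f & le) | (t & ~le);
  }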
I'll also note that we document vec_cmp{,u,eq} as producing all-zeros /
all-ones results, while vcond_mask might only care about the MSB (it is
documented to work on the result of a pre-computed vector comparison).
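
Modelled in plain C with generic vectors (illustration only, made-up
helper names, not the optab interface), the difference is whether the
consumer may rely on full all-ones/all-zeros lanes or only on the sign
bit of each lane:

  typedef int v4si __attribute__((vector_size (16)));

  /* Correct only if every lane of MASK is all-ones or all-zeros,
     which is what vec_cmp{,u,eq} is documented to produce.  */
  v4si sel_full_mask (v4si mask, v4si t, v4si f)
  {
    return (t & mask) | (f & ~mask);
  }

  /* A select that only looks at the MSB of each lane can be modelled
     by first broadcasting the sign bit across the lane.  */
  v4si sel_msb_only (v4si mask, v4si t, v4si f)
  {
    v4si full = mask >> 31;   /* arithmetic shift replicates the MSB */
    return (t & full) | (f & ~full);
  }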
So this eventually asks targets to work out the optimal sequence via
combine helpers, and thus splitters, to fix up invalid compare
operators late?

Richard.