On Wed, 19 Jul 2023, Jakub Jelinek wrote:

> On Tue, Jul 18, 2023 at 01:25:45PM +0200, Richard Biener wrote:
> > 
> >     PR middle-end/61747
> >     * internal-fn.cc (expand_vec_cond_optab_fn): When the
> >     value operands are equal to the original comparison operands
> >     preserve that equality by re-using the comparison expansion.
> >     * optabs.cc (emit_conditional_move): When the value operands
> >     are equal to the comparison operands and would be forced to
> >     a register by prepare_cmp_insn do so earlier, preserving the
> >     equality.
> > 
> >     * g++.target/i386/pr61747.C: New testcase.
> > ---
> >  gcc/internal-fn.cc                      | 17 ++++++++--
> >  gcc/optabs.cc                           | 32 ++++++++++++++++++-
> >  gcc/testsuite/g++.target/i386/pr61747.C | 42 +++++++++++++++++++++++++
> >  3 files changed, 88 insertions(+), 3 deletions(-)
> >  create mode 100644 gcc/testsuite/g++.target/i386/pr61747.C
> > 
> > diff --git a/gcc/internal-fn.cc b/gcc/internal-fn.cc
> > index e698f0bffc7..c83c3921792 100644
> > --- a/gcc/internal-fn.cc
> > +++ b/gcc/internal-fn.cc
> > @@ -3019,8 +3019,21 @@ expand_vec_cond_optab_fn (internal_fn, gcall *stmt, convert_optab optab)
> >    icode = convert_optab_handler (optab, mode, cmp_op_mode);
> >    rtx comparison
> >     = vector_compare_rtx (VOIDmode, tcode, op0a, op0b, unsignedp, icode, 4);
> > -  rtx rtx_op1 = expand_normal (op1);
> > -  rtx rtx_op2 = expand_normal (op2);
> > +  /* vector_compare_rtx legitimizes operands, preserve equality when
> > +     expanding op1/op2.  */
> > +  rtx rtx_op1, rtx_op2;
> > +  if (operand_equal_p (op1, op0a))
> > +    rtx_op1 = XEXP (comparison, 0);
> > +  else if (operand_equal_p (op1, op0b))
> > +    rtx_op1 = XEXP (comparison, 1);
> > +  else
> > +    rtx_op1 = expand_normal (op1);
> > +  if (operand_equal_p (op2, op0a))
> > +    rtx_op2 = XEXP (comparison, 0);
> > +  else if (operand_equal_p (op2, op0b))
> > +    rtx_op2 = XEXP (comparison, 1);
> > +  else
> > +    rtx_op2 = expand_normal (op2);
> >  
> >    rtx target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
> >    create_output_operand (&ops[0], target, mode);
> 
> The above LGTM, it relies on vector_compare_rtx not swapping the arguments
> or performing some other comparison canonicalization, but at least right now
> that function doesn't seem to do that.
> 
> > --- a/gcc/optabs.cc
> > +++ b/gcc/optabs.cc
> > @@ -5119,13 +5119,43 @@ emit_conditional_move (rtx target, struct rtx_comparison comp,
> >       last = get_last_insn ();
> >       do_pending_stack_adjust ();
> >       machine_mode cmpmode = comp.mode;
> > +     rtx orig_op0 = XEXP (comparison, 0);
> > +     rtx orig_op1 = XEXP (comparison, 1);
> > +     rtx op2p = op2;
> > +     rtx op3p = op3;
> > +     /* If we are optimizing, force expensive constants into a register
> > +        but preserve an eventual equality with op2/op3.  */
> > +     if (CONSTANT_P (orig_op0) && optimize
> > +         && (rtx_cost (orig_op0, mode, COMPARE, 0,
> > +                       optimize_insn_for_speed_p ())
> > +             > COSTS_N_INSNS (1))
> > +         && can_create_pseudo_p ())
> > +       {
> > +         XEXP (comparison, 0) = force_reg (cmpmode, orig_op0);
> > +         if (rtx_equal_p (orig_op0, op2))
> > +           op2p = XEXP (comparison, 0);
> > +         if (rtx_equal_p (orig_op0, op3))
> > +           op3p = XEXP (comparison, 0);
> > +       }
> > +     if (CONSTANT_P (orig_op1) && optimize
> > +         && (rtx_cost (orig_op1, mode, COMPARE, 0,
> > +                       optimize_insn_for_speed_p ())
> > +             > COSTS_N_INSNS (1))
> > +         && can_create_pseudo_p ())
> > +       {
> > +         XEXP (comparison, 1) = force_reg (cmpmode, orig_op1);
> > +         if (rtx_equal_p (orig_op1, op2))
> > +           op2p = XEXP (comparison, 1);
> > +         if (rtx_equal_p (orig_op1, op3))
> > +           op3p = XEXP (comparison, 1);
> > +       }
> 
> I'm worried here, because prepare_cmp_insn before doing almost identical
> forcing to reg does
>   if (CONST_SCALAR_INT_P (y))
>     canonicalize_comparison (mode, &comparison, &y);
> which the above change will make not happen anymore (for the more expensive
> constants).

Hmm, yeah - that could happen.

> If we have a match between at least one of the comparison operands and
> op2/op3, I think having equivalency there is perhaps more important than
> the canonicalization, but it would be nice not to break it even if there
> is no match.  So, perhaps force_reg only if there is a match?
> force_reg (cmpmode, force_reg (cmpmode, x)) is equivalent to
> force_reg (cmpmode, x), so perhaps:
>           {
>             if (rtx_equal_p (orig_op0, op2))
>               op2p = XEXP (comparison, 0) = force_reg (cmpmode, orig_op0);
>             if (rtx_equal_p (orig_op0, op3))
>               op3p = XEXP (comparison, 0)
>                 = force_reg (cmpmode, XEXP (comparison, 0));
>           }
> and similarly for the other body?

I don't think we'll have op3 == op2 == orig_op0 because if
op2 == op3 the 

  /* If the two source operands are identical, that's just a move.  */

  if (rtx_equal_p (op2, op3))
    {
      if (!target)
        target = gen_reg_rtx (mode);

      emit_move_insn (target, op3);
      return target;

code should have triggered.  So we know force_reg is invoked at most
once per comparison operand?

So I'm going to test the following on top of the patch.

Thanks,
Richard.

diff --git a/gcc/optabs.cc b/gcc/optabs.cc
index a9ba3267666..2ac4b2698b2 100644
--- a/gcc/optabs.cc
+++ b/gcc/optabs.cc
@@ -5131,11 +5131,10 @@ emit_conditional_move (rtx target, struct rtx_comparison comp,
                  > COSTS_N_INSNS (1))
              && can_create_pseudo_p ())
            {
-             XEXP (comparison, 0) = force_reg (cmpmode, orig_op0);
              if (rtx_equal_p (orig_op0, op2))
-               op2p = XEXP (comparison, 0);
+               op2p = XEXP (comparison, 0) = force_reg (cmpmode, orig_op0);
              if (rtx_equal_p (orig_op0, op3))
-               op3p = XEXP (comparison, 0);
+               op3p = XEXP (comparison, 0) = force_reg (cmpmode, orig_op0);
            }
          if (CONSTANT_P (orig_op1) && optimize
              && (rtx_cost (orig_op1, mode, COMPARE, 0,
@@ -5143,11 +5142,10 @@ emit_conditional_move (rtx target, struct rtx_comparison comp,
                  > COSTS_N_INSNS (1))
              && can_create_pseudo_p ())
            {
-             XEXP (comparison, 1) = force_reg (cmpmode, orig_op1);
              if (rtx_equal_p (orig_op1, op2))
-               op2p = XEXP (comparison, 1);
+               op2p = XEXP (comparison, 1) = force_reg (cmpmode, orig_op1);
              if (rtx_equal_p (orig_op1, op3))
-               op3p = XEXP (comparison, 1);
+               op3p = XEXP (comparison, 1) = force_reg (cmpmode, orig_op1);
            }
          prepare_cmp_insn (XEXP (comparison, 0), XEXP (comparison, 1),
                            GET_CODE (comparison), NULL_RTX, unsignedp,
