> Pengxuan Zheng <quic_pzh...@quicinc.com> writes:
> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> > index 15f08cebeb1..98ce85dfdae 100644
> > --- a/gcc/config/aarch64/aarch64.cc
> > +++ b/gcc/config/aarch64/aarch64.cc
> > @@ -23621,6 +23621,36 @@ aarch64_simd_valid_and_imm (rtx op)
> >    return aarch64_simd_valid_imm (op, NULL, AARCH64_CHECK_AND);
> >  }
> >
> > +/* Return true if OP is a valid SIMD and immediate which allows the and be
> 
> s/and be/and to be/
> 
> > +   optimized as fmov.  If ELT_SIZE is nonnull, it represents the size of the
> > +   register for fmov.  */
> 
> Maybe rename this to ELT_BITSIZE (see below), and say:
> 
>   If ELT_BITSIZE is nonnull, use it to return the number of bits to move.
> 
> > +bool
> > +aarch64_simd_valid_and_imm_fmov (rtx op, unsigned int *elt_size)
> > +{
> > +  machine_mode mode = GET_MODE (op);
> > +  gcc_assert (!aarch64_sve_mode_p (mode));
> > +
> > +  auto_vec<target_unit, 16> buffer;
> > +  unsigned int n_bytes = GET_MODE_SIZE (mode).to_constant ();
> > +  buffer.reserve (n_bytes);
> > +
> > +  bool ok = native_encode_rtx (mode, op, buffer, 0, n_bytes);
> > +  gcc_assert (ok);
> > +
> > +  auto mask = native_decode_int (buffer, 0, n_bytes, n_bytes * BITS_PER_UNIT);
> > +  int set_bit = wi::exact_log2 (mask + 1);
> > +  if ((set_bit == 16 && TARGET_SIMD_F16INST)
> > +      || set_bit == 32
> > +      || set_bit == 64)
> > +    {
> > +      if (elt_size)
> > +   *elt_size = set_bit / BITS_PER_UNIT;
> 
> I didn't notice last time that the only consumer multiplies by BITS_PER_UNIT
> again, so how about making this:
> 
>   *elt_bitsize = set_bit;
> 
> and removing the later multiplication.
> 
> Please leave 24 hours for others to comment, but otherwise the patch is ok
> with those changes, thanks.
> 
> Richard

Thanks, Richard! I've updated the patch accordingly and pushed it as
r16-703-g0417a630811404.
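
For reference, here is a sketch of how the function reads with both of your
suggestions applied (ELT_SIZE renamed to ELT_BITSIZE and the bit count
returned directly).  The end of the function was not quoted above, so the
closing return statements here are inferred rather than copied from the
committed revision:

/* Return true if OP is a valid SIMD and immediate which allows the and to be
   optimized as fmov.  If ELT_BITSIZE is nonnull, use it to return the number
   of bits to move.  */
bool
aarch64_simd_valid_and_imm_fmov (rtx op, unsigned int *elt_bitsize)
{
  machine_mode mode = GET_MODE (op);
  gcc_assert (!aarch64_sve_mode_p (mode));

  auto_vec<target_unit, 16> buffer;
  unsigned int n_bytes = GET_MODE_SIZE (mode).to_constant ();
  buffer.reserve (n_bytes);

  bool ok = native_encode_rtx (mode, op, buffer, 0, n_bytes);
  gcc_assert (ok);

  /* A mask of the form 2^N - 1 keeps only the low N bits of the register,
     which an fmov of an N-bit value (zeroing the upper bits) can express.  */
  auto mask = native_decode_int (buffer, 0, n_bytes, n_bytes * BITS_PER_UNIT);
  int set_bit = wi::exact_log2 (mask + 1);
  if ((set_bit == 16 && TARGET_SIMD_F16INST)
      || set_bit == 32
      || set_bit == 64)
    {
      if (elt_bitsize)
	*elt_bitsize = set_bit;
      return true;
    }

  return false;
}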

Pengxuan
