https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103393

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rearnsha at gcc dot gnu.org

--- Comment #3 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to H.J. Lu from comment #2)
> (In reply to Richard Biener from comment #1)
> > It isn't the vectorizer but memmove inline expansion.  I'm not sure it's
> > really a bug, but there isn't a way to disable %ymm use besides disabling
> > AVX entirely.
> > HJ?
> 
> YMM move is generated by loop distribution, which doesn't check
> TARGET_PREFER_AVX128.

I think it's generated by gimple_fold_builtin_memory_op, which since Richard's
changes now accepts bigger sizes, up to MOVE_MAX * MOVE_RATIO, and that ends
up picking an integer mode via

              scalar_int_mode mode;
              if (int_mode_for_size (ilen * 8, 0).exists (&mode)
                  && GET_MODE_SIZE (mode) * BITS_PER_UNIT == ilen * 8
                  && have_insn_for (SET, mode)
                  /* If the destination pointer is not aligned we must be able
                     to emit an unaligned store.  */
                  && (dest_align >= GET_MODE_ALIGNMENT (mode)
                      || !targetm.slow_unaligned_access (mode, dest_align)
                      || (optab_handler (movmisalign_optab, mode)
                          != CODE_FOR_nothing)))

I'm not sure whether there's another way to validate things here.
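
For illustration, a minimal reproducer sketch (hypothetical, not the testcase
from the PR): a 32-byte block copy that the folding above can turn into a
single integer-mode load/store, which the x86 backend then emits as unaligned
%ymm moves even with -mprefer-avx128.

/* Hypothetical sketch; compile with -O2 -mavx -mprefer-avx128.
   ilen == 32 here, so int_mode_for_size (256, 0) finds OImode, and
   have_insn_for (SET, OImode) holds once AVX is enabled; nothing in
   the check above consults TARGET_PREFER_AVX128.  */
struct s32 { char buf[32]; };

void
copy (struct s32 *d, const struct s32 *s)
{
  __builtin_memmove (d, s, sizeof (struct s32));
}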
