On Wed, Oct 16, 2013 at 6:06 PM, Kirill Yukhin <kirill.yuk...@gmail.com> wrote:

> It seems that a gang of AVX* patterns was copied and
> pasted from SSE; however, since they are NDD,
> we may remove the corresponding expands that sort operands.
>
> ChangeLog:
>         * config/i386/sse.md (vec_widen_umult_even_v8si): Remove expand,
>         make insn visible, remove redundant check.
>         (vec_widen_smult_even_v8si): Ditto.
>         (avx2_pmaddwd): Ditto.
>         (avx2_eq<mode>3): Ditto.
>         (avx512f_eq<mode>3): Ditto.
>

> -(define_insn "*vec_widen_smult_even_v8si"
> +(define_insn "vec_widen_smult_even_v8si"
>    [(set (match_operand:V4DI 0 "register_operand" "=x")
>         (mult:V4DI
>           (sign_extend:V4DI
>             (vec_select:V4SI
> -             (match_operand:V8SI 1 "nonimmediate_operand" "x")
> +             (match_operand:V8SI 1 "nonimmediate_operand" "%x")
>               (parallel [(const_int 0) (const_int 2)
>                          (const_int 4) (const_int 6)])))
>           (sign_extend:V4DI
> @@ -6166,7 +6134,7 @@
>               (match_operand:V8SI 2 "nonimmediate_operand" "xm")
>               (parallel [(const_int 0) (const_int 2)
>                          (const_int 4) (const_int 6)])))))]
> -  "TARGET_AVX2 && ix86_binary_operator_ok (MULT, V8SImode, operands)"
> +  "TARGET_AVX2"
>    "vpmuldq\t{%2, %1, %0|%0, %1, %2}"
>    [(set_attr "isa" "avx")

Please also remove the above set_attr; it is not needed when the insn
is constrained with TARGET_AVX2.
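
For reference, the change works because of two machine-description details: the "%" constraint modifier on operand 1 marks operands 1 and 2 as commutative, so the register allocator can swap them itself and no expand is needed to pre-sort the operands, and the insn condition TARGET_AVX2 already restricts the pattern, which is what makes the isa attribute redundant. A sketch of the resulting pattern, assembled from the hunks above (abridged; attributes other than those shown are omitted):

```lisp
;; Sketch only.  The "%" in "%x" tells the register allocator that
;; operands 1 and 2 may be commuted, so a memory operand in the
;; first position can be swapped into the "xm" slot of operand 2.
(define_insn "vec_widen_smult_even_v8si"
  [(set (match_operand:V4DI 0 "register_operand" "=x")
        (mult:V4DI
          (sign_extend:V4DI
            (vec_select:V4SI
              (match_operand:V8SI 1 "nonimmediate_operand" "%x")
              (parallel [(const_int 0) (const_int 2)
                         (const_int 4) (const_int 6)])))
          (sign_extend:V4DI
            (vec_select:V4SI
              (match_operand:V8SI 2 "nonimmediate_operand" "xm")
              (parallel [(const_int 0) (const_int 2)
                         (const_int 4) (const_int 6)])))))]
  ;; Insn condition already limits this to AVX2 targets, so
  ;; (set_attr "isa" "avx") adds nothing and can be dropped.
  "TARGET_AVX2"
  "vpmuldq\t{%2, %1, %0|%0, %1, %2}")
```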

OK with this addition.

Thanks,
Uros.