On Sat, Jan 4, 2014 at 10:29 AM, Kirill Yukhin <kirill.yuk...@gmail.com> wrote:
> Guys,
> On 04 Jan 10:09, Jakub Jelinek wrote:
>> Note I haven't tested the patch at all, perhaps some testcases wouldn't
>> match their regexps anymore (but probably the
>> gcc.target/i386/avx512f-vmovdqu32-1.c change could go away).
>>
>> --- gcc/config/i386/sse.md.jj 2014-01-04 09:48:48.000000000 +0100
>> +++ gcc/config/i386/sse.md    2014-01-04 10:03:30.256458372 +0100
>> @@ -743,9 +743,16 @@ (define_insn "*mov<mode>_internal"
>>       case MODE_XI:
>>         if (misaligned_operand (operands[0], <MODE>mode)
>>             || misaligned_operand (operands[1], <MODE>mode))
>> -         return "vmovdqu64\t{%1, %0|%0, %1}";
>> -       else
>> +         {
>> +           if (<MODE>mode == V8DImode)
>> +             return "vmovdqu64\t{%1, %0|%0, %1}";
>> +           else
>> +             return "vmovdqu32\t{%1, %0|%0, %1}";
>> +         }
>> +       else if (<MODE>mode == V8DImode)
>>           return "vmovdqa64\t{%1, %0|%0, %1}";
>> +       else
>> +         return "vmovdqa32\t{%1, %0|%0, %1}";
>>
>>       default:
>>         gcc_unreachable ();
> I think this hunk will make the generated code cleaner and will have
> no performance impact.

Yes, let's have consistent move insn mnemonics in the source.

Uros.
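
For reference, here is a minimal sketch (hypothetical, not one of the
existing testcases) of the kind of full-width integer move the pattern
handles.  Assuming -O2 -mavx512f and that the copy is expanded through
*mov<mode>_internal in V16SImode, the expectation with the hunk above
is a 32-bit element mnemonic (vmovdqa32, or vmovdqu32 for misaligned
operands) instead of vmovdqa64; the instructions are equivalent for
unmasked full-register moves, so the choice is purely cosmetic:

  /* Hypothetical example, not part of the patch or the testsuite.
     A plain 512-bit integer copy through a zmm register.  */
  typedef int v16si __attribute__ ((vector_size (64)));

  void
  copy512 (v16si *dst, const v16si *src)
  {
    *dst = *src;
  }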
