On Wed, Oct 16, 2013 at 7:01 PM, Richard Henderson <r...@redhat.com> wrote:
> On 10/16/2013 09:47 AM, Uros Bizjak wrote:
>> On Wed, Oct 16, 2013 at 6:06 PM, Kirill Yukhin <kirill.yuk...@gmail.com> wrote:
>>
>>> It seems that a gang of AVX* patterns was copied and pasted from
>>> SSE; however, since they are NDD (non-destructive destination),
>>> we may remove the corresponding expanders that sort operands.
>>
>> OTOH, I have some second thoughts on removing the AVX2 expanders.
>>
>> Please consider the situation where we have *both* operands in
>> memory and the insn is inside a loop. When reload comes around, it
>> will fix up one of the operands with a load from memory. However,
>> with the insn inside the loop, I suspect the load won't be moved
>> out of the loop.
>>
>> So I guess even the AVX/AVX2 insn patterns should call
>> ix86_fixup_binary_operands_*, and this fixup function should be
>> improved to load one of the operands into a register when both
>> operands are in memory.
>>
>> This also means that you still need expanders for the AVX512
>> commutative multiplies.
>
> Fair enough.
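To make the quoted scenario concrete, consider something along these
lines (an illustrative fragment only, not a testcase from the thread):

  void
  scale (double *out, const double *in, int n)
  {
    int i;
    for (i = 0; i < n; i++)
      /* Vectorizes to a multiply of in[] by a constant vector.  */
      out[i] = in[i] * 2.0;
  }

When this loop is vectorized, the multiply ends up with two memory
source operands: the load from in[] and the constant vector of 2.0s
from the constant pool. If the expander forces one of them into a
register, the constant-pool load becomes a separate, loop-invariant
insn that can be hoisted; if we leave it for reload to fix up, the
load is emitted inside the loop and stays there.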
I have checked ix86_fixup_binary_operands and ix86_binary_operator_ok,
and they should be OK for destructive (SSE2) and non-destructive (AVX)
commutative instructions. Vector instructions always have their
destination in a register, so most of the fixups and checks do not
apply at all.

We can probably use:

  {
    if (MEM_P (operands[1]) && MEM_P (operands[2]))
      operands[1] = force_reg (<MODE>mode, operands[1]);
  }

in the expanders and

  !(MEM_P (operands[1]) && MEM_P (operands[2]))

in the insn conditions for most of the SSE and AVX commutative insns.

Uros.
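P.S.: For concreteness, a rough sketch of how this could look for one
of the commutative FP patterns in sse.md. The pattern below is only
illustrative (mode iterator, constraints and attributes follow the
existing SSE/AVX add/mul patterns, and the AVX-512 masking/rounding
decorations are omitted), not the actual patch:

  (define_expand "mul<mode>3"
    [(set (match_operand:VF 0 "register_operand")
	  (mult:VF (match_operand:VF 1 "nonimmediate_operand")
		   (match_operand:VF 2 "nonimmediate_operand")))]
    "TARGET_SSE"
  {
    /* Both source operands cannot be in memory; load one of them
       into a register here, so the load can be hoisted out of loops
       instead of being inserted by reload inside the loop.  */
    if (MEM_P (operands[1]) && MEM_P (operands[2]))
      operands[1] = force_reg (<MODE>mode, operands[1]);
  })

  (define_insn "*mul<mode>3"
    [(set (match_operand:VF 0 "register_operand" "=x,v")
	  (mult:VF (match_operand:VF 1 "nonimmediate_operand" "%0,v")
		   (match_operand:VF 2 "nonimmediate_operand" "xm,vm")))]
    "TARGET_SSE && !(MEM_P (operands[1]) && MEM_P (operands[2]))"
    "@
     mul<ssemodesuffix>\t{%2, %0|%0, %2}
     vmul<ssemodesuffix>\t{%2, %1, %0|%0, %1, %2}"
    [(set_attr "isa" "noavx,avx")
     (set_attr "type" "ssemul")
     (set_attr "prefix" "orig,vex")
     (set_attr "mode" "<MODE>")])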