On Thu, Jun 2, 2022 at 9:21 AM Roger Sayle <ro...@nextmovesoftware.com> wrote:
>
> The simple test case below demonstrates an interesting register
> allocation challenge facing x86_64, imposed by ABI requirements
> on int128.
>
> __int128 foo(__int128 x, __int128 y)
> {
>   return x+y;
> }
>
> For which GCC currently generates the unusual sequence:
>
>         movq    %rsi, %rax
>         movq    %rdi, %r8
>         movq    %rax, %rdi
>         movq    %rdx, %rax
>         movq    %rcx, %rdx
>         addq    %r8, %rax
>         adcq    %rdi, %rdx
>         ret
>
> The challenge is that the x86_64 ABI requires passing the first __int128,
> x, in %rsi:%rdi (highpart in %rsi, lowpart in %rdi), where internally
> GCC prefers TI mode (double word) integers to be register allocated as
> %rdi:%rsi (highpart in %rdi, lowpart in %rsi).
Do you know if this is a hard limitation? I guess (reg:TI 2) will cover
hardregs 2 and 3, with the overlap always implied by adjacent hardregs?
I suspect that in other places we prefer the current hardreg ordering, so
altering it to match the __int128 register-passing convention is not an
option. Alternatively, TImode ops could be split before RA, and
(concat:TI ...) could be allowed for register passing? Fixing things up
after the fact is of course possible, but it seems awkward that there's
no good way for the RA and the backend to communicate better here?

> So after reload, we have
> four mov instructions, two to move the double word to temporary registers
> and then two to move them back.
>
> This patch adds a peephole2 to spot this register shuffling, and with
> -Os generates an xchg instruction, to produce:
>
>         xchgq   %rsi, %rdi
>         movq    %rdx, %rax
>         movq    %rcx, %rdx
>         addq    %rsi, %rax
>         adcq    %rdi, %rdx
>         ret
>
> or when optimizing for speed, a three mov sequence, using just one of
> the temporary registers, which ultimately results in the improved:
>
>         movq    %rdi, %r8
>         movq    %rdx, %rax
>         movq    %rcx, %rdx
>         addq    %r8, %rax
>         adcq    %rsi, %rdx
>         ret
>
> I've a follow-up patch which improves things further, and with the
> output in flux, I'd like to add the new testcase with part 2, once
> we're back down to requiring only two movq instructions.
>
> This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> and make -k check, both with and without --target_board=unix{-m32}, with
> no new failures. Ok for mainline?
>
>
> 2022-06-02  Roger Sayle  <ro...@nextmovesoftware.com>
>
> gcc/ChangeLog
>         * config/i386/i386.md (define_peephole2): Recognize double word
>         swap sequences, and replace them with more efficient idioms,
>         including using xchg when optimizing for size.
>
>
> Thanks in advance,
> Roger
> --
>
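
For readers following the thread: the pattern itself isn't quoted above,
but a define_peephole2 for this transformation might look roughly like
the sketch below. This is an illustration of the idea only, not the
submitted patch: the operand-distinctness and liveness conditions are
abbreviated, and the DImode-only form ignores the -m32 double word case.
The predicates and helpers it uses (general_reg_operand,
peep2_reg_dead_p, optimize_insn_for_size_p, emit_move_insn) are existing
GCC/i386 facilities, but their exact arrangement here is illustrative.

;; Sketch: collapse a double word swap through two temporaries,
;;   tmp1 = A; tmp2 = B; A = tmp2; B = tmp1
;; into an xchg when optimizing for size, or three movs through a
;; single temporary when optimizing for speed.
(define_peephole2
  [(set (match_operand:DI 0 "general_reg_operand")      ; tmp1 = A
        (match_operand:DI 1 "general_reg_operand"))
   (set (match_operand:DI 2 "general_reg_operand")      ; tmp2 = B
        (match_operand:DI 3 "general_reg_operand"))
   (set (match_dup 1) (match_dup 2))                    ; A = tmp2
   (set (match_dup 3) (match_dup 0))]                   ; B = tmp1
  "REGNO (operands[0]) != REGNO (operands[2])
   && REGNO (operands[1]) != REGNO (operands[3])
   && peep2_reg_dead_p (4, operands[0])
   && peep2_reg_dead_p (4, operands[2])"
  ;; Both sets in the parallel read the old register values, so this
  ;; is a true swap, which the i386 *swap<mode> pattern emits as xchg.
  [(parallel [(set (match_dup 1) (match_dup 3))
              (set (match_dup 3) (match_dup 1))])]
{
  /* When optimizing for speed, avoid xchg's extra cost and emit
     three movs through a single temporary instead.  */
  if (!optimize_insn_for_size_p ())
    {
      emit_move_insn (operands[0], operands[1]);        /* tmp1 = A */
      emit_move_insn (operands[1], operands[3]);        /* A = B    */
      emit_move_insn (operands[3], operands[0]);        /* B = tmp1 */
      DONE;
    }
})

If the preparation statement falls through (the -Os case), peephole2
uses the replacement template and the parallel swap becomes the xchg;
the DONE path covers the three-mov speed variant described in the mail.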