https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98856

--- Comment #7 from Richard Biener <rguenth at gcc dot gnu.org> ---
OK, and the spill is likely because we expand as

(insn 7 6 0 (set (reg:TI 84 [ _9 ])
        (mem:TI (reg/v/f:DI 93 [ in ]) [0 MEM <__int128 unsigned> [(char *
{ref-all})in_8(D)]+0 S16 A8])) -1
     (nil))

(insn 8 7 9 (parallel [
            (set (reg:DI 95)
                (lshiftrt:DI (subreg:DI (reg:TI 84 [ _9 ]) 8)
                    (const_int 63 [0x3f])))
            (clobber (reg:CC 17 flags))
        ]) "t.c":7:26 -1
     (nil))

^^^ (subreg:DI (reg:TI 84 [ _9 ]) 8)

...

(insn 12 11 13 (set (reg:V2DI 98 [ vect__5.3 ])
        (ashift:V2DI (subreg:V2DI (reg:TI 84 [ _9 ]) 0)
            (const_int 1 [0x1]))) "t.c":9:16 -1
     (nil))

^^^ (subreg:V2DI (reg:TI 84 [ _9 ]) 0)
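
For reference, the kind of source this comes from (a sketch only, assuming
the testcase is the usual XTS little-endian GF(2^128) doubling; the exact
signature and types may differ):

#include <stdint.h>
#include <string.h>

void poly_double_le2 (uint8_t out[16], const uint8_t in[16])
{
  uint64_t w[2];
  memcpy (w, in, 16);     /* folded to the MEM <__int128 unsigned> load */

  /* The bit shifted out of the high word is reduced with 0x87.  */
  const uint64_t carry = (w[1] >> 63) * 135;   /* shrq $63; imulq $135 */
  const uint64_t mid = w[0] >> 63;             /* the other shrq $63 */

  w[0] = (w[0] << 1) ^ carry;   /* the shifts get vectorized ...      */
  w[1] = (w[1] << 1) ^ mid;     /* ... to vpsllq $1 plus vpxor        */

  memcpy (out, w, 16);
}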

LRA then does

         Choosing alt 4 in insn 7:  (0) v  (1) vm {*movti_internal}
      Creating newreg=103 from oldreg=84, assigning class ALL_SSE_REGS to r103
    7: r103:TI=[r101:DI]
      REG_DEAD r101:DI
    Inserting insn reload after:
   20: r84:TI=r103:TI

         Choosing alt 0 in insn 8:  (0) =rm  (1) 0  (2) cJ {*lshrdi3_1}
      Creating newreg=104 from oldreg=95, assigning class GENERAL_REGS to r104

    Inserting insn reload before:
   21: r104:DI=r84:TI#8

but somehow this means reload 20 is used for reload 21 instead of
avoiding reload 20 and doing a movhlps / movq combo?  (I guess there's
no high-part xmm extract to a gpr)
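
To be concrete about that combo (a sketch with intrinsics, not a claim about
what LRA can emit here; the helper names are made up, and the second variant
assumes SSE4.1):

#include <stdint.h>
#include <immintrin.h>

/* High DImode half of an xmm reg into a gpr without a stack round-trip.  */

static inline uint64_t
hi64_sse2 (__m128i v)
{
  /* punpckhqdq (or movhlps) to bring the high qword down, then movq.  */
  return (uint64_t) _mm_cvtsi128_si64 (_mm_unpackhi_epi64 (v, v));
}

static inline uint64_t
hi64_sse41 (__m128i v)
{
  /* A single vpextrq $1.  */
  return (uint64_t) _mm_extract_epi64 (v, 1);
}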

As said, the assembly is a bit weird:

poly_double_le2:
.LFB0:
        .cfi_startproc
        vmovdqu (%rsi), %xmm2
        vmovdqa %xmm2, -24(%rsp)
        movq    -16(%rsp), %rax

ok, well ...

        vmovdqa -24(%rsp), %xmm3

???

        shrq    $63, %rax
        imulq   $135, %rax, %rax
        vmovq   %rax, %xmm0
        movq    -24(%rsp), %rax

???  movq %xmm2/3, %rax

        vpsllq  $1, %xmm3, %xmm1
        shrq    $63, %rax
        vpinsrq $1, %rax, %xmm0, %xmm0
        vpxor   %xmm1, %xmm0, %xmm0
        vmovdqu %xmm0, (%rdi)

Note that even with -march=core-avx2 (and thus inter-unit moves not
pessimized) we get

poly_double_le2:
.LFB0:
        .cfi_startproc
        vmovdqu (%rsi), %xmm2
        vmovdqa %xmm2, -24(%rsp)
        movq    -16(%rsp), %rax
        vmovdqa -24(%rsp), %xmm3
        shrq    $63, %rax
        vpsllq  $1, %xmm3, %xmm1
        imulq   $135, %rax, %rax
        vmovq   %rax, %xmm0
        movq    -24(%rsp), %rax
        shrq    $63, %rax
        vpinsrq $1, %rax, %xmm0, %xmm0
        vpxor   %xmm1, %xmm0, %xmm0
        vmovdqu %xmm0, (%rdi)

with

.L56:
        .cfi_restore_state
        vmovdqu (%rsi), %xmm4
        movq    8(%rsi), %rdx
        shrq    $63, %rdx
        imulq   $135, %rdx, %rdi
        movq    8(%rsi), %rdx
        vmovq   %rdi, %xmm0
        vpsllq  $1, %xmm4, %xmm1
        shrq    $63, %rdx
        vpinsrq $1, %rdx, %xmm0, %xmm0
        vpxor   %xmm1, %xmm0, %xmm0
        vmovdqu %xmm0, (%rax)
        jmp     .L53

we arrive at

AES-128/XTS 672043 key schedule/sec; 0.00 ms/op 4978.00 cycles/op (2 ops in 0.00 ms)
AES-128/XTS encrypt buffer size 1024 bytes: 843.310 MiB/sec 4.18 cycles/byte (421.66 MiB in 500.00 ms)
AES-128/XTS decrypt buffer size 1024 bytes: 847.215 MiB/sec 4.16 cycles/byte (421.66 MiB in 497.70 ms)

A variant using movhlps isn't any faster than spilling, unfortunately :/
I guess re-materializing from a load is too much to ask of LRA.

On the vectorizer side the costing is 52 scalar vs. 40 vector (as usual,
the vectorized store alone leads to a big boost).
