https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98532

            Bug ID: 98532
           Summary: Use load/store pairs for 2-element vector in memory
                    permutes
           Product: gcc
           Version: unknown
            Status: UNCONFIRMED
          Keywords: missed-optimization
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: ktkachov at gcc dot gnu.org
  Target Milestone: ---
            Target: aarch64

I've seen these patterns while looking at some disassemblies, but I believe they
can be reproduced in C with:
typedef long v2di __attribute__((vector_size (16)));

void
foo (v2di *a, v2di *b)
{
  v2di tmp = {(*a)[1], (*a)[0]};
  *b = tmp;
}

For aarch64 at -O2 this generates:
foo:
        ldr     d0, [x0, 8]
        ld1     {v0.d}[1], [x0]
        str     q0, [x1]
        ret

clang does:
foo:                                    // @foo
        ldr     q0, [x0]
        ext     v0.16b, v0.16b, v0.16b, #8
        str     q0, [x1]
        ret

I suspect we can do better in these cases with:
ldp x2, x3, [x0]
stp x3, x2, [x1]
or something similar.
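For comparison, the same permute can be written with two scalar element loads and two scalar stores, which are natural candidates for ldp/stp formation (foo_scalar is an illustrative name, not part of the report, and whether GCC actually fuses these into load/store pairs depends on the version):

```c
typedef long v2di __attribute__((vector_size (16)));

/* Same semantics as foo above: swap the two 64-bit halves of *a
   and store the result to *b, expressed as scalar DImode accesses
   that a load/store-pair pass could fuse into ldp/stp.  */
void
foo_scalar (v2di *a, v2di *b)
{
  long lo = (*a)[0];
  long hi = (*a)[1];
  (*b)[0] = hi;
  (*b)[1] = lo;
}
```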
In the combine phase we already try and fail to match:
Failed to match this instruction:
(set (reg:V2DI 97 [ tmp ])
    (vec_concat:V2DI (mem/j:DI (plus:DI (reg/v/f:DI 95 [ a ])
                (const_int 8 [0x8])) [1 BIT_FIELD_REF <*a_4(D), 64, 64>+0 S8
A64])
        (mem/j:DI (reg/v/f:DI 95 [ a ]) [1 BIT_FIELD_REF <*a_4(D), 64, 0>+0 S8
A128])))


So maybe we can solve this purely in the backend?
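One backend option might be a pattern that matches the vec_concat of the two DImode loads directly, so that later passes can emit the pair instructions. A very rough, untested sketch (the pattern name, constraints, and the adjacency predicate aarch64_mems_adjacent_p are all hypothetical, not existing aarch64.md content):

```lisp
;; Hypothetical sketch only -- not a tested aarch64.md change.
;; Match a V2DI vec_concat whose halves are DImode loads from
;; adjacent addresses, so a split can emit ldp/stp.
(define_insn_and_split "*aarch64_vec_concat_swapped_mem"
  [(set (match_operand:V2DI 0 "register_operand" "=r")
        (vec_concat:V2DI (match_operand:DI 1 "memory_operand" "m")
                         (match_operand:DI 2 "memory_operand" "m")))]
  "TARGET_SIMD && aarch64_mems_adjacent_p (operands[2], operands[1])"
  "#"
  "&& reload_completed"
  [(const_int 0)]
  {
    /* Emit an ldp of the two halves into a GP register pair;
       details elided -- this only illustrates where such a
       pattern would live.  */
    ...
  }
)
```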
