https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113196

--- Comment #2 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The trunk branch has been updated by Richard Sandiford <rsand...@gcc.gnu.org>:

https://gcc.gnu.org/g:74e3e839ab2d368413207455af2fdaaacc73842b

commit r14-7187-g74e3e839ab2d368413207455af2fdaaacc73842b
Author: Richard Sandiford <richard.sandif...@arm.com>
Date:   Fri Jan 12 12:38:01 2024 +0000

    aarch64: Rework uxtl->zip optimisation [PR113196]

    g:f26f92b534f9 implemented unsigned extensions using ZIPs rather than
    UXTL{,2}, since the former has a higher throughput than the latter on
    many cores.  The optimisation worked by lowering directly to ZIP during
    expand, so that the zero input could be hoisted and shared.
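    For illustration (a sketch of my own, not taken from the patch; the
    function name is made up), the kind of code affected is a simple
    unsigned widening loop, which the vectoriser expands through the
    vec_unpacku_lo/hi patterns:

    #include <stdint.h>

    /* Hypothetical example, not from the patch: when autovectorised, the
       u8 -> u16 extension was emitted as UXTL/UXTL2 before f26f92b534f9
       and as ZIP1/ZIP2 against a zero vector afterwards, with the zero
       hoisted out of the loop so that a single MOVI serves every
       iteration.  */
    void
    widen_u8_to_u16 (uint16_t *restrict dst, const uint8_t *restrict src,
                     int n)
    {
      for (int i = 0; i < n; i++)
        dst[i] = src[i];
    }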

    However, changing to ZIP means that zero extensions no longer benefit
    from some existing combine patterns.  The patch included new patterns
    for UADDW and USUBW, but the PR shows that other patterns were affected
    as well.
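    For example (again my own sketch, not from the patch), a widening
    accumulation like the following was previously combined into
    UADDW/UADDW2; once the extension is expressed as a ZIP, that
    combination needs dedicated patterns:

    #include <stdint.h>

    /* Hypothetical example, not from the patch: a widening accumulate
       that combine used to fuse into UADDW/UADDW2 (the analogous
       subtraction fuses into USUBW/USUBW2).  */
    void
    acc_u8_into_u16 (uint16_t *restrict acc, const uint8_t *restrict src,
                     int n)
    {
      for (int i = 0; i < n; i++)
        acc[i] += src[i];
    }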

    This patch instead introduces the ZIPs during a pre-reload split
    and forcibly hoists the zero move to the outermost scope.  This has
    the disadvantage of executing the move even for a shrink-wrapped
    function, which I suppose could be a problem if it causes a kernel
    to trap and enable Advanced SIMD unnecessarily.  In other circumstances,
    an unused move shouldn't affect things much.

    Also, the RA should be able to rematerialise the move at an
    appropriate point if necessary, such as if there is an intervening
    call.
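    To illustrate both points (a sketch of my own, not part of the patch;
    the function and parameter names are hypothetical), consider a
    widening loop guarded by an early return and followed by a call:

    #include <stdint.h>

    extern void consume (const uint16_t *, int);

    /* Hypothetical example, not from the patch.  With the zero vector
       hoisted to the entry block, its MOVI executes even when the early
       return is taken, which defeats shrink-wrapping for that path.  The
       zero is also nominally live across the call to consume, but since
       it is a trivially rematerialisable constant, the RA can simply
       re-emit it after the call instead of preserving it.  */
    void
    widen_guarded (uint16_t *restrict d1, uint16_t *restrict d2,
                   const uint8_t *restrict s1, const uint8_t *restrict s2,
                   int n)
    {
      if (n <= 0)
        return;                 /* Early-return path now runs the MOVI too.  */
      for (int i = 0; i < n; i++)
        d1[i] = s1[i];          /* Vectorised: ZIP1/ZIP2 with the shared zero.  */
      consume (d1, n);          /* Zero is live across the call...  */
      for (int i = 0; i < n; i++)
        d2[i] = s2[i];          /* ...or can be rematerialised here.  */
    }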

    In https://gcc.gnu.org/pipermail/gcc-patches/2024-January/641948.html
    I'd then tried to allow a zero to be recombined back into a solitary
    ZIP.  However, that relied on late-combine, which didn't make it into
    GCC 14.  This version instead restricts the split to cases where the
    UXTL executes more frequently than the entry block (which is where we
    plan to put the zero).

    Also, the original optimisation contained a big-endian correction
    that I don't think is needed/correct.  Even on big-endian targets,
    we want the ZIP to take the low half of an element from the input
    vector and the high half from the zero vector.  And the patterns
    map directly to the underlying Advanced SIMD instructions: the use
    of unspecs means that there's no need to adjust for the difference
    between GCC and Arm lane numbering.
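    As a concrete illustration of the lane semantics (my own sketch using
    ACLE intrinsics, not part of the patch; shown for little-endian lane
    numbering), interleaving the input with a zero vector reproduces the
    zero-extension of its low half:

    #include <arm_neon.h>

    /* Hypothetical example, not from the patch: ZIP1 places each input
       byte in the low half of a 16-bit element and a zero byte in the
       high half, which matches UXTL of the low 64 bits of the input.  */
    uint16x8_t
    uxtl_via_zip1 (uint8x16_t x)
    {
      uint8x16_t zero = vdupq_n_u8 (0);
      return vreinterpretq_u16_u8 (vzip1q_u8 (x, zero));
    }

    /* Reference version using the conventional widening intrinsic.  */
    uint16x8_t
    uxtl_reference (uint8x16_t x)
    {
      return vmovl_u8 (vget_low_u8 (x));
    }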

    gcc/
            PR target/113196
            * config/aarch64/aarch64.h (machine_function::advsimd_zero_insn):
            New member variable.
            * config/aarch64/aarch64-protos.h (aarch64_split_simd_shift_p):
            Declare.
            * config/aarch64/iterators.md (Vnarrowq2): New mode attribute.
            * config/aarch64/aarch64-simd.md
            (vec_unpacku_hi_<mode>, vec_unpacks_hi_<mode>): Recombine into...
            (vec_unpack<su>_hi_<mode>): ...this.  Move the generation of
            zip2 for zero-extends to...
            (aarch64_simd_vec_unpack<su>_hi_<mode>): ...a split of this
            instruction.  Fix big-endian handling.
            (vec_unpacku_lo_<mode>, vec_unpacks_lo_<mode>): Recombine into...
            (vec_unpack<su>_lo_<mode>): ...this.  Move the generation of
            zip1 for zero-extends to...
            (<optab><Vnarrowq><mode>2): ...a split of this instruction.
            Fix big-endian handling.
            (*aarch64_zip1_uxtl): New pattern.
            (aarch64_usubw<mode>_lo_zip, aarch64_uaddw<mode>_lo_zip): Delete.
            (aarch64_usubw<mode>_hi_zip, aarch64_uaddw<mode>_hi_zip): Likewise.
            * config/aarch64/aarch64.cc (aarch64_get_shareable_reg): New
            function.
            (aarch64_gen_shareable_zero): Use it.
            (aarch64_split_simd_shift_p): New function.

    gcc/testsuite/
            PR target/113196
            * gcc.target/aarch64/pr113196.c: New test.
            * gcc.target/aarch64/simd/vmovl_high_1.c: Remove double include.
            Expect uxtl2 rather than zip2.
            * gcc.target/aarch64/vect_mixed_sizes_8.c: Expect zip1 rather
            than uxtl.
            * gcc.target/aarch64/vect_mixed_sizes_9.c: Likewise.
            * gcc.target/aarch64/vect_mixed_sizes_10.c: Likewise.
