https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110279

--- Comment #2 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Di Zhao <dz...@gcc.gnu.org>:

https://gcc.gnu.org/g:746344dd53807d840c29f52adba10d0ab093bd3d

commit r14-5779-g746344dd53807d840c29f52adba10d0ab093bd3d
Author: Di Zhao <diz...@os.amperecomputing.com>
Date:   Thu Nov 9 15:06:37 2023 +0800

    swap ops in reassoc to reduce cross backedge FMA

    Previously, for ops.length >= 3, when an FMA was present we did
    not rank the operands, so that more FMAs could be preserved. But
    this produces more FMAs with loop-carried dependencies, which
    leads to worse performance on some targets.

    Rank the operands (set width=2) when:
    1. avoid_fma_max_bits is set, and
    2. a loop-dependent FMA sequence is found.

    This way, we don't have to discard all the FMA candidates in the
    badly shaped sequence in widening_mul; instead, we can keep a
    smaller set of FMAs that have no loop-carried dependency.

    With this patch, there's about 2% improvement in 510.parest_r
    1-copy run on ampere1 (with "-Ofast -mcpu=ampere1 -flto
    --param avoid-fma-max-bits=512").

    PR tree-optimization/110279

    gcc/ChangeLog:

            * tree-ssa-reassoc.cc (get_reassociation_width): Check
            for loop-dependent FMAs.
            (reassociate_bb): For 3 ops, refine the condition to call
            swap_ops_for_binary_stmt.

    gcc/testsuite/ChangeLog:

            * gcc.dg/pr110279-1.c: New test.
