https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #49 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Richard Biener <rgue...@gcc.gnu.org>:

https://gcc.gnu.org/g:cc141b56b367b3d81c1b590e22ae174f1e013009

commit r15-3854-gcc141b56b367b3d81c1b590e22ae174f1e013009
Author: Richard Biener <rguent...@suse.de>
Date:   Tue Sep 24 14:10:13 2024 +0200

    rtl-optimization/114855 - slow add_store_equivs in IRA

    For the testcase in PR114855 at -O1, add_store_equivs shows up as
    the main sink for bitmap_set_bit because it uses a bitmap to mark
    all seen insns by UID, to make sure the forward walk in
    memref_used_between_p will find the insn in question.  Given we do
    have a CFG here, the function's operation is questionable: since
    memref_used_between_p together with the walk over all insns is
    obviously quadratic in the worst case, the whole thing should be
    re-done ... but, for the testcase, using an sbitmap of size
    get_max_uid () + 1 gets bitmap_set_bit off the profile and improves
    IRA time from 15.58s (8%) to 3.46s (2%).
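
    A minimal sketch of the pattern this describes, assuming GCC's
    internal bitmap/sbitmap APIs (bitmap.h, sbitmap.h, emit-rtl.h);
    the variable name is illustrative, and this is not the literal
    diff:

        /* Before: a sparse bitmap; each bitmap_set_bit may have to
           allocate and link a new element, which is what showed up
           in the bitmap_set_bit profile.  */
        bitmap_head seen_insns;
        bitmap_initialize (&seen_insns, &bitmap_default_obstack);
        for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
          bitmap_set_bit (&seen_insns, INSN_UID (insn));

        /* After: a flat, fixed-size sbitmap indexed by insn UID;
           setting a bit is a single word operation, no allocation.  */
        auto_sbitmap seen_insns (get_max_uid () + 1);
        bitmap_clear (seen_insns);
        for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
          bitmap_set_bit (seen_insns, INSN_UID (insn));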

    Now, given the above quadratic behavior, I wonder whether we should
    instead gate add_store_equivs on optimize > 1 or
    flag_expensive_optimizations.
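
    If such gating were adopted (hypothetical, not part of this commit;
    optimize and flag_expensive_optimizations are the existing GCC
    globals), the call site might look like:

        /* Hypothetical gating sketch, not in the committed patch.  */
        if (optimize > 1 || flag_expensive_optimizations)
          add_store_equivs ();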

            PR rtl-optimization/114855
            * ira.cc (add_store_equivs): Use sbitmap for tracking
            visited insns.
