https://gcc.gnu.org/g:cc141b56b367b3d81c1b590e22ae174f1e013009

commit r15-3854-gcc141b56b367b3d81c1b590e22ae174f1e013009
Author: Richard Biener <rguent...@suse.de>
Date:   Tue Sep 24 14:10:13 2024 +0200

    rtl-optimization/114855 - slow add_store_equivs in IRA
    
    For the testcase in PR114855 at -O1, add_store_equivs shows up as
    the main sink for bitmap_set_bit because it uses a bitmap to mark
    all seen insns by UID, making sure the forward walk in
    memref_used_between_p will find the insn in question.  Given we do
    have a CFG here, the function's operation is questionable to begin
    with; since memref_used_between_p combined with the walk over all
    insns is obviously quadratic in the worst case, the whole thing
    should be re-done eventually.  But for the testcase, using an
    sbitmap of size get_max_uid () + 1 gets bitmap_set_bit off the
    profile and improves IRA time from 15.58s (8%) to 3.46s (2%).
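    
    A minimal sketch of the two idioms involved, using the GCC bitmap
    and sbitmap APIs; the loop body is condensed for illustration and
    is not the exact add_store_equivs code:
    
      /* Before: a sparse, chunked bitmap.  Each bitmap_set_bit may
         allocate and link a new element, which is what dominated the
         profile here.  */
      auto_bitmap seen_insns;
      for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
        bitmap_set_bit (seen_insns, INSN_UID (insn));
    
      /* After: a flat bit array sized by the maximum insn UID.  Setting
         a bit is a single word operation, but the array must be cleared
         explicitly before first use.  */
      auto_sbitmap seen_insns (get_max_uid () + 1);
      bitmap_clear (seen_insns);
      for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
        bitmap_set_bit (seen_insns, INSN_UID (insn));
    
    The trade-off is the usual sparse-versus-dense one: the sbitmap
    costs get_max_uid () + 1 bits up front regardless of how many insns
    are marked, in exchange for constant-time, allocation-free set and
    test operations.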
    
    Now, given the above quadratic behavior, I wonder whether we should
    instead gate add_store_equivs on optimize > 1 or
    flag_expensive_optimizations.
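    
    Not part of this commit, but such gating would be a small change at
    the call site, along the lines of:
    
      /* Hypothetical guard, using one of the conditions suggested
         above; the current code calls add_store_equivs
         unconditionally.  */
      if (flag_expensive_optimizations)
        add_store_equivs ();
    
    with flag_expensive_optimizations being the existing variable behind
    -fexpensive-optimizations.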
    
            PR rtl-optimization/114855
            * ira.cc (add_store_equivs): Use sbitmap for tracking
            visited insns.

Diff:
---
 gcc/ira.cc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/gcc/ira.cc b/gcc/ira.cc
index 9950ccac70bb..5231f63398e1 100644
--- a/gcc/ira.cc
+++ b/gcc/ira.cc
@@ -3838,7 +3838,8 @@ update_equiv_regs (void)
 static void
 add_store_equivs (void)
 {
-  auto_bitmap seen_insns;
+  auto_sbitmap seen_insns (get_max_uid () + 1);
+  bitmap_clear (seen_insns);
 
   for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
     {
