Hi!

On a testcase with 100000 consecutive debug insns, sched2 spends a lot
of time calling prev_nonnote_nondebug_insn on each of the debug insns.
That work is completely useless: no target wants to fuse a non-debug
insn with a debug insn following it, fusion only makes sense for two
non-debug insns.

By returning early for debug insns, we walk the long run of them only
once, when we process the first non-debug instruction after such a block.
A small standalone sketch of the complexity argument follows.
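To illustrate why the early return turns quadratic work into linear work,
here is a minimal standalone sketch (not GCC code; the insn_t type and
walk_back function are made up for illustration and only model what
prev_nonnote_nondebug_insn does on a long block of debug insns):

#include <stdio.h>
#include <stdlib.h>

typedef struct { int is_debug; } insn_t;

/* Walk backwards from position POS to the nearest non-debug insn,
   counting every step taken, like prev_nonnote_nondebug_insn would.  */
static long
walk_back (const insn_t *insns, long pos, long *steps)
{
  for (long i = pos - 1; i >= 0; i--)
    {
      (*steps)++;
      if (!insns[i].is_debug)
	return i;
    }
  return -1;
}

int
main (void)
{
  /* 10000 here instead of the testcase's 100000, just to keep the
     sketch quick to run.  */
  const long n = 10000;
  insn_t *insns = calloc (n + 1, sizeof (insn_t));
  for (long i = 0; i < n; i++)
    insns[i].is_debug = 1;	/* long run of debug insns */
  insns[n].is_debug = 0;	/* one non-debug insn at the end */

  /* Old behaviour: walk back from every insn, debug or not.  */
  long steps_old = 0;
  for (long i = 0; i <= n; i++)
    walk_back (insns, i, &steps_old);

  /* Patched behaviour: skip debug insns, walk only from the non-debug
     insn after the block.  */
  long steps_new = 0;
  for (long i = 0; i <= n; i++)
    if (!insns[i].is_debug)
      walk_back (insns, i, &steps_new);

  printf ("old: %ld steps (quadratic), new: %ld steps (linear)\n",
	  steps_old, steps_new);
  free (insns);
  return 0;
}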

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2018-01-26  Jakub Jelinek  <ja...@redhat.com>

        PR middle-end/84040
        * sched-deps.c (sched_macro_fuse_insns): Return immediately for
        debug insns.

--- gcc/sched-deps.c.jj 2018-01-03 10:19:56.301534141 +0100
+++ gcc/sched-deps.c    2018-01-26 16:21:01.922414579 +0100
@@ -2834,10 +2834,16 @@ static void
 sched_macro_fuse_insns (rtx_insn *insn)
 {
   rtx_insn *prev;
+  /* No target's fusion hook would return true with a debug insn as
+     either operand, and on very large sequences of consecutive debug
+     insns calling prev_nonnote_nondebug_insn for each of them makes
+     compile time quadratic.  */
+  if (DEBUG_INSN_P (insn))
+    return;
   prev = prev_nonnote_nondebug_insn (insn);
   if (!prev)
     return;
- 
+
   if (any_condjump_p (insn))
     {
       unsigned int condreg1, condreg2;

        Jakub
