------- Comment #101 from dberlin at gcc dot gnu dot org  2009-02-21 04:13 -------
Subject: Re: [4.3/4.4 Regression] Inordinate compile times on large routines

PRE already gives up on this testcase, at least on my computer, and
takes no memory.
All of the memory here is being eaten by IRA and DF.
The actual time sink is SCCVN's DFS, which builds a large SCC, then
counts its size and gives up (which in turn causes PRE to give up).

It's not clear you can really modify this to give up earlier than it
does without a ton of work, since you don't know the size of the SCC
until it has already done all the work anyway.

I'm replacing this algorithm with a non-SCC-based one in 4.5.

On Fri, Feb 20, 2009 at 2:52 PM, lucier at math dot purdue dot edu
<gcc-bugzi...@gcc.gnu.org> wrote:
>
>
> ------- Comment #98 from lucier at math dot purdue dot edu  2009-02-20 19:52 -------
> Thank you, that indeed "fixes" the LICM problem.
>
> Based on some comments for this PR and for PR 39157 I thought that a similar
> patch might apply to PRE.
..

> I think the -O1 and -O2 limits for LICM are quite reasonable; would it be
> possible to limit PRE similarly so that one could compile compiler.i with -O2
> in a reasonable amount of memory?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26854
