https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90851
Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Target|                            |x86_64-*-*, i?86-*-*
             Status|RESOLVED                    |NEW
           Keywords|                            |compile-time-hog,
                   |                            |memory-hog
   Last reconfirmed|                            |2019-06-12
          Component|c++                         |target
                 CC|                            |hjl at gcc dot gnu.org,
                   |                            |uros at gcc dot gnu.org
         Resolution|DUPLICATE                   |---
     Ever confirmed|0                           |1

--- Comment #2 from Richard Biener <rguenth at gcc dot gnu.org> ---
Confirmed.  It's the STV pass blowing memory requirements up from about 1.7GB
to 10GB (and more; I just stopped its execution) during df_analyze.

1588      df_chain_add_problem (DF_DU_CHAIN | DF_UD_CHAIN);
1589      df_md_add_problem ();
1590      df_analyze ();

Probably the MD problem, which looks quadratic in size.  DF is a memory hog:

(gdb) p max_reg_num ()
$24 = 200096
(gdb) p cfun->cfg->x_n_basic_blocks
$25 = 300009
(gdb) p cfun->cfg->x_basic_block_info.m_vecpfx
$27 = {m_alloc = 524287, m_using_auto_storage = 0, m_num = 481318}

The pass needs to limit itself, switching off with some heuristic based on
max_reg_num () * n_basic_blocks.  LRA uses nregs >= (1 << 26) / nbbs, which
would trigger here as well.

Alternatively, avoid DF.  It looks like the candidate computation can happen
before even setting up DF, avoiding all the work if there are no candidates.
The candidates could also provide a means to compress the bitmaps (only look
at interesting regs) - but DF doesn't provide this capability.  Probably DF
isn't even needed (for the whole function), and the whole pass could be
rewritten to work with a single RPO walk over the function doing the
analysis.