http://gcc.gnu.org/bugzilla/show_bug.cgi?id=50636
--- Comment #1 from Jan Hubicka <hubicka at ucw dot cz> 2011-10-06 19:20:45 UTC ---
> When doing a very large LTO build I fail with "out of virtual memory"
>
> Some investigation showed that the problem was not actually running out of
> memory, but gcc excessively fragmenting its memory map. The Linux kernel
> has a default limit of 64k mappings per process, and the fragmentation
> exceeded that. This led to gc mmap allocations failing and other problems.
>
> A workaround is to increase /proc/sys/vm/max_map_count
>
> Looking at /proc/$(pidof lto1)/maps I see there are lots of 1-3 page holes
> between other anonymous memory. I think that's caused by ggc-page's
> free_pages() function freeing too early and in too small chunks
> (and perhaps LTO garbage collecting too much?)

Back in gcc-2.95 times, ggc-page was probably not written with 8GB of memory
use in mind :)

Perhaps ggc-page should simply increase the release chunk size as memory use
grows? (i.e. release pages to the system only when 20% of memory is unused
and it exceeds some minimal value)

Honza
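For illustration, a minimal sketch of the heuristic suggested above — not actual ggc-page code. The idea is to keep freed pages pooled and only hand memory back to the kernel in large batches, once the free pool exceeds both a fixed floor and 20% of total allocated memory; all names here (free_pool, total_allocated, MIN_RELEASE_BYTES) are hypothetical:

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical floor: never bother releasing less than 32MB.  */
#define MIN_RELEASE_BYTES ((size_t) 32 * 1024 * 1024)

static size_t total_allocated;  /* bytes currently mapped from the OS */
static size_t free_pool;        /* bytes sitting unused on the free list */

/* Decide whether it is worth handing memory back to the kernel.
   Releasing only large batches keeps the mapping count low and avoids
   the 1-3 page munmap holes seen in /proc/<pid>/maps.  */
static bool
should_release_to_system (void)
{
  return free_pool > MIN_RELEASE_BYTES
         && free_pool * 5 > total_allocated;  /* > 20% of memory unused */
}
```

On this scheme, free_pages() would put pages on the free list unconditionally and call munmap only when should_release_to_system() holds, growing the effective release chunk as total memory use grows.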