https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87316
--- Comment #4 from David <david at pgmasters dot net> ---
Hmm, yeah, increasing the memory a bit (4GB -> 5GB) leads to a successful compile. I guess it is expected that newer versions use more memory -- more features, etc.

The interesting thing is that I was combining test units in a bid to save time. Each unit has to start a container, compile/link, and then run the test with gcov. What actually happened was that the one combined test on gcc 7 (once I got the memory bumped up) ran about 2.5 times slower than the three separate tests. That's compile/link plus execution. Unfortunately the test scheduler does not let me separate those times, but if I run the test manually it completes in < 1 second, which means the other 41 seconds are compile time.

Older compilers (<= gcc 4) showed no regression with the combined test file, which was overall more efficient than running the tests separately. That advantage went away with gcc 5: even though it is more memory efficient than gcc 7, it is just about as slow.

I had thought the gcc 7 compiles/tests were slower because that's where we do our coverage testing, but after disabling coverage I see little change in speed -- gcc 7 is about six times slower than the older compilers in my tests, aside from using a lot more memory.

It could be that there's a variable I'm missing here, but in general, is this expected behavior from newer versions of gcc?
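
Something like this is what I mean by timing the compile and the run separately when I do it by hand (illustrative command lines, not our exact build):

  # compile/link only (coverage enabled, as in the real test)
  time gcc -O2 --coverage -o test-combined test-combined.c

  # run only
  time ./test-combined

gcc's -ftime-report option also prints a per-pass breakdown, which might show where the extra compile time is going.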