On 2017.04.09 at 21:25 +0300, Alexander Monakov wrote:
> On Sun, 9 Apr 2017, Markus Trippelsdorf wrote:
>
> > The minimum size heuristic for the garbage collector's heap, before it
> > starts collecting, was last updated over ten years ago.
> > It currently has a hard upper limit of 128MB.
> > This is too low for current machines where 8GB of RAM is normal.
> > So, it seems to me, a new upper bound of 1GB would be appropriate.
>
> While the amount of available RAM has grown, so has the number of available
> CPU cores (counteracting RAM growth for parallel builds). Building under a
> virtualized environment with less than the host's RAM has also become more
> common, I think.
>
> Bumping it all the way up to 1GB seems excessive; how did you arrive at that
> figure? E.g. my recollection from watching a Firefox build is that most
> compiler instances need under 0.5GB (RSS).
1GB was just a number I picked to get the discussion going. And you are
right, 512MB looks like a good compromise.

> > Compile times of large C++ projects improve by over 10% due to this
> > change.
>
> Can you explain a bit more what projects you've tested? 10+% looks
> surprisingly high to me.

I've checked LLVM build times on ppc64le and x86_64. But you can also
observe the effect with a single big C++ file like tramp3d-v4.cpp. On my
old machine:

 --param ggc-min-heapsize=131072  : 26.97 secs / 711MB peak memory (current default)
 --param ggc-min-heapsize=393216  : 26.04 secs / 886MB peak memory
 --param ggc-min-heapsize=524288  : 25.37 secs / 983MB peak memory
 --param ggc-min-heapsize=1000000 : 25.36 secs / 990MB peak memory

(A rough recipe for reproducing these numbers is sketched at the end of
this message.)

> > What do you think?
>
> I wonder if it's possible to reap most of the compile time benefit with a
> bit more modest gc threshold increase?

512MB looks like the sweet spot. And of course one is basically trading
memory usage for compile-time performance.

-- 
Markus
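P.S.: For anyone who wants to reproduce the numbers above, something along
the lines below should work (a sketch, not my exact setup; the -O2 flag,
GNU time, and the grep patterns are assumptions):

  # Compile tramp3d-v4.cpp once per GC threshold (values are in kB) and
  # report wall-clock time plus peak RSS from GNU time's -v output.
  for size in 131072 393216 524288 1000000; do
    echo "--param ggc-min-heapsize=$size"
    /usr/bin/time -v g++ -O2 -c tramp3d-v4.cpp --param ggc-min-heapsize=$size \
      2>&1 | grep -E 'Elapsed \(wall clock\)|Maximum resident set size'
  done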