On 2012-Oct-31 12:58:18 -0700, Alfred Perlstein <[email protected]> wrote:
>It seems like the new compiler likes to get up to ~200+MB resident when
>building some basic things in our tree.
The killer I found was the ctfmerge(1) step on the kernel, which exceeds
~400MB on i386.  Under low RAM, that fails _without_ reporting any errors
back to make(1), resulting in a corrupt new kernel (it booted but had
virtually no devices, so it couldn't find root).

>Doesn't our make(1) have some stuff to mitigate this? I would expect it
>to be a bit smarter about detecting the number of swaps/pages/faults of
>its children and taking into account the machine's total ram before
>forking off new processes.

The difficulty I see is that the make process can't tell anything about
the memory requirements of the pipeline it is about to spawn.  As a rule
of thumb, C++ needs more memory than C, but that depends on what is being
compiled - I have a machine-generated C program that makes gcc bloat to
~12GB.

>Any ideas? I mean a really simple algorithm could be devised that would
>be better than what we appear to have (which is nothing).

If you can afford to waste CPU, one approach would be for make(1) to
setrlimit(2) its child processes and, if a child dies, retry that child
by itself - but that will generate unnecessary retries (a rough sketch of
what I mean is below).

Another, more involved, approach would be for the scheduler to manage
groups of processes - if a group of processes is causing memory pressure
as a whole, the scheduler just stops scheduling some of them until the
pressure reduces (effectively swapping them out).  (Yes, that's vague and
a lot of hand-waving that might not be realisable.)
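Something along these lines is what I had in mind for the setrlimit(2)
idea - purely a sketch, not a patch: the run_capped() wrapper and the
fixed 512MB cap are made up for illustration, and a real make(1) would
want to derive the cap from available RAM and the job count, and drain
its other jobs before the uncapped retry so the child has the machine to
itself:

/*
 * Sketch of "cap the child, retry uncapped if it dies".
 * The 512MB cap is an arbitrary placeholder.
 */
#include <sys/resource.h>
#include <sys/wait.h>

#include <err.h>
#include <stdlib.h>
#include <unistd.h>

#define SOFT_CAP	(512UL * 1024 * 1024)	/* hypothetical per-child cap */

static int
run_capped(char *const argv[], rlim_t cap)
{
	pid_t pid;
	int status;

	switch (pid = fork()) {
	case -1:
		err(1, "fork");
	case 0:
		if (cap != RLIM_INFINITY) {
			struct rlimit rl = { cap, cap };

			/* Cap the child's address space. */
			if (setrlimit(RLIMIT_AS, &rl) == -1)
				warn("setrlimit");
		}
		execvp(argv[0], argv);
		err(127, "execvp %s", argv[0]);
	default:
		if (waitpid(pid, &status, 0) == -1)
			err(1, "waitpid");
		return (status);
	}
}

int
main(int argc, char *argv[])
{
	int status;

	if (argc < 2)
		errx(1, "usage: %s command [args ...]", argv[0]);

	/* First attempt: run the job under the (guessed) memory cap. */
	status = run_capped(argv + 1, SOFT_CAP);

	/*
	 * If it failed - possibly from hitting the cap - retry it once
	 * with no cap.  This is where the unnecessary retries come from:
	 * we can't tell a genuine failure from an out-of-memory one.
	 */
	if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
		status = run_capped(argv + 1, RLIM_INFINITY);

	return (WIFEXITED(status) ? WEXITSTATUS(status) : 1);
}

The obvious downside shows up in the retry test: the parent can't
distinguish "died because it hit the cap" from "died because the job is
genuinely broken", so genuinely broken jobs get run twice.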
-- 
Peter Jeremy

