Paul Smith <psm...@gnu.org> writes:

> But I agree with, and others have requested that, parallelism being
> limited in some way by available _memory_ and not just available CPU:
> this seems very reasonable.  The fly in the ointment is that, even
> moreso than CPU, memory is not allocated up-front and by the time
> memory pressure gets significant it's very possible make has already
> spawned too many parallel jobs.  Also, detecting amount of memory in
> use vs. free is harder to do, even among different POSIX systems.
>
> Nevertheless, it would still be a useful feature IMO.

That sounds reasonable. But since we do not know how much memory the
parallel jobs will consume, any limit will be a heuristic. There may be
some cases, like my LLVM example, which require lots of memory to link,
where the user will need to override the default '-j'. That is fine with
me, though, as long as the heuristic is good enough for most situations.

Regarding the lack of portable methods for getting the number of CPUs
and free/used memory, we are fully in agreement. It is a pain.

You may find lib/nproc.[ch] and lib/physmem.[ch] in Gnulib useful.

Collin
