Greetings! As all may know, the 2.6.13 prerelease attempts to make efficient use of the large memories available at runtime. One quirk arises in trying to estimate the amount of memory that might be needed should the user desire to compile files late in a job. Compilation traditionally invokes gcc via a call to system(), which in turn calls fork(). Under Linux, fork() is a copy-on-write implementation, so the memory overhead is just that needed to copy the kernel page tables.

Unfortunately, I have run into circumstances in which the kernel runs out of the memory required to perform a fork() even though memory operations in the running process are within bounds. These circumstances appear erratically under isolated combinations of RAM and swap, and ultimately stem from the kernel's OOM (out-of-memory) implementation.
I've put in a temporary heuristic to leave 15% of apparently available memory free, but this is wasteful. I am considering forking a minimal process at startup just to receive and process gcc invocation requests, as this is by far the most common occurrence of fork() in gcl (though not the only one -- see #'si::socket and #'si::run-process). Anyone have any better ideas?

-- Camm Maguire [email protected]
==========================================================================
"The earth is but one country, and mankind its citizens." -- Baha'u'llah

_______________________________________________
Gcl-devel mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/gcl-devel
