Debugging memory usage

2013-06-23 Thread Bob Bell
I have a fairly involved script that handles some build management 
tasks, which can run for as long as several hours.  I've recently seen 
it fail, primarily with fork failures caused by insufficient memory.  
I took a look and saw the bash process consuming as much as 3+ GB of 
memory.  I'm not doing anything where I'd expect to be consuming that 
much memory.

Does anyone have suggestions on ways to debug the use of memory, so 
I can identify and hopefully eliminate the offending code?  Due to the 
complexity and lengthy runtime of the script, it's unlikely I'll be able 
to boil it down to a trivial case.

Thanks,
Bob



Re: Debugging memory usage

2013-06-24 Thread Bob Bell
On Mon, Jun 24, 2013 at 11:45:19AM -0700, John Reiser wrote:
> > I took a look and saw the bash process consuming as much as 3+ GB of 
> > memory.  I'm not doing anything where I'd expect to be consuming that 
> > much memory.
> 
> As a workaround, try using "ulimit -v" to restrict the virtual memory
> space of the shell itself.  (For invoking some child processes, it may
> be necessary to use an intermediate shell which raises the limit before
> exec-ing the child.)  It is not uncommon for a process (not just bash)
> to allocate until refused, and only then think about free()ing or
> collecting garbage.

Thanks, I'll look into that.
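
For anyone following along, John's workaround might look roughly like
the sketch below.  The 512 MiB cap and the "big_child" command are
placeholders, not anything from my script; note that "ulimit -v" takes
its argument in KiB on Linux.

```shell
# Cap the shell's own virtual address space (soft limit only) so a
# runaway allocation fails fast instead of exhausting the machine.
# Units for "ulimit -v" are KiB; 524288 KiB = 512 MiB (a placeholder).
ulimit -S -v 524288

# A child that legitimately needs more memory can be run from an
# intermediate subshell that raises the soft limit back up (permitted
# as long as the hard limit allows) before exec-ing the child.
# "big_child" is a hypothetical command standing in for a real one.
( ulimit -S -v unlimited; exec big_child )
```

Only the soft limit is lowered here, which is what makes raising it
again in the subshell possible without privileges.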

I did narrow it down to the case of some new code that spawns a couple
of tasks in parallel (i.e., in background), and then polls until they
are done, and then "wait"s on them to collect the exit status.  I'm not
trying to collect the output, though it should be redirected to a log
file.  If anyone knows of anything I should look out for, I'll 
certainly listen to suggestions.  I might try to see if I can isolate the 
problem to my "spawn-and-wait"-related code.
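
For clarity, the spawn-and-wait pattern I described is roughly the
following sketch.  task_a and task_b are stand-in functions (here just
sleeps), not my real build tasks, and the log file names are made up.

```shell
#!/usr/bin/env bash
# Stand-ins for the real background tasks:
task_a() { sleep 1; }
task_b() { sleep 2; }

# Spawn both tasks in the background, redirecting output to logs.
task_a > a.log 2>&1 & pid_a=$!
task_b > b.log 2>&1 & pid_b=$!

# Poll until both background jobs have exited...
while kill -0 "$pid_a" 2>/dev/null || kill -0 "$pid_b" 2>/dev/null; do
    sleep 1
done

# ...then "wait" on each PID to collect its exit status.  This works
# even after the job has exited, because bash remembers the statuses
# of its terminated background jobs.
wait "$pid_a"; status_a=$?
wait "$pid_b"; status_b=$?
echo "task_a=$status_a task_b=$status_b"
```

Running the sketch prints "task_a=0 task_b=0" since both stand-in
tasks exit cleanly.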

-- Bob