I just swapped a machine to death by starting one job per CPU on a
48-core machine. The problem was that each job took more than 1/48th of
the memory.
That got me thinking: would it make sense to have a setting in GNU
Parallel that automatically runs 'ulimit' with the relevant amount of
memory, so if you ask for X jobs to be run on a given server, then
each job is only allowed 1/X'th of the memory on that server?
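The ulimit idea can be sketched roughly like this (assuming a Linux host with GNU coreutils; `ulimit -v` caps virtual address space in KiB, which is only a coarse proxy for RAM use, and the `parallel` invocation in the trailing comment is illustrative, not an existing option):

```shell
#!/bin/bash
# Sketch: cap each of N parallel jobs at 1/N of total RAM via ulimit.
# Assumes Linux (/proc/meminfo). ulimit -v limits virtual memory in
# KiB, which over-counts shared pages but is cheap and portable-ish.

jobs=$(nproc)                                   # one job per CPU core
total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
per_job_kib=$(( total_kib / jobs ))

echo "Would limit each of $jobs jobs to $per_job_kib KiB"

# Each job would apply the limit in its own shell before starting, so
# a job that exceeds the cap fails alone instead of swapping the box:
# parallel "ulimit -v $per_job_kib; mycommand {}" ::: input1 input2 ...
```

Because `ulimit` applies per shell, the cap has to be set inside each job's command line rather than once in the parent.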
> I just swapped a machine to death by starting one job per CPU on a
> 48-core machine. The problem was that each job took more than 1/48th
> of the memory.
Definitely been there.
> Would it make sense to have a setting in GNU
> Parallel that automatically run 'ulimit ' with the relevant amount of
On Thu, Aug 9, 2012 at 3:39 PM, Hans Schou wrote:
> 2012/8/9 Ole Tange
>>
>> I just swapped a machine to death by starting one job per CPU on a
>> 48-core machine. The problem was that each job took more than 1/48th
>> of the memory.
>
> Another approach could be to avoid starting jobs when the system is
> swapping.
On Thu, Aug 9, 2012 at 3:24 PM, Rhys Ulerich wrote:
>> Would it make sense to have a setting in GNU
>> Parallel that automatically run 'ulimit ' with the relevant amount of
>> memory, so if you ask for X jobs to be run on a given server, then
>> each job is only allowed 1/X'th of the memory on that server?
2012/8/9 Ole Tange
> I just swapped a machine to death by starting 1 jobs per CPU on a 48
> core machine. The problem was that each job took more than 1/48th of
> the memory.
>
Another approach could be to avoid starting jobs when the system is
swapping.
It takes very few resources to get the swap status.
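One cheap way to make that check, sketched under the assumption of a Linux host (the `swap_activity` helper is hypothetical, not part of GNU Parallel):

```shell
#!/bin/bash
# Sketch: detect active swapping by sampling /proc/vmstat twice.
# Assumes Linux. pswpin/pswpout count pages swapped in/out since
# boot, so a nonzero delta over a short interval means the system
# is swapping *right now*, not just that swap is in use.

swap_activity() {
    awk '/^pswpin|^pswpout/ {sum += $2} END {print sum + 0}' /proc/vmstat
}

before=$(swap_activity)
sleep 1
after=$(swap_activity)

if [ "$after" -gt "$before" ]; then
    echo "swapping"    # a scheduler could hold back new jobs here
else
    echo "idle"        # safe to start the next job
fi
```

Reading two lines of `/proc/vmstat` once per scheduling decision is essentially free, which is what makes this attractive compared to tracking each job's memory use.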