Hi wuchang,

If you are using Hadoop 2.7+, you can use the following parameters to limit the number of simultaneously running map/reduce tasks per MapReduce application:
* mapreduce.job.running.map.limit (default: 0, for no limit)
* mapreduce.job.running.reduce.limit (default: 0, for no limit)

Regards,
Akira

On 2017/06/26 11:24, wuchang wrote:
For a MapReduce job, I have restricted the memory of mappers and reducers with mapreduce.map.memory.mb=1G and mapreduce.reduce.memory.mb=4G. But these parameters only restrict the memory of each individual mapper or reducer task, not the number of mappers or reducers that can run in parallel. When the job launched, it consumed about 50G of memory, which is not what I want; I found that many containers were launched in parallel. What I want is to restrict the memory or CPU resources the job can consume at any given moment, for example, a maximum of 10G of memory for the whole application at any time. What can I do?
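To connect the two messages: each running map task reserves mapreduce.map.memory.mb, so capping the number of concurrently running tasks also caps the job's aggregate memory at any moment. A back-of-the-envelope sketch (the per-task memory values come from the question; the limit value 10 is an illustrative choice, not from the thread):

```python
# Aggregate memory a job can hold at any moment is bounded by
# (per-task memory) x (running-task limit) for each task type.
map_memory_mb = 1024          # mapreduce.map.memory.mb = 1G (from the question)
reduce_memory_mb = 4096       # mapreduce.reduce.memory.mb = 4G (from the question)
running_map_limit = 10        # mapreduce.job.running.map.limit (illustrative value)
running_reduce_limit = 0      # default 0 means unlimited

# With the limit set, the map phase can hold at most this much memory at once:
map_cap_mb = map_memory_mb * running_map_limit
print(map_cap_mb)  # 10240 MB, i.e. roughly the 10G the asker wants
```

These properties can also be passed per job on the command line, e.g. -Dmapreduce.job.running.map.limit=10, provided the job's driver parses generic options (ToolRunner/GenericOptionsParser).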
