Hi,

We are using Hive (on MR) on a small Hadoop cluster (13 NodeManagers). Sometimes a user submits a complex Hive query that takes nearly all of the containers from YARN, making subsequently submitted applications run very slowly. It may take several minutes for a following application to get any containers to run.
So my questions are:

1. Can YARN / MR limit the maximum number of containers an MR job can request?
2. Can YARN limit the maximum time an application can run, after which YARN will automatically kill it?

Any suggestion is helpful!

PS: we are using the Fair Scheduler with two queues, and preemption is disabled:

1. urgent (weight 10)
2. normal (weight 1, most jobs run here)

Thanks
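For question 1, one common approach (a sketch, not a tested config for this cluster) is to cap the "normal" queue in the Fair Scheduler allocation file so a single heavy job cannot take the whole cluster. The `maxResources` and `maxRunningApps` queue properties are standard Fair Scheduler settings; the actual memory/vcore figures below are illustrative and must be sized to the real cluster:

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml (allocation file): cap the "normal" queue so a
     single heavy Hive query cannot starve everything else.
     Resource numbers here are placeholders, not recommendations. -->
<allocations>
  <queue name="urgent">
    <weight>10</weight>
  </queue>
  <queue name="normal">
    <weight>1</weight>
    <!-- hard ceiling on total resources this queue may hold at once -->
    <maxResources>80000 mb, 40 vcores</maxResources>
    <!-- limit concurrent apps so one query cannot queue out the rest -->
    <maxRunningApps>5</maxRunningApps>
  </queue>
</allocations>
```

On the MR side, newer Hadoop 2.x releases also have per-job limits on concurrently running tasks (`mapreduce.job.running.map.limit` / `mapreduce.job.running.reduce.limit`), which throttle how many containers a single job holds at a time. For question 2, I am not aware of a built-in per-application time limit in the Fair Scheduler; long-running apps can be killed manually (or from a watchdog script) with `yarn application -kill <applicationId>`.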
