Hi, all,

recently I’ve noticed strange behaviour of the YARN Fair Scheduler: 2 jobs (i.e. 
two simultaneously started Oozie launchers) started in a queue with a small 
weight and were not able to launch their Spark jobs, even though there were 
plenty of free resources in other queues.

In detail:
- Hadoop (2.6, CDH 5.12) YARN with the Fair Scheduler
- a queue (say, small_queue) with a small weight starts 2 Oozie launcher jobs
- the Oozie launcher jobs occupy all of small_queue's capacity (and even exceed 
it by 1 core), and both are ready to submit their Spark jobs (into the same 
queue, small_queue)
- about 1/4 of the cluster's resources are free in other queues (much more than 
the Spark jobs require)
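
For reference, here is a minimal fair-scheduler.xml sketch of the kind of setup 
I mean (queue names and numbers are illustrative, not our exact allocation file):

    <?xml version="1.0"?>
    <allocations>
      <!-- small weight relative to the other queues -->
      <queue name="small_queue">
        <weight>1.0</weight>
        <schedulingPolicy>fair</schedulingPolicy>
      </queue>
      <!-- the rest of the cluster -->
      <queue name="big_queue">
        <weight>10.0</weight>
        <schedulingPolicy>fair</schedulingPolicy>
      </queue>
      <!-- note: no maxResources on small_queue, so as far as I understand
           it should be allowed to grow beyond its fair share while the
           cluster has idle capacity -->
    </allocations>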

Expected behaviour: free resources from the other queues will be given to the 
Oozie launchers (in small_queue) so they can start their Spark jobs.
Actual behaviour: the Spark jobs were never started.
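
For context, each launcher submits its Spark job roughly like this (a sketch; 
the class, jar and resource numbers here are made up, the relevant part is that 
it targets the same small_queue):

    # illustrative submission from inside the Oozie launcher
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --queue small_queue \
      --num-executors 4 \
      --executor-memory 2g \
      --class com.example.OurSparkJob \
      our-spark-job.jar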

Does anybody have an idea what prevented the Spark jobs from launching?
