Hi Greenhorn,

I think there is a misunderstanding here. What I meant in my previous reply is 
that we could set a queue’s max-cap to 100%, and that’s acceptable. But there 
has never been any “general” recommendation for this parameter: 100% is just 
one of the acceptable values, with no “special” standing. Still, take the 
example we had at the last point.

We can set a1’s max-cap to 100% and a2’s max-cap to 100%. What this setting 
means is that a1 and a2 can each get “at most” the full resources of their 
parent queue (which is queue a) when there are idle resources, but the actual 
situation may not “allow” both of them to do so at the same time. Consider the 
scenario below.

The cluster is empty and all resources are available. We set the max-cap of 
queue a to 50%, the max-cap of queue b to 100%, the max-cap of a1 to 100%, and 
the max-cap of a2 to 100%.
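For reference, here is a sketch of how this scenario might look in capacity-scheduler.xml, assuming a and b sit directly under root and a1 and a2 under a (the guaranteed-capacity properties are omitted for brevity):

```xml
<!-- Hypothetical fragment matching the scenario above -->
<property>
  <name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.b.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.a1.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.a2.maximum-capacity</name>
  <value>100</value>
</property>
```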
Then we submit app1 into a1. If there is no other limit, queue a will get 50% 
of the whole cluster’s resources, and app1 will utilize 100% of a1’s resources, 
which is 1 * 50% * 100% = 50% of the whole cluster. This is the effect of a1’s 
max-cap of 100% here. Then we submit another app2 into a2; now app1 in a1 may 
not be able to hold all 100% of a’s resources if pre-emption is allowed here.

But no matter what the conditions are and what other parameters have been set, 
queue a will never use capacity beyond 50% of its parent queue, even if we get 
100 apps submitted into the sub-queues under queue a while the other 50% of the 
resources sits idle and unused in queue b. That is because the max-cap of 50% 
on queue a is a “hard limit” on the upper bound of how many resources queue a 
is allowed to use.
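To make the arithmetic above concrete, here is a small sketch (not YARN code, just the multiplication) of how a queue’s absolute max capacity follows from the max-cap fractions along its path down from root:

```python
def absolute_max_capacity(max_caps):
    """Multiply the max-cap fractions along the path from root to a queue.

    max_caps: list of fractions (0.0-1.0), one per queue on the path,
    e.g. [0.5, 1.0] for queue a (50%) then a1 (100%).
    """
    result = 1.0
    for frac in max_caps:
        result *= frac
    return result

# Queue a capped at 50% of root, a1 capped at 100% of a:
print(absolute_max_capacity([0.5, 1.0]))  # 0.5 -> at most 50% of the cluster
```

However many apps are submitted under a, no combination of sub-queue max-caps can push this product above queue a’s own 50%.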

This doc may also help you understand how these parameters can be set in YARN:
https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html

Thanks
Zian


On May 21, 2018, at 4:11 PM, Greenhorn Techie <[email protected]> wrote:

Thanks Zian. I am a bit confused now.

When I posted the question on the Hortonworks community, I received a different 
opinion from another community member (I presume a Hortonworks employee):

https://community.hortonworks.com/questions/192228/capacity-scheduler-maximum-capacity.html?childToView=192230#answer-192230

Regarding your last point on having an explicit maximum value for each queue, I 
still don't understand why this needs to be specified if the “general” 
recommendation is to have it at 100% and change it only if the use case is 
different. Given your earlier answer, my understanding is that setting this to 
100% for all queues should ideally cause no harm, given the other set of 
parameters that come into play when multiple queues are involved.

Please correct me if I am wrong.

Thanks


On 22 May 2018 at 00:03:04, Zian Chen ([email protected]) wrote:

Hi Greenhorn,

Actually, it is the latter. The percentage you specify is the upper bound on 
the share of resources this queue can gather from its immediate parent queue. 
For example, take the queue hierarchy below:

      root
     /    \
    a      b
   / \
  a1  a2

If we set the max-cap of a1 to 100%, it means a1 can get at most 100% of the 
resources of its immediate parent a, not of root.
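In capacity-scheduler.xml terms, the hierarchy above could be sketched like this (queue names a, b, a1, a2 are just the example's; capacity properties omitted):

```xml
<!-- Hypothetical fragment defining the example hierarchy -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>a,b</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.queues</name>
  <value>a1,a2</value>
</property>
<property>
  <!-- 100 here is relative to a, the immediate parent, not to root -->
  <name>yarn.scheduler.capacity.root.a.a1.maximum-capacity</name>
  <value>100</value>
</property>
```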

More importantly, there are many factors that limit/control how many resources 
a queue can get inside a cluster, such as guaranteed resource, user-limit, node 
labels, etc. Sometimes applications in a queue can get resources beyond its 
guaranteed resource when the cluster has idle resources available, to support 
elasticity. But max-cap is always a hard limit on how many resources a queue 
can get at most.

So there is no default value of max-cap for any queue; we need this parameter 
to set this hard limit.

Hope this helps.
Thanks

On May 21, 2018, at 2:51 PM, Greenhorn Techie <[email protected]> wrote:

Thanks Zian. Is maximum capacity a global value, i.e. whenever I specify a 
percentage here, does it take from the overall cluster’s capacity, or only from 
the parent queue? I thought it was the former.

Also, if setting to 100% doesn't cause any harm, why is it explicitly mentioned 
as a parameter instead of a default / implied value for any queue?

Thanks



On 21 May 2018 at 22:19:05, Zian Chen ([email protected]) wrote:

In my humble opinion, it’s safe to set maximum capacity to 100% for each queue, 
because the value indicates the maximum percentage of its parent queue’s 
capacity the queue can have, so setting the upper limit to 100% won’t cause any 
hidden danger here.


On May 21, 2018, at 9:04 AM, Greenhorn Techie <[email protected]> wrote:

Hi,

In our setup, we are using the YARN Capacity Scheduler and have many queues set 
up in a hierarchical fashion with well-configured minimum capacities. However, 
I am wondering what the best practice is for setting the maximum capacity 
value, i.e. the parameter yarn.scheduler.capacity.<queue-path>.maximum-capacity.

Is it advisable to have each queue configured with a maximum capacity of 100%, 
or something like 90 to 95% with some leeway for the default queue? In summary, 
what are the best practices to leverage maximum cluster capacity while it is 
available, while honouring the minimum queue capacities?

Thanks
