Sunil,

Thank you for the answer.

This is HDP 3.1 based on Hadoop 3.1.1.
I believe no preemption defaults were changed. The only change was enabling
preemption (monitor.enabled) in Ambari, but I can get the full XML for you
if that's helpful. I'll get back to you on that.
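In the meantime, here is a sketch of the preemption-related properties as I believe they stand, assuming stock Hadoop 3.1.1 defaults with only the monitor enabled (I'll verify against the actual XML):

```xml
<!-- yarn-site.xml: preemption monitor; the only non-default should be monitor.enable -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
<!-- The rest are Hadoop 3.1.1 defaults, listed for reference -->
<property>
  <!-- how often the preemption policy runs, in ms -->
  <name>yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval</name>
  <value>3000</value>
</property>
<property>
  <!-- grace period before a marked container is killed, in ms -->
  <name>yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill</name>
  <value>15000</value>
</property>
<property>
  <!-- at most this fraction of cluster resources is preempted per round -->
  <name>yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round</name>
  <value>0.1</value>
</property>
<property>
  <!-- dead zone: over-capacity usage below this fraction is ignored -->
  <name>yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity</name>
  <value>0.1</value>
</property>
<property>
  <!-- only this fraction of the computed imbalance is reclaimed per round -->
  <name>yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor</name>
  <value>0.2</value>
</property>
```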

Cheers,
Lars



On Wed, Oct 9, 2019 at 7:57 PM Sunil Govindan <[email protected]> wrote:

> Hi,
>
> Your expectation for the scenario you described is more or less correct.
> A bit more information is needed to get a clearer picture:
> 1. Hadoop version
> 2. capacity-scheduler.xml (to be precise, all preemption-related configs
> that were added)
>
> - Sunil
>
> On Wed, Oct 9, 2019 at 6:23 PM Lars Francke <[email protected]>
> wrote:
>
>> Hi,
>>
>> I've got a question about behavior we're seeing.
>>
>> Two queues with the CapacityScheduler, preemption enabled, and 50% of
>> resources assigned to each (happy to provide more config if needed).
>>
>> Submit a job to queue 1, which uses 100% of the cluster.
>> Submit a job to queue 2, which doesn't get allocated because there are
>> not enough resources left even for its AM.
>>
>> I'd have expected this to trigger preemption, but it doesn't happen.
>> Is this the expected behavior?
>>
>> Second question:
>> Once we get the second job running, it only gets three containers even
>> though it requested six. Utilization at this point is still "unfair", i.e.
>> queue 1 holds ~75% of resources and queue 2 ~25%. We saw two preemptions
>> happen to get to this 25% utilization, but then no more.
>>
>> Any idea why this could be happening?
>>
>> Thank you!
>>
>> Lars
>>
>
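PS: For concreteness, the two-queue layout described in the thread above
looks roughly like this in capacity-scheduler.xml (queue names are
placeholders, not our actual config):

```xml
<!-- capacity-scheduler.xml sketch: two queues at 50% each -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>queue1,queue2</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.queue1.capacity</name>
  <value>50</value>
</property>
<property>
  <!-- allowing growth to 100% is what lets queue 1 fill the whole cluster -->
  <name>yarn.scheduler.capacity.root.queue1.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.queue2.capacity</name>
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.queue2.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <!-- default is false, i.e. containers in this queue may be preempted -->
  <name>yarn.scheduler.capacity.root.queue1.disable_preemption</name>
  <value>false</value>
</property>
```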