On 12 May 2016 at 21:42, Yuyang Du <[email protected]> wrote:
> On Thu, May 12, 2016 at 03:31:27AM -0700, tip-bot for Peter Zijlstra wrote:
>> Commit-ID:  1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
>> Gitweb:     
>> http://git.kernel.org/tip/1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
>> Author:     Peter Zijlstra <[email protected]>
>> AuthorDate: Fri, 6 May 2016 12:21:23 +0200
>> Committer:  Ingo Molnar <[email protected]>
>> CommitDate: Thu, 12 May 2016 09:55:33 +0200
>>
>> sched/fair: Clean up scale confusion
>>
>> Wanpeng noted that the scale_load_down() in calculate_imbalance() was
>> weird. I agree, it should be SCHED_CAPACITY_SCALE, since we're going
>> to compare against busiest->group_capacity, which is in [capacity]
>> units.

In fact, load_above_capacity is only about load, not about capacity.

load_above_capacity -= busiest->group_capacity is an optimization (maybe
a wrong one) of:

load_above_capacity -= busiest->group_capacity * SCHED_LOAD_SCALE /
                       SCHED_CAPACITY_SCALE

so we subtract load from load.

>>
>> Reported-by: Wanpeng Li <[email protected]>
>> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
>> Cc: Linus Torvalds <[email protected]>
>> Cc: Mike Galbraith <[email protected]>
>> Cc: Morten Rasmussen <[email protected]>
>> Cc: Peter Zijlstra <[email protected]>
>> Cc: Thomas Gleixner <[email protected]>
>> Cc: Yuyang Du <[email protected]>
>> Cc: [email protected]
>> Signed-off-by: Ingo Molnar <[email protected]>
>
> It is good that this issue is addressed and the patch merged; however,
> for the record, Vincent already had a solution for this, and we had a
> patch including other cleanups (the latest version is:
> https://lkml.org/lkml/2016/5/3/925).
> And as far as I can tell, Ben first pointed this out (and we then
> attempted to address it).
