sched/fair: Cure calc_cfs_shares() vs. reweight_entity()
Vincent reported that when running in a cgroup, his root cfs_rq->avg.load_avg dropped to 0 on task idle.

This is because reweight_entity() will now immediately propagate the weight change of the group entity to its cfs_rq, and, as it happens, our approximation (5) for calc_cfs_shares() results in 0 when the group is idle.

Avoid this by using the correct (3) as a lower bound on (5). This way the empty cgroup will slowly decay instead of instantly dropping to 0.

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This commit is contained in:
parent cef27403cb
commit 3d4b60d3e3
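For context, the (3) and (5) in the changelog refer to the numbered approximations in the comment block above calc_cfs_shares(). A rough reconstruction of their shapes (reproduced from memory of that comment, so exact wording and symbols may differ):

/*
 * (3) -- shares_avg, stable but slow to react:
 *
 *                      tg->shares * grq->avg.load_avg
 *   ge->load.weight = --------------------------------
 *                              tg->load_avg
 *
 * (5) -- boosted by grq->load.weight so a task waking into an idle
 *        group ramps up quickly:
 *
 *                            tg->shares * grq->load.weight
 *   ge->load.weight = ----------------------------------------------------
 *                     tg->load_avg - grq->avg.load_avg + grq->load.weight
 *
 * When the group runqueue goes idle, grq->load.weight == 0, so (5) is
 * instantly 0 even though grq->avg.load_avg has not decayed yet; (3)
 * decays smoothly, hence its use as a lower bound.
 */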
@@ -2763,11 +2763,10 @@ static long calc_cfs_shares(struct cfs_rq *cfs_rq)
 	tg_shares = READ_ONCE(tg->shares);
 
 	/*
-	 * This really should be: cfs_rq->avg.load_avg, but instead we use
-	 * cfs_rq->load.weight, which is its upper bound. This helps ramp up
-	 * the shares for small weight interactive tasks.
+	 * Because (5) drops to 0 when the cfs_rq is idle, we need to use (3)
+	 * as a lower bound.
 	 */
-	load = scale_load_down(cfs_rq->load.weight);
+	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
 
 	tg_weight = atomic_long_read(&tg->load_avg);
 
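To make the effect of the one-line fix concrete, here is a minimal standalone sketch, not kernel code: all numbers are made up, scale_load_down() is treated as the identity, and cfs_rq->tg_load_avg_contrib is approximated by grq->avg.load_avg.

#include <stdio.h>

/*
 * Mimics the shares computation in calc_cfs_shares() for one group
 * runqueue, with and without the max() lower bound from this commit.
 */
static long shares(long tg_shares, long tg_load_avg,
		   long grq_load_avg, long grq_weight, int fixed)
{
	long load = grq_weight;
	long tg_weight;

	/* The fix: use grq->avg.load_avg as a lower bound on the load. */
	if (fixed && grq_load_avg > load)
		load = grq_load_avg;

	/* tg_load_avg' = tg->load_avg - contrib + load */
	tg_weight = tg_load_avg - grq_load_avg + load;

	if (tg_weight)
		return tg_shares * load / tg_weight;
	return tg_shares;
}

int main(void)
{
	/*
	 * Group just went idle on this CPU: load.weight == 0, but
	 * avg.load_avg has not decayed yet (still 400 of a 2048 total).
	 */
	printf("old: %ld\n", shares(1024, 2048, 400, 0, 0)); /* 0: instant drop */
	printf("new: %ld\n", shares(1024, 2048, 400, 0, 1)); /* 200: tracks load_avg */
	return 0;
}

Built with any C compiler, this prints 0 for the old computation and a value proportional to grq->avg.load_avg for the new one, so the idle group's shares decay with the average instead of collapsing immediately.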