sched/fair: Propagate asynchronous detach

A task can be asynchronously detached from its cfs_rq when it migrates
between CPUs: the load of the migrating task is then removed from the
source cfs_rq during that cfs_rq's next update. We use this event to set
the propagation flag.
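
For reference, set_tg_cfs_propagate() comes from the earlier
synchronous attach/detach propagation patch in this series; as a rough
sketch (recalled, not the exact upstream hunk), it simply records that
a child's contribution changed so the next PELT update of the owning
sched_entity propagates it:

  static inline void set_tg_cfs_propagate(struct cfs_rq *cfs_rq)
  {
          /* Mark this cfs_rq so the next update of its owning
           * sched_entity propagates the change up the hierarchy. */
          cfs_rq->propagate_avg = 1;
  }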

During load balancing, we take advantage of the update of blocked
load to propagate any pending changes up the hierarchy.

The propagation relies on patch:

  "sched: Fix hierarchical order in rq->leaf_cfs_rq_list"

... which orders children before their parents, ensuring the propagation
is done in a single pass.
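
Concretely, the one-pass walk in update_blocked_averages() then looks
roughly like this (simplified sketch; locking and rq clock handling
omitted):

  for_each_leaf_cfs_rq(rq, cfs_rq) {
          if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, true))
                  update_tg_load_avg(cfs_rq, 0);

          /* Propagate pending load changes to the parent; since
           * children precede parents on leaf_cfs_rq_list, the parent
           * cfs_rq sees this contribution later in the same pass. */
          if (cfs_rq->tg->se[cpu])
                  update_load_avg(cfs_rq->tg->se[cpu], 0);
  }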

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-6-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3219,6 +3219,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 		sub_positive(&sa->load_avg, r);
 		sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);
 		removed_load = 1;
+		set_tg_cfs_propagate(cfs_rq);
 	}
 
 	if (atomic_long_read(&cfs_rq->removed_util_avg)) {
@@ -3226,6 +3227,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 		sub_positive(&sa->util_avg, r);
 		sub_positive(&sa->util_sum, r * LOAD_AVG_MAX);
 		removed_util = 1;
+		set_tg_cfs_propagate(cfs_rq);
 	}
 
 	decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
@@ -6872,6 +6874,10 @@ static void update_blocked_averages(int cpu)
 
 		if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, true))
 			update_tg_load_avg(cfs_rq, 0);
+
+		/* Propagate pending load changes to the parent */
+		if (cfs_rq->tg->se[cpu])
+			update_load_avg(cfs_rq->tg->se[cpu], 0);
 	}
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }