sched/deadline: Support DL task migration during CPU hotplug

I observed that DL tasks can't be migrated to other CPUs during CPU
hotplug; in addition, the task may or may not start running again when
the CPU is brought back online.

The root cause is that DL tasks are throttled and removed from the DL
rq after consuming all of their budget, so the stop task can't pick
them up from the DL rq and migrate them to other CPUs during hotplug.

To reproduce:

  schedtool -E -t 50000:100000 -e ./test

Here './test' is just a simple CPU-bound for loop (a minimal sketch
follows the commands below), and the reservation gives it 50% of the
CPU bandwidth. Then observe which CPU the test task is running on and
take that CPU offline:

  echo 0 > /sys/devices/system/cpu/cpuN/online
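
For reference, a minimal sketch of such a './test' program (purely
illustrative; the original test program is not part of this commit,
and any CPU-bound loop will do):

  /* test.c: build with 'gcc -o test test.c' */
  int main(void)
  {
          volatile unsigned long i = 0;

          for (;;)        /* spin so the runtime budget is always fully consumed */
                  i++;
  }

The PSR column of 'ps -o pid,psr,comm -C test' shows which CPU the
task is currently running on.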

This patch adds support for migrating DL tasks during CPU hotplug:
when the DL timer fires and the task's current rq is offline, find the
most suitable later-deadline rq and move the task there.

If no suitable later-deadline rq can be found, fall back to any
eligible online CPU so that the deadline task comes back to us, and
the push/pull mechanism should then move it around properly.

Suggested-and-Acked-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1427411315-4298-1-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
@@ -218,6 +218,52 @@ static inline void set_post_schedule(struct rq *rq)
 	rq->post_schedule = has_pushable_dl_tasks(rq);
 }
 
+static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq);
+
+static void dl_task_offline_migration(struct rq *rq, struct task_struct *p)
+{
+	struct rq *later_rq = NULL;
+	bool fallback = false;
+
+	later_rq = find_lock_later_rq(p, rq);
+
+	if (!later_rq) {
+		int cpu;
+
+		/*
+		 * If we cannot preempt any rq, fall back to pick any
+		 * online cpu.
+		 */
+		fallback = true;
+		cpu = cpumask_any_and(cpu_active_mask, tsk_cpus_allowed(p));
+		if (cpu >= nr_cpu_ids) {
+			/*
+			 * Fail to find any suitable cpu.
+			 * The task will never come back!
+			 */
+			BUG_ON(dl_bandwidth_enabled());
+
+			/*
+			 * If admission control is disabled we
+			 * try a little harder to let the task
+			 * run.
+			 */
+			cpu = cpumask_any(cpu_active_mask);
+		}
+		later_rq = cpu_rq(cpu);
+		double_lock_balance(rq, later_rq);
+	}
+
+	deactivate_task(rq, p, 0);
+	set_task_cpu(p, later_rq->cpu);
+	activate_task(later_rq, p, ENQUEUE_REPLENISH);
+
+	if (!fallback)
+		resched_curr(later_rq);
+
+	double_unlock_balance(rq, later_rq);
+}
+
 #else
 
 static inline
@@ -536,6 +582,17 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 	sched_clock_tick();
 	update_rq_clock(rq);
 
+#ifdef CONFIG_SMP
+	/*
+	 * If we find that the rq the task was on is no longer
+	 * available, we need to select a new rq.
+	 */
+	if (unlikely(!rq->online)) {
+		dl_task_offline_migration(rq, p);
+		goto unlock;
+	}
+#endif
+
 	/*
 	 * If the throttle happened during sched-out; like:
 	 *