/*
 * Only give sleepers 50% of their service deficit. This allows
 * them to run sooner, but does not allow tons of sleepers to
 * rip the spread apart.
 */
SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)
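
/*
 * Illustrative, self-contained sketch of the policy above (an
 * assumption-laden model, not part of this file and not the in-tree
 * code, which lives in place_entity() in kernel/sched/fair.c): a
 * waking sleeper is credited at most one latency period against
 * min_vruntime, and GENTLE_FAIR_SLEEPERS halves that credit.
 */
#include <stdbool.h>
#include <stdint.h>

static uint64_t place_waking_sleeper(uint64_t min_vruntime,
				     uint64_t latency_ns,
				     uint64_t se_vruntime,
				     bool gentle_fair_sleepers)
{
	uint64_t thresh = latency_ns;
	uint64_t vruntime;

	/* Only give sleepers 50% of their service deficit. */
	if (gentle_fair_sleepers)
		thresh >>= 1;

	/* Credit the sleep, but never below zero. */
	vruntime = min_vruntime > thresh ? min_vruntime - thresh : 0;

	/* An entity must not gain time by being placed backwards. */
	return se_vruntime > vruntime ? se_vruntime : vruntime;
}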

/*
 * Place new tasks ahead so that they do not starve already running
 * tasks
 */
SCHED_FEAT(START_DEBIT, true)
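
/*
 * Illustrative sketch of the placement debit (not the in-tree code;
 * slice_ns stands in for what sched_vslice() would compute): a newly
 * forked entity starts one slice ahead of min_vruntime, so it queues
 * behind tasks that are already running.
 */
#include <stdbool.h>
#include <stdint.h>

static uint64_t place_new_task(uint64_t min_vruntime, uint64_t slice_ns,
			       bool start_debit)
{
	uint64_t vruntime = min_vruntime;

	/* Charge the newcomer one slice up front. */
	if (start_debit)
		vruntime += slice_ns;

	return vruntime;
}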

/*
 * Prefer to schedule the task we woke last (assuming it failed
 * wakeup-preemption), since it's likely going to consume data we
 * touched; this increases cache locality.
 */
SCHED_FEAT(NEXT_BUDDY, false)
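
/*
 * Sketch of recording a "next buddy" hint at wakeup time (an
 * illustrative model with made-up types; the kernel does this with
 * set_next_buddy() in kernel/sched/fair.c): if the freshly woken
 * task did not preempt the current one, remember it so that the
 * next pick can favour it.
 */
#include <stdbool.h>
#include <stddef.h>

struct toy_rq {
	void *curr;		/* currently running task */
	void *next_buddy;	/* task we woke last */
	void *last_buddy;	/* task that was preempted last */
};

static void note_failed_wakeup_preemption(struct toy_rq *rq, void *woken,
					  bool next_buddy_feature)
{
	/* A hint only; the picker may still ignore it. */
	if (next_buddy_feature)
		rq->next_buddy = woken;
}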

/*
 * Prefer to schedule the task that ran last (when we did
 * wake-preempt), as it will likely touch the same data; this
 * increases cache locality.
 */
SCHED_FEAT(LAST_BUDDY, true)
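
/*
 * Sketch of consulting the buddy hints at pick time (illustrative;
 * the in-tree pick_next_entity() also verifies that a buddy has not
 * drifted too far ahead in vruntime, which is omitted here).
 */
#include <stdbool.h>
#include <stddef.h>

struct toy_pick_state {
	void *leftmost;		/* entity with the smallest vruntime */
	void *next_buddy;	/* task we woke last (NEXT_BUDDY) */
	void *last_buddy;	/* task that ran before the wakeup (LAST_BUDDY) */
};

static void *pick_next_sketch(const struct toy_pick_state *st,
			      bool next_buddy_feature,
			      bool last_buddy_feature)
{
	/* The freshly woken task likely consumes data we just touched. */
	if (next_buddy_feature && st->next_buddy)
		return st->next_buddy;

	/* The task that ran last likely still has warm caches. */
	if (last_buddy_feature && st->last_buddy)
		return st->last_buddy;

	/* Otherwise fall back to strict fairness. */
	return st->leftmost;
}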

/*
 * Consider buddies to be cache hot; this decreases the likelihood of
 * a cache buddy being migrated away and increases cache locality.
 */
SCHED_FEAT(CACHE_HOT_BUDDY, true)
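
/*
 * Sketch of how the buddy hint can veto a migration (illustrative;
 * the in-tree check is part of task_hot() in kernel/sched/fair.c,
 * which also considers how recently the task ran, roughly as below).
 */
#include <stdbool.h>

static bool task_looks_cache_hot(bool cache_hot_buddy_feature,
				 bool task_is_buddy,
				 long long ns_since_task_ran,
				 long long migration_cost_ns)
{
	/* Buddies are assumed to still have warm caches; keep them put. */
	if (cache_hot_buddy_feature && task_is_buddy)
		return true;

	/* Otherwise use a simple recency heuristic. */
	return ns_since_task_ran < migration_cost_ns;
}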

/*
 * Allow wakeup-time preemption of the current task:
 */
SCHED_FEAT(WAKEUP_PREEMPTION, true)
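
/*
 * Sketch of the decision this switch gates (illustrative and
 * simplified; the real check is check_preempt_wakeup() together with
 * wakeup_preempt_entity(), which scales the granularity by task
 * weight).
 */
#include <stdbool.h>
#include <stdint.h>

static bool should_preempt_on_wakeup(bool wakeup_preemption_feature,
				     int64_t curr_vruntime,
				     int64_t woken_vruntime,
				     int64_t wakeup_granularity_ns)
{
	/* With the feature off, the current task keeps the CPU. */
	if (!wakeup_preemption_feature)
		return false;

	/* Preempt only if the woken task trails by more than one granule. */
	return curr_vruntime - woken_vruntime > wakeup_granularity_ns;
}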
SCHED_FEAT(HRTICK, false)
SCHED_FEAT(DOUBLE_TICK, false)
SCHED_FEAT(LB_BIAS, true)

/*
 * Decrement CPU capacity based on time not spent running tasks
 */
SCHED_FEAT(NONTASK_CAPACITY, true)
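
/*
 * Sketch of scaling capacity by non-task time (illustrative; the
 * in-tree version, scale_rt_capacity()/update_cpu_capacity() in
 * kernel/sched/fair.c, works on decayed averages rather than the
 * raw deltas used here).
 */
#include <stdint.h>

static uint64_t scale_cpu_capacity(uint64_t max_capacity,
				   uint64_t wall_time_ns,
				   uint64_t non_task_time_ns)
{
	uint64_t free_ns;

	/* Time eaten by IRQ and RT work is not available to CFS tasks. */
	if (wall_time_ns == 0 || non_task_time_ns >= wall_time_ns)
		return 1;	/* never report zero capacity */

	free_ns = wall_time_ns - non_task_time_ns;
	return max_capacity * free_ns / wall_time_ns;
}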

/*
 * Queue remote wakeups on the target CPU and process them
 * using the scheduler IPI. Reduces rq->lock contention/bounces.
 */
SCHED_FEAT(TTWU_QUEUE, true)
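
/*
 * Sketch of queueing a remote wakeup instead of taking the remote
 * runqueue lock (illustrative; the kernel uses an llist plus a
 * reschedule IPI, drained in sched_ttwu_pending()). The IPI is
 * stubbed out and the types are made up for the example.
 */
#include <stdatomic.h>
#include <stddef.h>

struct wake_entry {
	struct wake_entry *next;
	int task_id;
};

struct toy_remote_rq {
	_Atomic(struct wake_entry *) wake_list;	/* pending remote wakeups */
	int cpu;
};

static void send_resched_ipi(int cpu)
{
	(void)cpu;	/* stand-in for the real IPI */
}

/* Runs on the waking CPU and never touches the remote rq lock. */
static void queue_remote_wakeup(struct toy_remote_rq *rq,
				struct wake_entry *e)
{
	struct wake_entry *old = atomic_load(&rq->wake_list);

	/* Lock-free push onto the target CPU's pending list. */
	do {
		e->next = old;
	} while (!atomic_compare_exchange_weak(&rq->wake_list, &old, e));

	/* Only the entry that finds the list empty needs to kick the CPU. */
	if (!e->next)
		send_resched_ipi(rq->cpu);
}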

/*
 * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
 */
SCHED_FEAT(SIS_AVG_CPU, false)
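
/*
 * Sketch of the scan-limiting heuristic (illustrative; the in-tree
 * check in select_idle_cpu() compares a scaled rq->avg_idle against
 * the domain's avg_scan_cost): if this CPU does not stay idle long
 * enough to pay for a full LLC scan, skip the scan.
 */
#include <stdbool.h>
#include <stdint.h>

static bool worth_scanning_llc(bool sis_avg_cpu_feature,
			       uint64_t avg_idle_ns,
			       uint64_t avg_scan_cost_ns)
{
	if (!sis_avg_cpu_feature)
		return true;

	/* The kernel scales avg_idle first; a plain compare is enough here. */
	return avg_idle_ns > avg_scan_cost_ns;
}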

/*
 * Issue a WARN when we do multiple update_rq_clock() calls
 * in a single rq->lock section. Default disabled because the
 * annotations are not complete.
 */
SCHED_FEAT(WARN_DOUBLE_CLOCK, false)
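
/*
 * Sketch of the double-update detection (illustrative; the kernel
 * tracks this with rq->clock_update_flags and warns via
 * SCHED_WARN_ON()): remember that the clock was already updated in
 * the current rq->lock section and complain about a second update.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_rq_clock {
	bool updated_in_section;	/* set once per locked section */
};

/* Call where rq->lock is taken: a new section, no update seen yet. */
static void clock_section_enter(struct toy_rq_clock *c)
{
	c->updated_in_section = false;
}

static void update_clock(struct toy_rq_clock *c, bool warn_double_clock)
{
	if (warn_double_clock && c->updated_in_section)
		fprintf(stderr, "double update_rq_clock() in one section\n");

	c->updated_in_section = true;
	/* ... read the clock source and advance rq->clock here ... */
}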
sched/rt: Use IPI to trigger RT task push migration instead of pulling
When debugging the latencies on a 40 core box, where we hit 300 to
500 microsecond latencies, I found there was a huge contention on the
runqueue locks.
Investigating it further, running ftrace, I found that it was due to
the pulling of RT tasks.
The test that was run was the following:
cyclictest --numa -p95 -m -d0 -i100
This created a thread on each CPU, that would set its wakeup in iterations
of 100 microseconds. The -d0 means that all the threads had the same
interval (100us). Each thread sleeps for 100us and wakes up and measures
its latencies.
cyclictest is maintained at:
git://git.kernel.org/pub/scm/linux/kernel/git/clrkwllms/rt-tests.git
What happened was another RT task would be scheduled on one of the CPUs
that was running our test, when the other CPU tests went to sleep and
scheduled idle. This caused the "pull" operation to execute on all
these CPUs. Each one of these saw the RT task that was overloaded on
the CPU of the test that was still running, and each one tried
to grab that task in a thundering herd way.
To grab the task, each thread would do a double rq lock grab, grabbing
its own lock as well as the rq of the overloaded CPU. As the sched
domains on this box were rather flat for its size, I saw up to 12 CPUs
block on this lock at once. This caused a ripple effect with the
rq locks especially since the taking was done via a double rq lock, which
means that several of the CPUs had their own rq locks held while trying
to take this rq lock. As these locks were blocked, any wakeups or load
balancing on these CPUs would also block on these locks, and the wait
time escalated.
I've tried various methods to lessen the load, but things like an
atomic counter to only let one CPU grab the task won't work, because
the task may have a limited affinity, and we may pick the wrong
CPU to take that lock and do the pull, to only find out that the
CPU we picked isn't in the task's affinity.
Instead of doing the PULL, I now have the CPUs that want the pull to
send over an IPI to the overloaded CPU, and let that CPU pick what
CPU to push the task to. No more need to grab the rq lock, and the
push/pull algorithm still works fine.
With this patch, the latency dropped to just 150us over a 20 hour run.
Without the patch, the huge latencies would trigger in seconds.
I've created a new sched feature called RT_PUSH_IPI, which is enabled
by default.
When RT_PUSH_IPI is not enabled, the old method of grabbing the rq locks
and having the pulling CPU do the work is implemented. When RT_PUSH_IPI
is enabled, the IPI is sent to the overloaded CPU to do a push.
To enable or disable this at run time:
# mount -t debugfs nodev /sys/kernel/debug
# echo RT_PUSH_IPI > /sys/kernel/debug/sched_features
or
# echo NO_RT_PUSH_IPI > /sys/kernel/debug/sched_features
Update: This original patch would send an IPI to all CPUs in the RT overload
list. But that could theoretically cause the reverse issue. That is, there
could be lots of overloaded RT queues and one CPU lowers its priority. It would
then send an IPI to all the overloaded RT queues and they could then all try
to grab the rq lock of the CPU lowering its priority, and then we have the
same problem.
The latest design sends out only one IPI to the first overloaded CPU. It tries to
push any tasks that it can, and then looks for the next overloaded CPU that can
push to the source CPU. The IPIs stop when all overloaded CPUs that have pushable
tasks that have priorities greater than the source CPU are covered. In case the
source CPU lowers its priority again, a flag is set to tell the IPI traversal to
restart with the first RT overloaded CPU after the source CPU.
Parts-suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joern Engel <joern@purestorage.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150318144946.2f3cc982@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
#ifdef HAVE_RT_PUSH_IPI
/*
 * When many CPUs lower their priorities at the same time and a single
 * CPU has a migratable RT task waiting to run, they would all try to
 * take that CPU's rq lock and could create heavy contention. To avoid
 * this thundering herd, it can be better to send an IPI to that CPU
 * and let it push the RT task to where it should go.
 */
SCHED_FEAT(RT_PUSH_IPI, true)
#endif
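
/*
 * Sketch of the IPI chain described in the changelog above
 * (illustrative; the in-tree code is tell_cpu_to_push() and its
 * irq_work handler in kernel/sched/rt.c, which also handle priority
 * checks and the "source lowered its priority again" restart,
 * omitted here). The IPI is modelled as a direct call and the types
 * are made up for the example.
 */
#include <stdbool.h>

#define TOY_NR_CPUS 8

struct toy_model {
	bool rt_overloaded[TOY_NR_CPUS];	/* has pushable RT tasks */
	int requesting_cpu;			/* CPU that lowered its prio */
};

static void try_to_push_tasks(struct toy_model *m, int cpu);

/* Find the next overloaded CPU after @prev, stopping at the source. */
static int next_overloaded_cpu(const struct toy_model *m, int prev)
{
	int i, cpu;

	for (i = 1; i <= TOY_NR_CPUS; i++) {
		cpu = (prev + i) % TOY_NR_CPUS;
		if (cpu == m->requesting_cpu)
			return -1;	/* full circle: everyone covered */
		if (m->rt_overloaded[cpu])
			return cpu;
	}
	return -1;
}

/* Instead of pulling under a double rq lock, kick one overloaded CPU. */
static void tell_cpu_to_push_sketch(struct toy_model *m)
{
	int cpu = next_overloaded_cpu(m, m->requesting_cpu);

	if (cpu >= 0)
		try_to_push_tasks(m, cpu);	/* stands in for the IPI */
}

/* Runs "on" the overloaded CPU: push locally, then forward the IPI. */
static void try_to_push_tasks(struct toy_model *m, int cpu)
{
	int next;

	/* Push this CPU's pushable RT tasks toward the requesting CPU. */
	m->rt_overloaded[cpu] = false;

	next = next_overloaded_cpu(m, cpu);
	if (next >= 0)
		try_to_push_tasks(m, next);	/* forward the chain */
}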
SCHED_FEAT(FORCE_SD_OVERLAP, false)
SCHED_FEAT(RT_RUNTIME_SHARE, true)
SCHED_FEAT(LB_MIN, false)
SCHED_FEAT(ATTACH_AGE_LOAD, true)