Mirror of https://github.com/torvalds/linux.git
e260be673a
This patch implements a new version of RCU which allows its read-side critical sections to be preempted. It uses a set of counter pairs to keep track of the read-side critical sections and flips them once all tasks have exited their read-side critical sections. The details of this implementation can be found in this paper - http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf - and in this LWN article - http://lwn.net/Articles/253651/

This patch was developed as part of the -rt kernel development and is meant to provide better latencies, since RCU read-side critical sections no longer disable preemption. As a consequence of keeping track of RCU readers, the read side has a slight overhead (possible optimizations are discussed in the paper). This implementation co-exists with the "classic" RCU implementation and can be switched to at compile time. It also includes RCU tracing, with statistics summarized in debugfs.

[ akpm@linux-foundation.org: build fixes on non-preempt architectures ]

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@us.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
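As a rough illustration of the counter-pair scheme the message describes, here is a minimal user-space C sketch. It is not the kernel's code: the names are invented, and the per-task storage of the reader's phase, the per-CPU infrastructure, the memory barriers and the grace-period state machine are all omitted; only the count-in/count-out-and-flip idea is shown.

/*
 * Sketch only: readers count themselves in on the counter selected by the
 * current flip phase and count themselves out on the same phase; the update
 * side flips the phase and then waits for the old phase's counters to drain
 * to zero, at which point every pre-existing reader has finished.
 */
#define SKETCH_NR_CPUS 4

struct ctr_pair {
        int ctr[2];                     /* one counter per flip phase */
};

static struct ctr_pair cpu_ctrs[SKETCH_NR_CPUS];
static int completed;                   /* low bit selects the active phase */

int sketch_rcu_read_lock(int cpu)
{
        int idx = completed & 0x1;

        cpu_ctrs[cpu].ctr[idx]++;       /* the real code adds memory barriers */
        return idx;                     /* the reader remembers its phase */
}

void sketch_rcu_read_unlock(int cpu, int idx)
{
        cpu_ctrs[cpu].ctr[idx]--;
}

void sketch_rcu_flip_and_wait(void)
{
        int old = completed & 0x1;
        int cpu, sum;

        completed++;                    /* new readers use the other counter */
        do {
                sum = 0;
                for (cpu = 0; cpu < SKETCH_NR_CPUS; cpu++)
                        sum += cpu_ctrs[cpu].ctr[old];
        } while (sum != 0);             /* all pre-flip readers have exited */
}

int main(void)
{
        int idx = sketch_rcu_read_lock(0);

        sketch_rcu_read_unlock(0, idx);
        sketch_rcu_flip_and_wait();     /* returns at once: no readers remain */
        return 0;
}

In the real implementation a reader may be preempted and resumed on another CPU, so the unlock can decrement a different CPU's counter than the lock incremented; only the per-phase sum across all CPUs matters, which is exactly what the wait loop checks.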
76 lines, 2.6 KiB, Plaintext
choice
        prompt "Preemption Model"
        default PREEMPT_NONE

config PREEMPT_NONE
        bool "No Forced Preemption (Server)"
        help
          This is the traditional Linux preemption model, geared towards
          throughput. It will still provide good latencies most of the
          time, but there are no guarantees and occasional longer delays
          are possible.

          Select this option if you are building a kernel for a server or
          scientific/computation system, or if you want to maximize the
          raw processing power of the kernel, irrespective of scheduling
          latencies.

config PREEMPT_VOLUNTARY
        bool "Voluntary Kernel Preemption (Desktop)"
        help
          This option reduces the latency of the kernel by adding more
          "explicit preemption points" to the kernel code. These new
          preemption points have been selected to reduce the maximum
          latency of rescheduling, providing faster application reactions,
          at the cost of slightly lower throughput.

          This allows reaction to interactive events by allowing a
          low priority process to voluntarily preempt itself even if it
          is in kernel mode executing a system call. This allows
          applications to run more 'smoothly' even when the system is
          under load.

          Select this if you are building a kernel for a desktop system.

config PREEMPT
        bool "Preemptible Kernel (Low-Latency Desktop)"
        help
          This option reduces the latency of the kernel by making
          all kernel code (that is not executing in a critical section)
          preemptible. This allows reaction to interactive events by
          permitting a low priority process to be preempted involuntarily
          even if it is in kernel mode executing a system call and would
          otherwise not be about to reach a natural preemption point.
          This allows applications to run more 'smoothly' even when the
          system is under load, at the cost of slightly lower throughput
          and a slight runtime overhead to kernel code.

          Select this if you are building a kernel for a desktop or
          embedded system with latency requirements in the milliseconds
          range.

endchoice
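To make the help texts above concrete, the following sketch shows roughly what an "explicit preemption point" and a "critical section" look like in kernel code. cond_resched(), preempt_disable() and preempt_enable() are real kernel primitives; everything else here (the function names, the item type, the counter) is invented for the example and is not code from the tree.

#include <linux/preempt.h>              /* preempt_disable()/preempt_enable() */
#include <linux/sched.h>                /* cond_resched() */

struct item;                            /* hypothetical payload type */
void handle_one_item(struct item *it);  /* hypothetical worker, defined elsewhere */

static int sketch_per_cpu_count;        /* stand-in for per-CPU state */

/*
 * Under PREEMPT_NONE a long loop like this keeps the CPU until it returns or
 * blocks, unless it contains an explicit preemption point such as the
 * cond_resched() below.  PREEMPT_VOLUNTARY effectively adds many more such
 * points by letting existing might_sleep() annotations reschedule.
 */
void sketch_process_many(struct item *items, int n)
{
        int i;

        for (i = 0; i < n; i++) {
                handle_one_item(&items[i]);
                cond_resched();         /* yield if a higher-priority task
                                           is waiting to run */
        }
}

/*
 * Under CONFIG_PREEMPT almost any kernel code can be preempted between two
 * instructions, so code that relies on staying on the same CPU (per-CPU
 * data, for example) marks a non-preemptible critical section explicitly.
 */
void sketch_touch_per_cpu_state(void)
{
        preempt_disable();              /* enter the critical section */
        sketch_per_cpu_count++;
        preempt_enable();               /* any pending reschedule runs here */
}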
config PREEMPT_BKL
        bool "Preempt The Big Kernel Lock"
        depends on SMP || PREEMPT
        default y
        help
          This option reduces the latency of the kernel by making the
          big kernel lock preemptible.

          Say Y here if you are building a kernel for a desktop system.
          Say N if you are unsure.
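For PREEMPT_BKL, the lock in question is the one legacy code still takes via lock_kernel()/unlock_kernel(). The snippet below is illustrative only; the function and the comment on the option's effect are a sketch, not code from the tree.

#include <linux/smp_lock.h>             /* lock_kernel()/unlock_kernel() */

/*
 * Illustrative only: some legacy paths (old ioctl handlers, parts of older
 * filesystems) still serialize on the Big Kernel Lock.  With PREEMPT_BKL=y a
 * task can be preempted while holding the BKL, so runnable tasks that do not
 * need the BKL get the CPU sooner; with PREEMPT_BKL=n, holding the lock keeps
 * the task from being preempted until it drops it.
 */
void sketch_legacy_operation(void)
{
        lock_kernel();                  /* take the BKL, as legacy code does */
        /* ... long-running legacy work, preemptible when PREEMPT_BKL=y ... */
        unlock_kernel();
}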
config RCU_TRACE
        bool "Enable tracing for RCU - currently stats in debugfs"
        select DEBUG_FS
        default y
        help
          This option provides tracing in RCU which presents stats
          in debugfs for debugging RCU implementation.

          Say Y here if you want to enable RCU tracing
          Say N if you are unsure.
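With RCU_TRACE enabled, the statistics land in debugfs. Below is a minimal user-space sketch for reading them, assuming debugfs is mounted at /sys/kernel/debug; the file name is a placeholder, since the exact files under the rcu/ directory depend on the kernel version and the RCU implementation selected.

#include <stdio.h>

int main(void)
{
        /* Placeholder path; list /sys/kernel/debug/rcu/ on the running
         * kernel to see which trace files actually exist. */
        const char *path = "/sys/kernel/debug/rcu/rcugp";
        char line[256];
        FILE *f = fopen(path, "r");

        if (!f) {
                perror(path);
                return 1;
        }
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
        return 0;
}

If debugfs is not mounted, "mount -t debugfs none /sys/kernel/debug" is the usual first step.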