doc: Remove RCU Tasks Rude asynchronous APIs

The call_rcu_tasks_rude() and rcu_barrier_tasks_rude() APIs no longer exist.
This commit therefore removes them from the documentation.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>
commit 0ff92d145a (parent 9a13a324f4)
Author:    Paul E. McKenney <paulmck@kernel.org>
Date:      2024-07-03 19:52:29 -07:00
Committer: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>

4 changed files with 30 additions and 44 deletions

--- a/Documentation/RCU/Design/Requirements/Requirements.rst
+++ b/Documentation/RCU/Design/Requirements/Requirements.rst

@@ -2649,8 +2649,7 @@ those that are idle from RCU's perspective) and then Tasks Rude RCU can
 be removed from the kernel.
 The tasks-rude-RCU API is also reader-marking-free and thus quite compact,
-consisting of call_rcu_tasks_rude(), synchronize_rcu_tasks_rude(),
-and rcu_barrier_tasks_rude().
+consisting solely of synchronize_rcu_tasks_rude().
 
 Tasks Trace RCU
 ~~~~~~~~~~~~~~~

--- a/Documentation/RCU/checklist.rst
+++ b/Documentation/RCU/checklist.rst

@@ -194,14 +194,13 @@ over a rather long period of time, but improvements are always welcome!
 	when publicizing a pointer to a structure that can
 	be traversed by an RCU read-side critical section.
 
-5.	If any of call_rcu(), call_srcu(), call_rcu_tasks(),
-	call_rcu_tasks_rude(), or call_rcu_tasks_trace() is used,
-	the callback function may be invoked from softirq context,
-	and in any case with bottom halves disabled. In particular,
-	this callback function cannot block. If you need the callback
-	to block, run that code in a workqueue handler scheduled from
-	the callback. The queue_rcu_work() function does this for you
-	in the case of call_rcu().
+5.	If any of call_rcu(), call_srcu(), call_rcu_tasks(), or
+	call_rcu_tasks_trace() is used, the callback function may be
+	invoked from softirq context, and in any case with bottom halves
+	disabled. In particular, this callback function cannot block.
+	If you need the callback to block, run that code in a workqueue
+	handler scheduled from the callback. The queue_rcu_work()
+	function does this for you in the case of call_rcu().
 
 6.	Since synchronize_rcu() can block, it cannot be called
 	from any sort of irq context. The same rule applies
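
For reference, a minimal sketch of the queue_rcu_work() pattern that the
rewritten item 5 recommends. "struct foo", foo_reclaim_workfn(), and
foo_remove() are hypothetical names, not part of the patched documentation:

#include <linux/workqueue.h>
#include <linux/slab.h>

struct foo {
	struct rcu_work rwork;
	/* ... payload ... */
};

/* Runs in process context after a grace period, so it may block. */
static void foo_reclaim_workfn(struct work_struct *work)
{
	struct foo *fp = container_of(to_rcu_work(work), struct foo, rwork);

	kfree(fp);	/* Could equally sleep or take a mutex here. */
}

static void foo_remove(struct foo *fp)
{
	/* After unlinking fp so that no new readers can find it: */
	INIT_RCU_WORK(&fp->rwork, foo_reclaim_workfn);
	queue_rcu_work(system_wq, &fp->rwork);	/* Grace period, then workqueue. */
}
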
@@ -254,10 +253,10 @@ over a rather long period of time, but improvements are always welcome!
 	corresponding readers must use rcu_read_lock_trace()
 	and rcu_read_unlock_trace().
 
-c.	If an updater uses call_rcu_tasks_rude() or
-	synchronize_rcu_tasks_rude(), then the corresponding
-	readers must use anything that disables preemption,
-	for example, preempt_disable() and preempt_enable().
+c.	If an updater uses synchronize_rcu_tasks_rude(),
+	then the corresponding readers must use anything that
+	disables preemption, for example, preempt_disable()
+	and preempt_enable().
 
 	Mixing things up will result in confusion and broken kernels, and
 	has even resulted in an exploitable security issue. Therefore,
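
A sketch of the reader/updater pairing that the rewritten item c requires.
This is illustrative only: Tasks Rude RCU is in practice used by the tracing
code rather than for ordinary data structures, and "gp", "struct foo", and
do_something_with() are made-up names:

static struct foo __rcu *gp;

/* Reader: any preemption-disabled region acts as the read-side marker. */
static void reader(void)
{
	struct foo *p;

	preempt_disable();
	p = rcu_dereference_sched(gp);
	if (p)
		do_something_with(p);
	preempt_enable();
}

static void updater(struct foo *newp)
{
	struct foo *oldp;

	oldp = rcu_replace_pointer(gp, newp, true);
	synchronize_rcu_tasks_rude();	/* Wait out preempt-disabled readers. */
	kfree(oldp);
}
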
@@ -326,11 +325,9 @@ over a rather long period of time, but improvements are always welcome!
 	d.	Periodically invoke rcu_barrier(), permitting a limited
 		number of updates per grace period.
 
-	The same cautions apply to call_srcu(), call_rcu_tasks(),
-	call_rcu_tasks_rude(), and call_rcu_tasks_trace(). This is
-	why there is an srcu_barrier(), rcu_barrier_tasks(),
-	rcu_barrier_tasks_rude(), and rcu_barrier_tasks_rude(),
-	respectively.
+	The same cautions apply to call_srcu(), call_rcu_tasks(), and
+	call_rcu_tasks_trace(). This is why there is an srcu_barrier(),
+	rcu_barrier_tasks(), and rcu_barrier_tasks_trace(), respectively.
 
 	Note that although these primitives do take action to avoid
 	memory exhaustion when any given CPU has too many callbacks,
@@ -383,17 +380,17 @@ over a rather long period of time, but improvements are always welcome!
 	must use whatever locking or other synchronization is required
 	to safely access and/or modify that data structure.
 
-	Do not assume that RCU callbacks will be executed on
-	the same CPU that executed the corresponding call_rcu(),
-	call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(), or
-	call_rcu_tasks_trace(). For example, if a given CPU goes offline
-	while having an RCU callback pending, then that RCU callback
-	will execute on some surviving CPU. (If this was not the case,
-	a self-spawning RCU callback would prevent the victim CPU from
-	ever going offline.) Furthermore, CPUs designated by rcu_nocbs=
-	might well *always* have their RCU callbacks executed on some
-	other CPUs, in fact, for some real-time workloads, this is the
-	whole point of using the rcu_nocbs= kernel boot parameter.
+	Do not assume that RCU callbacks will be executed on the same
+	CPU that executed the corresponding call_rcu(), call_srcu(),
+	call_rcu_tasks(), or call_rcu_tasks_trace(). For example, if
+	a given CPU goes offline while having an RCU callback pending,
+	then that RCU callback will execute on some surviving CPU.
+	(If this was not the case, a self-spawning RCU callback would
+	prevent the victim CPU from ever going offline.) Furthermore,
+	CPUs designated by rcu_nocbs= might well *always* have their
+	RCU callbacks executed on some other CPUs, in fact, for some
+	real-time workloads, this is the whole point of using the
+	rcu_nocbs= kernel boot parameter.
 
 	In addition, do not assume that callbacks queued in a given order
 	will be invoked in that order, even if they all are queued on the
@@ -507,9 +504,9 @@ over a rather long period of time, but improvements are always welcome!
 	These debugging aids can help you find problems that are
 	otherwise extremely difficult to spot.
 
-17.	If you pass a callback function defined within a module to one of
-	call_rcu(), call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(),
-	or call_rcu_tasks_trace(), then it is necessary to wait for all
+17.	If you pass a callback function defined within a module
+	to one of call_rcu(), call_srcu(), call_rcu_tasks(), or
+	call_rcu_tasks_trace(), then it is necessary to wait for all
 	pending callbacks to be invoked before unloading that module.
 	Note that it is absolutely *not* sufficient to wait for a grace
 	period! For example, synchronize_rcu() implementation is *not*
@@ -522,7 +519,6 @@ over a rather long period of time, but improvements are always welcome!
 	- call_rcu() -> rcu_barrier()
 	- call_srcu() -> srcu_barrier()
 	- call_rcu_tasks() -> rcu_barrier_tasks()
-	- call_rcu_tasks_rude() -> rcu_barrier_tasks_rude()
 	- call_rcu_tasks_trace() -> rcu_barrier_tasks_trace()
 
 	However, these barrier functions are absolutely *not* guaranteed
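
A sketch of the unload rule behind this pairing list: a hypothetical module
whose callbacks were posted with call_rcu() drains them in its exit handler
with the matching barrier, here rcu_barrier(). A user of
call_rcu_tasks_trace() would call rcu_barrier_tasks_trace() instead.
foo_stop_posting_callbacks() is a made-up name:

static void __exit foo_exit(void)
{
	foo_stop_posting_callbacks();	/* Ensure no new call_rcu() invocations. */
	rcu_barrier();			/* Wait for all pending callbacks. */
	/* Only now is it safe for the module text to go away. */
}
module_exit(foo_exit);
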
@@ -539,7 +535,6 @@ over a rather long period of time, but improvements are always welcome!
 	- Either synchronize_srcu() or synchronize_srcu_expedited(),
 	  together with and srcu_barrier()
 	- synchronize_rcu_tasks() and rcu_barrier_tasks()
-	- synchronize_tasks_rude() and rcu_barrier_tasks_rude()
 	- synchronize_tasks_trace() and rcu_barrier_tasks_trace()
 
 	If necessary, you can use something like workqueues to execute

--- a/Documentation/RCU/whatisRCU.rst
+++ b/Documentation/RCU/whatisRCU.rst

@@ -1103,7 +1103,7 @@ RCU-Tasks-Rude::
 
 	Critical sections	Grace period		Barrier
 
-	N/A			call_rcu_tasks_rude	rcu_barrier_tasks_rude
+	N/A			N/A
 				synchronize_rcu_tasks_rude

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt

@@ -5572,14 +5572,6 @@
 			of zero will disable batching. Batching is
 			always disabled for synchronize_rcu_tasks().
 
-	rcupdate.rcu_tasks_rude_lazy_ms= [KNL]
-			Set timeout in milliseconds RCU Tasks
-			Rude asynchronous callback batching for
-			call_rcu_tasks_rude(). A negative value
-			will take the default. A value of zero will
-			disable batching. Batching is always disabled
-			for synchronize_rcu_tasks_rude().
-
 	rcupdate.rcu_tasks_trace_lazy_ms= [KNL]
 			Set timeout in milliseconds RCU Tasks
 			Trace asynchronous callback batching for