rcu: Disable run-time single-CPU grace-period optimization
The run-time single-CPU grace-period optimization applies only to kernels
built with CONFIG_SMP=y && CONFIG_PREEMPTION=y that are running on a
single-CPU system. But a kernel intended for a single-CPU system should
instead be built with CONFIG_SMP=n, and in any case, single-CPU systems
running Linux no longer appear to be the common case. Plus this
optimization results in the rcu_gp_oldstate structure being half again
larger than it needs to be.

This commit therefore disables the run-time single-CPU grace-period
optimization, so that this optimization applies only during the
pre-scheduler portion of the boot sequence.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
commit 258f887aba
parent 8df13f0160
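To make the "half again larger" remark concrete: the kernel's struct rcu_gp_oldstate holds two unsigned long grace-period cookies (rgos_norm and rgos_exp), and keeping the run-time optimization would have required a third. The short standalone C sketch below only illustrates the size arithmetic; it is not kernel code, and the rgos_polled field name is a hypothetical stand-in for that extra cookie.

/*
 * Illustration only, not kernel code: shows why adding one more
 * unsigned long cookie makes struct rcu_gp_oldstate half again larger.
 * The rgos_polled field name is a hypothetical stand-in.
 */
#include <stdio.h>

struct rcu_gp_oldstate_two {            /* current layout: two cookies */
        unsigned long rgos_norm;
        unsigned long rgos_exp;
};

struct rcu_gp_oldstate_three {          /* with the run-time optimization */
        unsigned long rgos_norm;
        unsigned long rgos_exp;
        unsigned long rgos_polled;      /* hypothetical third cookie */
};

int main(void)
{
        /* On 64-bit this prints 16 and 24 bytes, a 50% increase. */
        printf("two cookies:   %zu bytes\n", sizeof(struct rcu_gp_oldstate_two));
        printf("three cookies: %zu bytes\n", sizeof(struct rcu_gp_oldstate_three));
        return 0;
}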
@@ -3423,42 +3423,20 @@ void __init kfree_rcu_scheduler_running(void)
 
 /*
  * During early boot, any blocking grace-period wait automatically
- * implies a grace period. Later on, this is never the case for PREEMPTION.
+ * implies a grace period.
  *
- * However, because a context switch is a grace period for !PREEMPTION, any
- * blocking grace-period wait automatically implies a grace period if
- * there is only one CPU online at any point time during execution of
- * either synchronize_rcu() or synchronize_rcu_expedited(). It is OK to
- * occasionally incorrectly indicate that there are multiple CPUs online
- * when there was in fact only one the whole time, as this just adds some
- * overhead: RCU still operates correctly.
+ * Later on, this could in theory be the case for kernels built with
+ * CONFIG_SMP=y && CONFIG_PREEMPTION=y running on a single CPU, but this
+ * is not a common case. Furthermore, this optimization would cause
+ * the rcu_gp_oldstate structure to expand by 50%, so this potential
+ * grace-period optimization is ignored once the scheduler is running.
  */
 static int rcu_blocking_is_gp(void)
 {
-        int ret;
-
-        // Invoking preempt_model_*() too early gets a splat.
-        if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE ||
-            preempt_model_full() || preempt_model_rt())
-                return rcu_scheduler_active == RCU_SCHEDULER_INACTIVE;
+        if (rcu_scheduler_active != RCU_SCHEDULER_INACTIVE)
+                return false;
         might_sleep();  /* Check for RCU read-side critical section. */
-        preempt_disable();
-        /*
-         * If the rcu_state.n_online_cpus counter is equal to one,
-         * there is only one CPU, and that CPU sees all prior accesses
-         * made by any CPU that was online at the time of its access.
-         * Furthermore, if this counter is equal to one, its value cannot
-         * change until after the preempt_enable() below.
-         *
-         * Furthermore, if rcu_state.n_online_cpus is equal to one here,
-         * all later CPUs (both this one and any that come online later
-         * on) are guaranteed to see all accesses prior to this point
-         * in the code, without the need for additional memory barriers.
-         * Those memory barriers are provided by CPU-hotplug code.
-         */
-        ret = READ_ONCE(rcu_state.n_online_cpus) <= 1;
-        preempt_enable();
-        return ret;
+        return true;
 }
 
 /**
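For readers outside RCU, the payoff of rcu_blocking_is_gp() is that blocking grace-period primitives such as synchronize_rcu() can return immediately whenever it reports that a wait already implies a grace period. The sketch below is a standalone userspace model of that fast-path pattern under the new, scheduler-gated logic, not kernel code: synchronize_rcu_model() and wait_for_grace_period() are invented stand-ins for the real machinery in kernel/rcu/tree.c.

/*
 * Standalone userspace model, not kernel code: names mirror the kernel's,
 * but every body here is a stand-in used only to illustrate the fast path.
 */
#include <stdbool.h>
#include <stdio.h>

enum rcu_sched_state { RCU_SCHEDULER_INACTIVE, RCU_SCHEDULER_RUNNING };

static enum rcu_sched_state rcu_scheduler_active = RCU_SCHEDULER_INACTIVE;

/* Models the new logic: only the pre-scheduler part of boot implies a GP. */
static bool rcu_blocking_is_gp(void)
{
        return rcu_scheduler_active == RCU_SCHEDULER_INACTIVE;
}

/* Stand-in for the real grace-period wait machinery. */
static void wait_for_grace_period(void)
{
        printf("  ...block until a full grace period elapses...\n");
}

/* Sketch of the caller pattern: skip the wait when it would be a no-op. */
static void synchronize_rcu_model(void)
{
        if (rcu_blocking_is_gp()) {
                printf("  early boot: grace period is implicit, return at once\n");
                return;
        }
        wait_for_grace_period();
}

int main(void)
{
        printf("before the scheduler starts:\n");
        synchronize_rcu_model();

        rcu_scheduler_active = RCU_SCHEDULER_RUNNING;
        printf("after the scheduler starts:\n");
        synchronize_rcu_model();
        return 0;
}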