Mirror of https://github.com/torvalds/linux.git
534c97b095
Pull 'full dynticks' support from Ingo Molnar:
 "This tree from Frederic Weisbecker adds a new, (exciting! :-) core
  kernel feature to the timer and scheduler subsystems: 'full
  dynticks', or CONFIG_NO_HZ_FULL=y.

  This feature extends the nohz variable-size timer tick feature from
  idle to busy CPUs (running at most one task) as well, potentially
  reducing the number of timer interrupts significantly.

  This feature got motivated by real-time folks and the -rt tree, but
  the general utility and motivation of full-dynticks runs wider than
  that:

   - HPC workloads get faster: CPUs running a single task should be
     able to utilize a maximum amount of CPU power.  A periodic timer
     tick at HZ=1000 can cause a constant overhead of up to 1.0%.
     This feature removes that overhead - and speeds up the system by
     0.5%-1.0% on typical distro configs even on modern systems.

   - Real-time workload latency reduction: CPUs running critical
     tasks should experience as little jitter as possible.  The last
     remaining source of kernel-related jitter was the periodic timer
     tick.

   - A single task executing on a CPU is a pretty common situation,
     especially with an increasing number of cores/CPUs, so this
     feature helps desktop and mobile workloads as well.

  The cost of the feature is mainly related to increased timer
  reprogramming overhead when a CPU switches its tick period, and
  thus slightly longer to-idle and from-idle latency.

  Configuration-wise a third mode of operation is added to the
  existing two NOHZ kconfig modes:

   - CONFIG_HZ_PERIODIC: [formerly !CONFIG_NO_HZ], now explicitly
     named as a config option.  This is the traditional Linux
     periodic tick design: there's a HZ tick going on all the time,
     regardless of whether a CPU is idle or not.

   - CONFIG_NO_HZ_IDLE: [formerly CONFIG_NO_HZ=y], this turns off the
     periodic tick when a CPU enters idle mode.

   - CONFIG_NO_HZ_FULL: this new mode, in addition to turning off the
     tick when a CPU is idle, also slows the tick down to 1 Hz (one
     timer interrupt per second) when only a single task is running
     on a CPU.

  The .config behavior is compatible: existing !CONFIG_NO_HZ and
  CONFIG_NO_HZ=y settings get translated to the new values, without
  the user having to configure anything.  CONFIG_NO_HZ_FULL is turned
  off by default.

  This feature is based on a lot of infrastructure work that has been
  steadily going upstream in the last 2-3 cycles: related RCU support
  and non-periodic cputime support in particular is upstream already.

  This tree adds the final pieces and activates the feature.

  The pull request is marked RFC because:

   - it's marked 64-bit only at the moment - the 32-bit support patch
     is small but did not get ready in time.

   - it has a number of fresh commits that came in after the merge
     window.  The overwhelming majority of commits are from before
     the merge window, but still some aspects of the tree are fresh
     and so I marked it RFC.

   - it's a pretty wide-reaching feature with lots of effects - and
     while the components have been in testing for some time, the
     full combination is still not very widely used.  That it's
     default-off should reduce its regression abilities and obviously
     there are no known regressions with CONFIG_NO_HZ_FULL=y enabled
     either.

   - the feature is not completely idempotent: there is no 100%
     equivalent replacement for a periodic scheduler/timer tick.  In
     particular there's ongoing work to map out and reduce its
     effects on scheduler load-balancing and statistics.  This should
     not impact correctness though, there are no known regressions
     related to this feature at this point.
   - it's a pretty ambitious feature that with time will likely be
     enabled by most Linux distros, and we'd like you to give input
     on its design/implementation, if you dislike some aspect we
     missed.  Without flaming us to a crisp! :-)

  Future plans:

   - there's ongoing work to reduce 1Hz to 0Hz, to essentially shut
     off the periodic tick altogether when there's a single busy task
     on a CPU.  We'd first like 1 Hz to be exposed more widely before
     we go for the 0 Hz target though.

   - once we reach 0 Hz we can remove the periodic tick assumption
     from nr_running>=2 as well, by essentially interrupting busy
     tasks only as frequently as the sched_latency constraints
     require us to do - once every 4-40 msecs, depending on
     nr_running.

  I am personally leaning towards biting the bullet and doing this in
  v3.10, like the -rt tree this effort has been going on for too long
  - but the final word is up to you as usual.

  More technical details can be found in
  Documentation/timers/NO_HZ.txt"

* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  sched: Keep at least 1 tick per second for active dynticks tasks
  rcu: Fix full dynticks' dependency on wide RCU nocb mode
  nohz: Protect smp_processor_id() in tick_nohz_task_switch()
  nohz_full: Add documentation.
  cputime_nsecs: use math64.h for nsec resolution conversion helpers
  nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config
  nohz: Reduce overhead under high-freq idling patterns
  nohz: Remove full dynticks' superfluous dependency on RCU tree
  nohz: Fix unavailable tick_stop tracepoint in dynticks idle
  nohz: Add basic tracing
  nohz: Select wide RCU nocb for full dynticks
  nohz: Disable the tick when irq resume in full dynticks CPU
  nohz: Re-evaluate the tick for the new task after a context switch
  nohz: Prepare to stop the tick on irq exit
  nohz: Implement full dynticks kick
  nohz: Re-evaluate the tick from the scheduler IPI
  sched: New helper to prevent from stopping the tick in full dynticks
  sched: Kick full dynticks CPU that have more than one task enqueued.
  perf: New helper to prevent full dynticks CPUs from stopping tick
  perf: Kick full dynticks CPU if events rotation is needed
  ...
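As a quick sanity check of the overhead figures quoted above, the
userspace sketch below (not from the kernel tree; the ~10 us per-tick
handling cost is an assumed, illustrative figure) reproduces the "up
to 1.0% at HZ=1000" estimate and shows why the residual 1 Hz tick
that CONFIG_NO_HZ_FULL keeps for a single running task is negligible:

/*
 * Back-of-the-envelope check of the tick overhead quoted in the pull
 * request.  Assumption (not from the source): each tick costs ~10 us
 * of CPU time; at HZ=1000 that is 10 us every 1 ms, i.e. ~1.0%.
 */
#include <stdio.h>

int main(void)
{
	const double nsec_per_sec = 1e9;
	const double tick_cost_ns = 10000.0;	/* assumed: 10 us per tick */
	const int rates[] = { 1000, 250, 100, 1 };	/* 1 = NO_HZ_FULL residual tick */
	unsigned int i;

	for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
		double period_ns = nsec_per_sec / rates[i];

		printf("HZ=%4d: tick period %10.0f ns, overhead %.3f%%\n",
		       rates[i], period_ns, 100.0 * tick_cost_ns / period_ns);
	}
	return 0;
}

Under that assumed per-tick cost this prints ~1.000% for HZ=1000 and
~0.001% for the 1 Hz residual tick, consistent with the 0.5%-1.0%
speedup range claimed for typical distro configs.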
425 lines
9.7 KiB
C
/*
 * linux/kernel/time/tick-common.c
 *
 * This file contains the base functions to manage periodic tick
 * related events.
 *
 * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>
 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar
 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner
 *
 * This code is licenced under the GPL version 2. For details see
 * kernel-base/COPYING.
 */
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/hrtimer.h>
#include <linux/interrupt.h>
#include <linux/percpu.h>
#include <linux/profile.h>
#include <linux/sched.h>

#include <asm/irq_regs.h>

#include "tick-internal.h"

/*
 * Tick devices
 */
DEFINE_PER_CPU(struct tick_device, tick_cpu_device);
/*
 * Tick next event: keeps track of the tick time
 */
ktime_t tick_next_period;
ktime_t tick_period;
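/*
 * tick_do_timer_cpu is the CPU that carries the do_timer()/jiffies
 * update duty.  TICK_DO_TIMER_BOOT means the duty has not been
 * claimed yet; TICK_DO_TIMER_NONE means no CPU currently carries it
 * (see tick_setup_device() and tick_handover_do_timer() below).
 */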
int tick_do_timer_cpu __read_mostly = TICK_DO_TIMER_BOOT;
static DEFINE_RAW_SPINLOCK(tick_device_lock);

/*
 * Debugging: see timer_list.c
 */
struct tick_device *tick_get_device(int cpu)
{
	return &per_cpu(tick_cpu_device, cpu);
}

/**
 * tick_is_oneshot_available - check for a oneshot capable event device
 */
int tick_is_oneshot_available(void)
{
	struct clock_event_device *dev = __this_cpu_read(tick_cpu_device.evtdev);

	if (!dev || !(dev->features & CLOCK_EVT_FEAT_ONESHOT))
		return 0;
	if (!(dev->features & CLOCK_EVT_FEAT_C3STOP))
		return 1;
	return tick_broadcast_oneshot_available();
}

/*
 * Periodic tick
 */
static void tick_periodic(int cpu)
{
	if (tick_do_timer_cpu == cpu) {
		write_seqlock(&jiffies_lock);

		/* Keep track of the next tick event */
		tick_next_period = ktime_add(tick_next_period, tick_period);

		do_timer(1);
		write_sequnlock(&jiffies_lock);
	}

	update_process_times(user_mode(get_irq_regs()));
	profile_tick(CPU_PROFILING);
}

/*
 * Event handler for periodic ticks
 */
void tick_handle_periodic(struct clock_event_device *dev)
{
	int cpu = smp_processor_id();
	ktime_t next;

	tick_periodic(cpu);

	if (dev->mode != CLOCK_EVT_MODE_ONESHOT)
		return;
	/*
	 * Setup the next period for devices, which do not have
	 * periodic mode:
	 */
	next = ktime_add(dev->next_event, tick_period);
	for (;;) {
		if (!clockevents_program_event(dev, next, false))
			return;
		/*
		 * Have to be careful here. If we're in oneshot mode,
		 * before we call tick_periodic() in a loop, we need
		 * to be sure we're using a real hardware clocksource.
		 * Otherwise we could get trapped in an infinite
		 * loop, as the tick_periodic() increments jiffies,
		 * which then will increment time, possibly causing
		 * the loop to trigger again and again.
		 */
		if (timekeeping_valid_for_hres())
			tick_periodic(cpu);
		next = ktime_add(next, tick_period);
	}
}

/*
 * Setup the device for a periodic tick
 */
void tick_setup_periodic(struct clock_event_device *dev, int broadcast)
{
	tick_set_periodic_handler(dev, broadcast);

	/* Broadcast setup ? */
	if (!tick_device_is_functional(dev))
		return;

	if ((dev->features & CLOCK_EVT_FEAT_PERIODIC) &&
	    !tick_broadcast_oneshot_active()) {
		clockevents_set_mode(dev, CLOCK_EVT_MODE_PERIODIC);
	} else {
		unsigned long seq;
		ktime_t next;

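		/*
		 * Snapshot tick_next_period under the jiffies seqlock;
		 * retry if a writer updated it concurrently.
		 */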
		do {
			seq = read_seqbegin(&jiffies_lock);
			next = tick_next_period;
		} while (read_seqretry(&jiffies_lock, seq));

		clockevents_set_mode(dev, CLOCK_EVT_MODE_ONESHOT);

		for (;;) {
			if (!clockevents_program_event(dev, next, false))
				return;
			next = ktime_add(next, tick_period);
		}
	}
}

/*
 * Setup the tick device
 */
static void tick_setup_device(struct tick_device *td,
			      struct clock_event_device *newdev, int cpu,
			      const struct cpumask *cpumask)
{
	ktime_t next_event;
	void (*handler)(struct clock_event_device *) = NULL;

	/*
	 * First device setup ?
	 */
	if (!td->evtdev) {
		/*
		 * If no cpu took the do_timer update, assign it to
		 * this cpu:
		 */
		if (tick_do_timer_cpu == TICK_DO_TIMER_BOOT) {
			if (!tick_nohz_full_cpu(cpu))
				tick_do_timer_cpu = cpu;
			else
				tick_do_timer_cpu = TICK_DO_TIMER_NONE;
			tick_next_period = ktime_get();
			tick_period = ktime_set(0, NSEC_PER_SEC / HZ);
		}

		/*
		 * Startup in periodic mode first.
		 */
		td->mode = TICKDEV_MODE_PERIODIC;
	} else {
		handler = td->evtdev->event_handler;
		next_event = td->evtdev->next_event;
		td->evtdev->event_handler = clockevents_handle_noop;
	}

	td->evtdev = newdev;

	/*
	 * When the device is not per cpu, pin the interrupt to the
	 * current cpu:
	 */
	if (!cpumask_equal(newdev->cpumask, cpumask))
		irq_set_affinity(newdev->irq, cpumask);

	/*
	 * When global broadcasting is active, check if the current
	 * device is registered as a placeholder for broadcast mode.
	 * This allows us to handle this x86 misfeature in a generic
	 * way.
	 */
	if (tick_device_uses_broadcast(newdev, cpu))
		return;

	if (td->mode == TICKDEV_MODE_PERIODIC)
		tick_setup_periodic(newdev, 0);
	else
		tick_setup_oneshot(newdev, handler, next_event);
}

/*
 * Check whether the new registered device should be used.
 */
static int tick_check_new_device(struct clock_event_device *newdev)
{
	struct clock_event_device *curdev;
	struct tick_device *td;
	int cpu, ret = NOTIFY_OK;
	unsigned long flags;

	raw_spin_lock_irqsave(&tick_device_lock, flags);

	cpu = smp_processor_id();
	if (!cpumask_test_cpu(cpu, newdev->cpumask))
		goto out_bc;

	td = &per_cpu(tick_cpu_device, cpu);
	curdev = td->evtdev;

	/* cpu local device ? */
	if (!cpumask_equal(newdev->cpumask, cpumask_of(cpu))) {

		/*
		 * If the cpu affinity of the device interrupt can not
		 * be set, ignore it.
		 */
		if (!irq_can_set_affinity(newdev->irq))
			goto out_bc;

		/*
		 * If we have a cpu local device already, do not replace it
		 * by a non cpu local device
		 */
		if (curdev && cpumask_equal(curdev->cpumask, cpumask_of(cpu)))
			goto out_bc;
	}

	/*
	 * If we have an active device, then check the rating and the oneshot
	 * feature.
	 */
	if (curdev) {
		/*
		 * Prefer one shot capable devices !
		 */
		if ((curdev->features & CLOCK_EVT_FEAT_ONESHOT) &&
		    !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
			goto out_bc;
		/*
		 * Check the rating
		 */
		if (curdev->rating >= newdev->rating)
			goto out_bc;
	}

	/*
	 * Replace the existing device, if any, by the new device. If
	 * the current device is the broadcast device, do not give it
	 * back to the clockevents layer !
	 */
	if (tick_is_broadcast_device(curdev)) {
		clockevents_shutdown(curdev);
		curdev = NULL;
	}
	clockevents_exchange_device(curdev, newdev);
	tick_setup_device(td, newdev, cpu, cpumask_of(cpu));
	if (newdev->features & CLOCK_EVT_FEAT_ONESHOT)
		tick_oneshot_notify();

	raw_spin_unlock_irqrestore(&tick_device_lock, flags);
	return NOTIFY_STOP;

out_bc:
	/*
	 * Can the new device be used as a broadcast device ?
	 */
	if (tick_check_broadcast_device(newdev))
		ret = NOTIFY_STOP;

	raw_spin_unlock_irqrestore(&tick_device_lock, flags);

	return ret;
}

/*
 * Transfer the do_timer job away from a dying cpu.
 *
 * Called with interrupts disabled.
 */
static void tick_handover_do_timer(int *cpup)
{
	if (*cpup == tick_do_timer_cpu) {
		int cpu = cpumask_first(cpu_online_mask);

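		/* Pick the first remaining online CPU, if any */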
		tick_do_timer_cpu = (cpu < nr_cpu_ids) ? cpu :
			TICK_DO_TIMER_NONE;
	}
}

/*
 * Shutdown an event device on a given cpu:
 *
 * This is called on a live CPU, when a CPU is dead. So we cannot
 * access the hardware device itself.
 * We just set the mode and remove it from the lists.
 */
static void tick_shutdown(unsigned int *cpup)
{
	struct tick_device *td = &per_cpu(tick_cpu_device, *cpup);
	struct clock_event_device *dev = td->evtdev;
	unsigned long flags;

	raw_spin_lock_irqsave(&tick_device_lock, flags);
	td->mode = TICKDEV_MODE_PERIODIC;
	if (dev) {
		/*
		 * Prevent the clock events layer from trying to call
		 * the set mode function!
		 */
		dev->mode = CLOCK_EVT_MODE_UNUSED;
		clockevents_exchange_device(dev, NULL);
		dev->event_handler = clockevents_handle_noop;
		td->evtdev = NULL;
	}
	raw_spin_unlock_irqrestore(&tick_device_lock, flags);
}

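/*
 * Shut down the tick device of the calling CPU across a suspend
 * transition.
 */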
static void tick_suspend(void)
{
	struct tick_device *td = &__get_cpu_var(tick_cpu_device);
	unsigned long flags;

	raw_spin_lock_irqsave(&tick_device_lock, flags);
	clockevents_shutdown(td->evtdev);
	raw_spin_unlock_irqrestore(&tick_device_lock, flags);
}

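/*
 * Resume the tick device of the calling CPU and restore its periodic
 * or oneshot operation, unless the broadcast device takes over the
 * ticks after resume.
 */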
static void tick_resume(void)
{
	struct tick_device *td = &__get_cpu_var(tick_cpu_device);
	unsigned long flags;
	int broadcast = tick_resume_broadcast();

	raw_spin_lock_irqsave(&tick_device_lock, flags);
	clockevents_set_mode(td->evtdev, CLOCK_EVT_MODE_RESUME);

	if (!broadcast) {
		if (td->mode == TICKDEV_MODE_PERIODIC)
			tick_setup_periodic(td->evtdev, 0);
		else
			tick_resume_oneshot();
	}
	raw_spin_unlock_irqrestore(&tick_device_lock, flags);
}

/*
 * Notification about clock event devices
 */
static int tick_notify(struct notifier_block *nb, unsigned long reason,
		       void *dev)
{
	switch (reason) {

	case CLOCK_EVT_NOTIFY_ADD:
		return tick_check_new_device(dev);

	case CLOCK_EVT_NOTIFY_BROADCAST_ON:
	case CLOCK_EVT_NOTIFY_BROADCAST_OFF:
	case CLOCK_EVT_NOTIFY_BROADCAST_FORCE:
		tick_broadcast_on_off(reason, dev);
		break;

	case CLOCK_EVT_NOTIFY_BROADCAST_ENTER:
	case CLOCK_EVT_NOTIFY_BROADCAST_EXIT:
		tick_broadcast_oneshot_control(reason);
		break;

	case CLOCK_EVT_NOTIFY_CPU_DYING:
		tick_handover_do_timer(dev);
		break;

	case CLOCK_EVT_NOTIFY_CPU_DEAD:
		tick_shutdown_broadcast_oneshot(dev);
		tick_shutdown_broadcast(dev);
		tick_shutdown(dev);
		break;

	case CLOCK_EVT_NOTIFY_SUSPEND:
		tick_suspend();
		tick_suspend_broadcast();
		break;

	case CLOCK_EVT_NOTIFY_RESUME:
		tick_resume();
		break;

	default:
		break;
	}

	return NOTIFY_OK;
}

static struct notifier_block tick_notifier = {
	.notifier_call = tick_notify,
};

/**
 * tick_init - initialize the tick control
 *
 * Register the notifier with the clockevents framework
 */
void __init tick_init(void)
{
	clockevents_register_notifier(&tick_notifier);
	tick_broadcast_init();
}