mirror of
https://github.com/torvalds/linux.git
synced 2024-11-24 21:21:41 +00:00
Updates for timers and timekeeping:
Core:

  - Overhaul of posix-timers in preparation of removing the workaround
    for periodic timers which have signal delivery ignored.

  - Remove the historical extra jiffy in msleep()

    msleep() adds an extra jiffy to the timeout value to ensure minimal
    sleep time. The timer wheel ensures minimal sleep time since the
    large rewrite to a non-cascading wheel, but the extra jiffy in
    msleep() remained unnoticed. Remove it.

  - Make the timer slack handling correct for realtime tasks.

    The procfs interface is inconsistent and neither reflects reality
    nor conforms to the man page. Show the correct 0 slack for real
    time tasks and enforce it at the core level instead of having
    inconsistent individual checks in various timer setup functions.

  - The usual set of updates and enhancements all over the place.

Drivers:

  - Allow the ACPI PM timer to be turned off during suspend

  - No new drivers

  - The usual updates and enhancements in various drivers

Merge tag 'timers-core-2024-09-16' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer updates from Thomas Gleixner (43 commits):

  ntp: Make sure RTC is synchronized when time goes backwards
  treewide: Fix wrong singular form of jiffies in comments
  cpu: Use already existing usleep_range()
  timers: Rename next_expiry_recalc() to be unique
  platform/x86:intel/pmc: Fix comment for the pmc_core_acpi_pm_timer_suspend_resume function
  clocksource/drivers/jcore: Use request_percpu_irq()
  clocksource/drivers/cadence-ttc: Add missing clk_disable_unprepare in ttc_setup_clockevent
  clocksource/drivers/asm9260: Add missing clk_disable_unprepare in asm9260_timer_init
  clocksource/drivers/qcom: Add missing iounmap() on errors in msm_dt_timer_init()
  clocksource/drivers/ingenic: Use devm_clk_get_enabled() helpers
  platform/x86:intel/pmc: Enable the ACPI PM Timer to be turned off when suspended
  clocksource: acpi_pm: Add external callback for suspend/resume
  clocksource/drivers/arm_arch_timer: Using for_each_available_child_of_node_scoped()
  dt-bindings: timer: rockchip: Add rk3576 compatible
  timers: Annotate possible non critical data race of next_expiry
  timers: Remove historical extra jiffie for timeout in msleep()
  hrtimer: Use and report correct timerslack values for realtime tasks
  hrtimer: Annotate hrtimer_cpu_base_.*_expiry() for sparse.
  timers: Add sparse annotation for timer_sync_wait_running().
  signal: Replace BUG_ON()s
  ...
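The msleep() change summarized above is easiest to see as arithmetic. Below is a minimal userspace sketch of the timeout computation, assuming HZ=250 and modeling the rounding of msecs_to_jiffies() by hand; the names are illustrative, not the kernel's actual implementation:

```c
#include <assert.h>

#define HZ 250  /* illustrative tick rate: 1 jiffy = 4 ms */

/* Round a millisecond value up to whole jiffies, as msecs_to_jiffies() does. */
static unsigned long msecs_to_jiffies_model(unsigned int ms)
{
	return ((unsigned long)ms * HZ + 999) / 1000;  /* ceil(ms * HZ / 1000) */
}

/* Old msleep(): one extra jiffy added to guarantee the minimum sleep. */
static unsigned long msleep_timeout_old(unsigned int ms)
{
	return msecs_to_jiffies_model(ms) + 1;
}

/* New msleep(): the non-cascading timer wheel already guarantees the
 * minimal sleep time, so the extra jiffy is gone. */
static unsigned long msleep_timeout_new(unsigned int ms)
{
	return msecs_to_jiffies_model(ms);
}
```

With HZ=250, msleep(20) used to arm a 6-jiffy timeout where 5 jiffies already cover the requested 20 ms.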
This commit is contained in: commit 9ea925c806
@@ -328,7 +328,7 @@ and an HDMI input, one input for each input type. Those are described in more
 detail below.
 
 Special attention has been given to the rate at which new frames become
-available. The jitter will be around 1 jiffie (that depends on the HZ
+available. The jitter will be around 1 jiffy (that depends on the HZ
 configuration of your kernel, so usually 1/100, 1/250 or 1/1000 of a second),
 but the long-term behavior is exactly following the framerate. So a
 framerate of 59.94 Hz is really different from 60 Hz. If the framerate
@@ -24,6 +24,7 @@ properties:
           - rockchip,rk3228-timer
           - rockchip,rk3229-timer
           - rockchip,rk3368-timer
+          - rockchip,rk3576-timer
           - rockchip,rk3588-timer
           - rockchip,px30-timer
       - const: rockchip,rk3288-timer
@@ -19,7 +19,7 @@ it really need to delay in atomic context?" If so...
 
 ATOMIC CONTEXT:
	You must use the `*delay` family of functions. These
-	functions use the jiffie estimation of clock speed
+	functions use the jiffy estimation of clock speed
	and will busy wait for enough loop cycles to achieve
	the desired delay:
 
@@ -109,7 +109,7 @@ para que se ejecute, y la tarea en ejecución es interrumpida.
 ==================================
 
 CFS usa una granularidad de nanosegundos y no depende de ningún
-jiffie o detalles como HZ. De este modo, el gestor de tareas CFS no tiene
+jiffy o detalles como HZ. De este modo, el gestor de tareas CFS no tiene
 noción de "ventanas de tiempo" de la forma en que tenía el gestor de
 tareas previo, y tampoco tiene heurísticos. Únicamente hay un parámetro
 central ajustable (se ha de cambiar en CONFIG_SCHED_DEBUG):
@@ -73,7 +73,7 @@
 
 /*
  * Even though the SPC takes max 3-5 ms to complete any OPP/COMMS
- * operation, the operation could start just before jiffie is about
+ * operation, the operation could start just before jiffy is about
  * to be incremented. So setting timeout value of 20ms = 2jiffies@100Hz
  */
 #define TIMEOUT_US 20000
@@ -106,7 +106,7 @@ void __init q40_init_IRQ(void)
  * this stuff doesn't really belong here..
  */
 
-int ql_ticks;		/* 200Hz ticks since last jiffie */
+int ql_ticks;		/* 200Hz ticks since last jiffy */
 static int sound_ticks;
 
 #define SVOL 45
@@ -314,7 +314,7 @@ static ssize_t mce_chrdev_write(struct file *filp, const char __user *ubuf,
 
	/*
	 * Need to give user space some time to set everything up,
-	 * so do it a jiffie or two later everywhere.
+	 * so do it a jiffy or two later everywhere.
	 */
	schedule_timeout(2);
 
@@ -980,7 +980,7 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
		ipmi_ssif_unlock_cond(ssif_info, flags);
		start_get(ssif_info);
	} else {
-		/* Wait a jiffie then request the next message */
+		/* Wait a jiffy then request the next message */
		ssif_info->waiting_alert = true;
		ssif_info->retries_left = SSIF_RECV_RETRIES;
		if (!ssif_info->stopping)
@@ -25,6 +25,10 @@
 #include <asm/io.h>
 #include <asm/time.h>
 
+static void *suspend_resume_cb_data;
+
+static void (*suspend_resume_callback)(void *data, bool suspend);
+
 /*
  * The I/O port the PMTMR resides at.
  * The location is detected during setup_arch(),
@@ -58,6 +62,32 @@ u32 acpi_pm_read_verified(void)
	return v2;
 }
 
+void acpi_pmtmr_register_suspend_resume_callback(void (*cb)(void *data, bool suspend), void *data)
+{
+	suspend_resume_callback = cb;
+	suspend_resume_cb_data = data;
+}
+EXPORT_SYMBOL_GPL(acpi_pmtmr_register_suspend_resume_callback);
+
+void acpi_pmtmr_unregister_suspend_resume_callback(void)
+{
+	suspend_resume_callback = NULL;
+	suspend_resume_cb_data = NULL;
+}
+EXPORT_SYMBOL_GPL(acpi_pmtmr_unregister_suspend_resume_callback);
+
+static void acpi_pm_suspend(struct clocksource *cs)
+{
+	if (suspend_resume_callback)
+		suspend_resume_callback(suspend_resume_cb_data, true);
+}
+
+static void acpi_pm_resume(struct clocksource *cs)
+{
+	if (suspend_resume_callback)
+		suspend_resume_callback(suspend_resume_cb_data, false);
+}
+
 static u64 acpi_pm_read(struct clocksource *cs)
 {
	return (u64)read_pmtmr();
@@ -69,6 +99,8 @@ static struct clocksource clocksource_acpi_pm = {
	.read = acpi_pm_read,
	.mask = (u64)ACPI_PM_MASK,
	.flags = CLOCK_SOURCE_IS_CONTINUOUS,
+	.suspend = acpi_pm_suspend,
+	.resume = acpi_pm_resume,
 };
 
 
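The acpi_pmtmr_*_suspend_resume_callback() pair added in the two hunks above is a single-slot callback registry: one function pointer plus an opaque data cookie, invoked from the clocksource's suspend/resume hooks. A minimal userspace model of the same pattern (a sketch; the clocksource plumbing and EXPORT_SYMBOL_GPL are omitted, and the names mirror the kernel's only for readability):

```c
#include <stdbool.h>
#include <stddef.h>

/* Single-slot registry: one callback and its opaque data pointer. */
static void *suspend_resume_cb_data;
static void (*suspend_resume_callback)(void *data, bool suspend);

static void register_suspend_resume_callback(void (*cb)(void *data, bool suspend),
					     void *data)
{
	suspend_resume_callback = cb;
	suspend_resume_cb_data = data;
}

static void unregister_suspend_resume_callback(void)
{
	suspend_resume_callback = NULL;
	suspend_resume_cb_data = NULL;
}

/* Stands in for the clocksource suspend/resume hooks in the real driver. */
static void notify(bool suspend)
{
	if (suspend_resume_callback)
		suspend_resume_callback(suspend_resume_cb_data, suspend);
}

/* Example consumer: count suspend events in caller-provided state. */
static void count_suspends(void *data, bool suspend)
{
	if (suspend)
		(*(int *)data)++;
}
```

After unregistering, notify() becomes a no-op, which is exactly how the pmc driver detaches on module removal.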
@@ -1594,7 +1594,6 @@ static int __init arch_timer_mem_of_init(struct device_node *np)
 {
	struct arch_timer_mem *timer_mem;
	struct arch_timer_mem_frame *frame;
-	struct device_node *frame_node;
	struct resource res;
	int ret = -EINVAL;
	u32 rate;
@@ -1608,33 +1607,29 @@ static int __init arch_timer_mem_of_init(struct device_node *np)
	timer_mem->cntctlbase = res.start;
	timer_mem->size = resource_size(&res);
 
-	for_each_available_child_of_node(np, frame_node) {
+	for_each_available_child_of_node_scoped(np, frame_node) {
		u32 n;
-		struct arch_timer_mem_frame *frame;
 
		if (of_property_read_u32(frame_node, "frame-number", &n)) {
			pr_err(FW_BUG "Missing frame-number.\n");
-			of_node_put(frame_node);
			goto out;
		}
		if (n >= ARCH_TIMER_MEM_MAX_FRAMES) {
			pr_err(FW_BUG "Wrong frame-number, only 0-%u are permitted.\n",
			       ARCH_TIMER_MEM_MAX_FRAMES - 1);
-			of_node_put(frame_node);
			goto out;
		}
		frame = &timer_mem->frame[n];
 
		if (frame->valid) {
			pr_err(FW_BUG "Duplicated frame-number.\n");
-			of_node_put(frame_node);
			goto out;
		}
 
-		if (of_address_to_resource(frame_node, 0, &res)) {
-			of_node_put(frame_node);
+		if (of_address_to_resource(frame_node, 0, &res))
			goto out;
-		}
 
		frame->cntbase = res.start;
		frame->size = resource_size(&res);
 
@@ -210,6 +210,7 @@ static int __init asm9260_timer_init(struct device_node *np)
			  DRIVER_NAME, &event_dev);
	if (ret) {
		pr_err("Failed to setup irq!\n");
+		clk_disable_unprepare(clk);
		return ret;
	}
 
@@ -93,14 +93,10 @@ static int __init ingenic_ost_probe(struct platform_device *pdev)
		return PTR_ERR(map);
	}
 
-	ost->clk = devm_clk_get(dev, "ost");
+	ost->clk = devm_clk_get_enabled(dev, "ost");
	if (IS_ERR(ost->clk))
		return PTR_ERR(ost->clk);
 
-	err = clk_prepare_enable(ost->clk);
-	if (err)
-		return err;
-
	/* Clear counter high/low registers */
	if (soc_info->is64bit)
		regmap_write(map, TCU_REG_OST_CNTL, 0);
@@ -129,7 +125,6 @@ static int __init ingenic_ost_probe(struct platform_device *pdev)
	err = clocksource_register_hz(cs, rate);
	if (err) {
		dev_err(dev, "clocksource registration failed");
-		clk_disable_unprepare(ost->clk);
		return err;
	}
 
@@ -120,7 +120,7 @@ static int jcore_pit_local_init(unsigned cpu)
 
 static irqreturn_t jcore_timer_interrupt(int irq, void *dev_id)
 {
-	struct jcore_pit *pit = this_cpu_ptr(dev_id);
+	struct jcore_pit *pit = dev_id;
 
	if (clockevent_state_oneshot(&pit->ced))
		jcore_pit_disable(pit);
@@ -168,9 +168,8 @@ static int __init jcore_pit_init(struct device_node *node)
		return -ENOMEM;
	}
 
-	err = request_irq(pit_irq, jcore_timer_interrupt,
-			  IRQF_TIMER | IRQF_PERCPU,
-			  "jcore_pit", jcore_pit_percpu);
+	err = request_percpu_irq(pit_irq, jcore_timer_interrupt,
+				 "jcore_pit", jcore_pit_percpu);
	if (err) {
		pr_err("pit irq request failed: %d\n", err);
		free_percpu(jcore_pit_percpu);
@@ -435,7 +435,7 @@ static int __init ttc_setup_clockevent(struct clk *clk,
					 &ttcce->ttc.clk_rate_change_nb);
	if (err) {
		pr_warn("Unable to register clock notifier.\n");
-		goto out_kfree;
+		goto out_clk_unprepare;
	}
 
	ttcce->ttc.freq = clk_get_rate(ttcce->ttc.clk);
@@ -465,13 +465,15 @@ static int __init ttc_setup_clockevent(struct clk *clk,
	err = request_irq(irq, ttc_clock_event_interrupt,
			  IRQF_TIMER, ttcce->ce.name, ttcce);
	if (err)
-		goto out_kfree;
+		goto out_clk_unprepare;
 
	clockevents_config_and_register(&ttcce->ce,
					ttcce->ttc.freq / PRESCALE, 1, 0xfffe);
 
	return 0;
 
+out_clk_unprepare:
+	clk_disable_unprepare(ttcce->ttc.clk);
 out_kfree:
	kfree(ttcce);
	return err;
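Both cadence-ttc fixes above concern the goto-based cleanup idiom: error paths must unwind resources in reverse order of acquisition, and jumping straight to out_kfree leaked the prepared clock. A userspace sketch of the idiom with stand-in resources (illustrative names, not the driver's code):

```c
#include <stdbool.h>

/* Illustrative resource state; stands in for allocated memory and a
 * prepared clock in the real driver. */
static bool mem_held, clk_held;

static int setup(int fail_at)
{
	int err = -1;

	mem_held = true;                /* kzalloc(...) */
	if (fail_at == 1)
		goto out_kfree;         /* nothing but memory to undo yet */

	clk_held = true;                /* clk_prepare_enable(...) */
	if (fail_at == 2)
		goto out_clk_unprepare; /* undo the clock, then the memory */

	return 0;

	/* Cleanup labels sit in reverse order of acquisition; each falls
	 * through into the next, so later failures undo more. */
out_clk_unprepare:
	clk_held = false;               /* clk_disable_unprepare(...) */
out_kfree:
	mem_held = false;               /* kfree(...) */
	return err;
}
```

The bug fixed above was the equivalent of jumping to out_kfree from a point where the clock was already held.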
@@ -233,6 +233,7 @@ static int __init msm_dt_timer_init(struct device_node *np)
	}
 
	if (of_property_read_u32(np, "clock-frequency", &freq)) {
+		iounmap(cpu0_base);
		pr_err("Unknown frequency\n");
		return -EINVAL;
	}
@@ -243,7 +244,11 @@ static int __init msm_dt_timer_init(struct device_node *np)
		freq /= 4;
	writel_relaxed(DGT_CLK_CTL_DIV_4, source_base + DGT_CLK_CTL);
 
-	return msm_timer_init(freq, 32, irq, !!percpu_offset);
+	ret = msm_timer_init(freq, 32, irq, !!percpu_offset);
+	if (ret)
+		iounmap(cpu0_base);
+
+	return ret;
 }
 TIMER_OF_DECLARE(kpss_timer, "qcom,kpss-timer", msm_dt_timer_init);
 TIMER_OF_DECLARE(scss_timer, "qcom,scss-timer", msm_dt_timer_init);
@@ -402,7 +402,7 @@ static int test_wait_timeout(void *arg)
 
	if (dma_fence_wait_timeout(wt.f, false, 2) == -ETIME) {
		if (timer_pending(&wt.timer)) {
-			pr_notice("Timer did not fire within the jiffie!\n");
+			pr_notice("Timer did not fire within the jiffy!\n");
			err = 0; /* not our fault! */
		} else {
			pr_err("Wait reported incomplete after timeout\n");
@@ -266,7 +266,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
		if (ret == -ETIME && !nsecs_to_jiffies(args->timeout_ns))
			args->timeout_ns = 0;
 
-		/* Asked to wait beyond the jiffie/scheduler precision? */
+		/* Asked to wait beyond the jiffy/scheduler precision? */
		if (ret == -ETIME && args->timeout_ns)
			ret = -EAGAIN;
	}
@@ -93,7 +93,7 @@ static int wait_for_reset(struct intel_engine_cs *engine,
		return -EINVAL;
	}
 
-	/* Give the request a jiffie to complete after flushing the worker */
+	/* Give the request a jiffy to complete after flushing the worker */
	if (i915_request_wait(rq, 0,
			      max(0l, (long)(timeout - jiffies)) + 1) < 0) {
		pr_err("%s: hanging request %llx:%lld did not complete\n",
@@ -3426,7 +3426,7 @@ static int live_preempt_timeout(void *arg)
			cpu_relax();
 
		saved_timeout = engine->props.preempt_timeout_ms;
-		engine->props.preempt_timeout_ms = 1; /* in ms, -> 1 jiffie */
+		engine->props.preempt_timeout_ms = 1; /* in ms, -> 1 jiffy */
 
		i915_request_get(rq);
		i915_request_add(rq);
@@ -110,7 +110,7 @@ void set_timer_ms(struct timer_list *t, unsigned long timeout)
	 * Paranoia to make sure the compiler computes the timeout before
	 * loading 'jiffies' as jiffies is volatile and may be updated in
	 * the background by a timer tick. All to reduce the complexity
-	 * of the addition and reduce the risk of losing a jiffie.
+	 * of the addition and reduce the risk of losing a jiffy.
	 */
	barrier();
 
@@ -279,7 +279,7 @@ v3d_wait_bo_ioctl(struct drm_device *dev, void *data,
	else
		args->timeout_ns = 0;
 
-	/* Asked to wait beyond the jiffie/scheduler precision? */
+	/* Asked to wait beyond the jiffy/scheduler precision? */
	if (ret == -ETIME && args->timeout_ns)
		ret = -EAGAIN;
 
@@ -82,7 +82,7 @@
  * - has multiple clocks.
  * - has no usable clock due to jitter or packet loss (VoIP).
  *   In this case the system's clock is used. The clock resolution depends on
- *   the jiffie resolution.
+ *   the jiffy resolution.
  *
  * If a member joins a conference:
  *
@@ -104,7 +104,7 @@ static int orion_mdio_wait_ready(const struct orion_mdio_ops *ops,
		return 0;
	} else {
		/* wait_event_timeout does not guarantee a delay of at
-		 * least one whole jiffie, so timeout must be no less
+		 * least one whole jiffy, so timeout must be no less
		 * than two.
		 */
		timeout = max(usecs_to_jiffies(MVMDIO_SMI_TIMEOUT), 2);
@@ -295,6 +295,8 @@ const struct pmc_reg_map adl_reg_map = {
	.ppfear_buckets = CNP_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = ADL_NUM_IP_IGN_ALLOWED,
	.lpm_num_modes = ADL_LPM_NUM_MODES,
	.lpm_num_maps = ADL_LPM_NUM_MAPS,
@@ -200,6 +200,8 @@ const struct pmc_reg_map cnp_reg_map = {
	.ppfear_buckets = CNP_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = CNP_NUM_IP_IGN_ALLOWED,
	.etr3_offset = ETR3_OFFSET,
 };
@@ -11,6 +11,7 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/acpi_pmtmr.h>
 #include <linux/bitfield.h>
 #include <linux/debugfs.h>
 #include <linux/delay.h>
@@ -1208,6 +1209,38 @@ static bool pmc_core_is_pson_residency_enabled(struct pmc_dev *pmcdev)
	return val == 1;
 }
 
+/*
+ * Enable or disable ACPI PM Timer
+ *
+ * This function is intended to be a callback for ACPI PM suspend/resume event.
+ * The ACPI PM Timer is enabled on resume only if it was enabled during suspend.
+ */
+static void pmc_core_acpi_pm_timer_suspend_resume(void *data, bool suspend)
+{
+	struct pmc_dev *pmcdev = data;
+	struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
+	const struct pmc_reg_map *map = pmc->map;
+	bool enabled;
+	u32 reg;
+
+	if (!map->acpi_pm_tmr_ctl_offset)
+		return;
+
+	guard(mutex)(&pmcdev->lock);
+
+	if (!suspend && !pmcdev->enable_acpi_pm_timer_on_resume)
+		return;
+
+	reg = pmc_core_reg_read(pmc, map->acpi_pm_tmr_ctl_offset);
+	enabled = !(reg & map->acpi_pm_tmr_disable_bit);
+	if (suspend)
+		reg |= map->acpi_pm_tmr_disable_bit;
+	else
+		reg &= ~map->acpi_pm_tmr_disable_bit;
+	pmc_core_reg_write(pmc, map->acpi_pm_tmr_ctl_offset, reg);
+
+	pmcdev->enable_acpi_pm_timer_on_resume = suspend && enabled;
+}
+
 static void pmc_core_dbgfs_unregister(struct pmc_dev *pmcdev)
 {
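The callback above encodes one invariant: on resume, the ACPI PM timer is re-enabled only if it was enabled when suspend disabled it. A small userspace model of just that bit logic (register access and locking replaced by plain variables; the bit position is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define ACPI_PM_TMR_DISABLE_BIT (1u << 1)  /* illustrative bit position */

/* Model of the PMC control register and the driver's remembered state. */
static uint32_t pmc_reg;
static bool enable_on_resume;

static void acpi_pm_timer_suspend_resume(bool suspend)
{
	bool enabled = !(pmc_reg & ACPI_PM_TMR_DISABLE_BIT);

	/* On resume, only touch the register if we disabled it ourselves. */
	if (!suspend && !enable_on_resume)
		return;

	if (suspend)
		pmc_reg |= ACPI_PM_TMR_DISABLE_BIT;
	else
		pmc_reg &= ~ACPI_PM_TMR_DISABLE_BIT;

	/* Re-enable on resume only if the timer was live at suspend time. */
	enable_on_resume = suspend && enabled;
}
```

A timer that was already disabled before suspend therefore stays disabled across the whole suspend/resume cycle.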
@@ -1404,6 +1437,7 @@ static int pmc_core_probe(struct platform_device *pdev)
	struct pmc_dev *pmcdev;
	const struct x86_cpu_id *cpu_id;
	int (*core_init)(struct pmc_dev *pmcdev);
+	const struct pmc_reg_map *map;
	struct pmc *primary_pmc;
	int ret;
 
@@ -1462,6 +1496,11 @@ static int pmc_core_probe(struct platform_device *pdev)
	pm_report_max_hw_sleep(FIELD_MAX(SLP_S0_RES_COUNTER_MASK) *
			       pmc_core_adjust_slp_s0_step(primary_pmc, 1));
 
+	map = primary_pmc->map;
+	if (map->acpi_pm_tmr_ctl_offset)
+		acpi_pmtmr_register_suspend_resume_callback(pmc_core_acpi_pm_timer_suspend_resume,
+							    pmcdev);
+
	device_initialized = true;
	dev_info(&pdev->dev, " initialized\n");
 
@@ -1471,6 +1510,12 @@ static int pmc_core_probe(struct platform_device *pdev)
 static void pmc_core_remove(struct platform_device *pdev)
 {
	struct pmc_dev *pmcdev = platform_get_drvdata(pdev);
+	const struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
+	const struct pmc_reg_map *map = pmc->map;
+
+	if (map->acpi_pm_tmr_ctl_offset)
+		acpi_pmtmr_unregister_suspend_resume_callback();
+
	pmc_core_dbgfs_unregister(pmcdev);
	pmc_core_clean_structure(pdev);
 }
@@ -68,6 +68,8 @@ struct telem_endpoint;
 #define SPT_PMC_LTR_SCC 0x3A0
 #define SPT_PMC_LTR_ISH 0x3A4
 
+#define SPT_PMC_ACPI_PM_TMR_CTL_OFFSET 0x18FC
+
 /* Sunrise Point: PGD PFET Enable Ack Status Registers */
 enum ppfear_regs {
	SPT_PMC_XRAM_PPFEAR0A = 0x590,
@@ -148,6 +150,8 @@ enum ppfear_regs {
 #define SPT_PMC_VRIC1_SLPS0LVEN BIT(13)
 #define SPT_PMC_VRIC1_XTALSDQDIS BIT(22)
 
+#define SPT_PMC_BIT_ACPI_PM_TMR_DISABLE BIT(1)
+
 /* Cannonlake Power Management Controller register offsets */
 #define CNP_PMC_SLPS0_DBG_OFFSET 0x10B4
 #define CNP_PMC_PM_CFG_OFFSET 0x1818
@@ -351,6 +355,8 @@ struct pmc_reg_map {
	const u8 *lpm_reg_index;
	const u32 pson_residency_offset;
	const u32 pson_residency_counter_step;
+	const u32 acpi_pm_tmr_ctl_offset;
+	const u32 acpi_pm_tmr_disable_bit;
 };
 
 /**
@@ -424,6 +430,8 @@ struct pmc_dev {
	u32 die_c6_offset;
	struct telem_endpoint *punit_ep;
	struct pmc_info *regmap_list;
+
+	bool enable_acpi_pm_timer_on_resume;
 };
 
 enum pmc_index {
@@ -46,6 +46,8 @@ const struct pmc_reg_map icl_reg_map = {
	.ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = ICL_NUM_IP_IGN_ALLOWED,
	.etr3_offset = ETR3_OFFSET,
 };
@@ -462,6 +462,8 @@ const struct pmc_reg_map mtl_socm_reg_map = {
	.ppfear_buckets = MTL_SOCM_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.lpm_num_maps = ADL_LPM_NUM_MAPS,
	.ltr_ignore_max = MTL_SOCM_NUM_IP_IGN_ALLOWED,
	.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
@@ -130,6 +130,8 @@ const struct pmc_reg_map spt_reg_map = {
	.ppfear_buckets = SPT_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = SPT_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = SPT_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = SPT_NUM_IP_IGN_ALLOWED,
	.pm_vric1_offset = SPT_PMC_VRIC1_OFFSET,
 };
@@ -197,6 +197,8 @@ const struct pmc_reg_map tgl_reg_map = {
	.ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = TGL_NUM_IP_IGN_ALLOWED,
	.lpm_num_maps = TGL_LPM_NUM_MAPS,
	.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
@@ -2513,13 +2513,13 @@ static void *timers_start(struct seq_file *m, loff_t *pos)
	if (!tp->sighand)
		return ERR_PTR(-ESRCH);
 
-	return seq_list_start(&tp->task->signal->posix_timers, *pos);
+	return seq_hlist_start(&tp->task->signal->posix_timers, *pos);
 }
 
 static void *timers_next(struct seq_file *m, void *v, loff_t *pos)
 {
	struct timers_private *tp = m->private;
-	return seq_list_next(v, &tp->task->signal->posix_timers, pos);
+	return seq_hlist_next(v, &tp->task->signal->posix_timers, pos);
 }
 
 static void timers_stop(struct seq_file *m, void *v)
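The list_head to hlist conversions running through these posix-timer hunks shrink each list head from two pointers to one. A minimal userspace model of the hlist shape (the kernel's hlist_node additionally carries a pprev back-pointer for O(1) removal, omitted here for brevity):

```c
#include <stddef.h>

/* One-pointer list head, as in the kernel's struct hlist_head;
 * a list_head by contrast needs both next and prev in the head. */
struct hnode { struct hnode *next; };
struct hhead { struct hnode *first; };

/* Insert at the head, the common pattern for hash buckets and
 * per-process timer lists. */
static void model_hlist_add_head(struct hnode *n, struct hhead *h)
{
	n->next = h->first;
	h->first = n;
}

static int model_hlist_count(const struct hhead *h)
{
	int n = 0;

	for (const struct hnode *p = h->first; p; p = p->next)
		n++;
	return n;
}
```

The one-pointer head is why signal_struct::posix_timers gets smaller after the conversion, at the cost of losing O(1) access to the tail.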
@@ -2548,7 +2548,7 @@ static int show_timer(struct seq_file *m, void *v)
		[SIGEV_THREAD] = "thread",
	};
 
-	timer = list_entry((struct list_head *)v, struct k_itimer, list);
+	timer = hlist_entry((struct hlist_node *)v, struct k_itimer, list);
	notify = timer->it_sigev_notify;
 
	seq_printf(m, "ID: %d\n", timer->it_id);
@@ -2626,10 +2626,11 @@ static ssize_t timerslack_ns_write(struct file *file, const char __user *buf,
	}
 
	task_lock(p);
-	if (slack_ns == 0)
-		p->timer_slack_ns = p->default_timer_slack_ns;
-	else
-		p->timer_slack_ns = slack_ns;
+	if (task_is_realtime(p))
+		slack_ns = 0;
+	else if (slack_ns == 0)
+		slack_ns = p->default_timer_slack_ns;
+	p->timer_slack_ns = slack_ns;
	task_unlock(p);
 
 out:
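The rewritten procfs write path computes the effective slack before storing it: realtime tasks are pinned to 0, and a written 0 restores the default. A tiny model of just that decision (hypothetical helper name, not a kernel function):

```c
#include <stdbool.h>

/* Effective slack stored by the new timerslack_ns_write() logic:
 * realtime tasks always get 0; writing 0 restores the default. */
static unsigned long long effective_slack(bool task_is_realtime,
					  unsigned long long written,
					  unsigned long long default_slack)
{
	if (task_is_realtime)
		return 0;
	if (written == 0)
		return default_slack;
	return written;
}
```

This is the "enforce it at the core level" part of the merge summary: the 0-for-realtime rule now lives in one place instead of in each timer setup function.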
fs/select.c
@@ -77,19 +77,16 @@ u64 select_estimate_accuracy(struct timespec64 *tv)
 {
	u64 ret;
	struct timespec64 now;
+	u64 slack = current->timer_slack_ns;
 
	/*
	 * Realtime tasks get a slack of 0 for obvious reasons.
	 */
-
-	if (rt_task(current))
+	if (slack == 0)
		return 0;
 
	ktime_get_ts64(&now);
	now = timespec64_sub(*tv, now);
	ret = __estimate_accuracy(&now);
-	if (ret < current->timer_slack_ns)
-		return current->timer_slack_ns;
+	if (ret < slack)
+		return slack;
	return ret;
 }
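With slack enforced at the core level, select_estimate_accuracy() can test the slack value directly instead of calling rt_task(): a slack of 0 (now guaranteed for realtime tasks) short-circuits, and otherwise the timeout estimate is clamped up to the slack. The same control flow in plain C (an illustrative wrapper, not the kernel function itself):

```c
/* Model of the reworked select_estimate_accuracy(): slack == 0 means a
 * realtime task (enforced at the core level); otherwise the estimate
 * is never reported finer than the task's slack. */
static unsigned long long estimate_accuracy(unsigned long long estimate,
					    unsigned long long slack)
{
	if (slack == 0)
		return 0;           /* realtime: no fuzz at all */
	if (estimate < slack)
		return slack;       /* never finer than the slack */
	return estimate;
}
```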
@@ -159,7 +159,7 @@ static ssize_t signalfd_dequeue(struct signalfd_ctx *ctx, kernel_siginfo_t *info
	DECLARE_WAITQUEUE(wait, current);
 
	spin_lock_irq(&current->sighand->siglock);
-	ret = dequeue_signal(current, &ctx->sigmask, info, &type);
+	ret = dequeue_signal(&ctx->sigmask, info, &type);
	switch (ret) {
	case 0:
		if (!nonblock)
@@ -174,7 +174,7 @@ static ssize_t signalfd_dequeue(struct signalfd_ctx *ctx, kernel_siginfo_t *info
	add_wait_queue(&current->sighand->signalfd_wqh, &wait);
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
-		ret = dequeue_signal(current, &ctx->sigmask, info, &type);
+		ret = dequeue_signal(&ctx->sigmask, info, &type);
		if (ret != 0)
			break;
		if (signal_pending(current)) {
@@ -210,7 +210,7 @@ struct xfs_buf {
  * success the write is considered to be failed permanently and the
  * iodone handler will take appropriate action.
  *
- * For retry timeouts, we record the jiffie of the first failure. This
+ * For retry timeouts, we record the jiffy of the first failure. This
  * means that we can change the retry timeout for buffers already under
  * I/O and thus avoid getting stuck in a retry loop with a long timeout.
  *
@@ -26,6 +26,19 @@ static inline u32 acpi_pm_read_early(void)
	return acpi_pm_read_verified() & ACPI_PM_MASK;
 }
 
+/**
+ * Register callback for suspend and resume event
+ *
+ * @cb Callback triggered on suspend and resume
+ * @data Data passed with the callback
+ */
+void acpi_pmtmr_register_suspend_resume_callback(void (*cb)(void *data, bool suspend), void *data);
+
+/**
+ * Remove registered callback for suspend and resume event
+ */
+void acpi_pmtmr_unregister_suspend_resume_callback(void);
+
 #else
 
 static inline u32 acpi_pm_read_early(void)
@@ -418,7 +418,7 @@ extern unsigned long preset_lpj;
 #define NSEC_CONVERSION ((unsigned long)((((u64)1 << NSEC_JIFFIE_SC) +\
                                         TICK_NSEC -1) / (u64)TICK_NSEC))
 /*
- * The maximum jiffie value is (MAX_INT >> 1). Here we translate that
+ * The maximum jiffy value is (MAX_INT >> 1). Here we translate that
  * into seconds. The 64-bit case will overflow if we are not careful,
  * so use the messy SH_DIV macro to do it. Still all constants.
  */
@@ -158,7 +158,7 @@ static inline void posix_cputimers_init_work(void) { }
  * @rcu: RCU head for freeing the timer.
  */
 struct k_itimer {
-	struct list_head list;
+	struct hlist_node list;
	struct hlist_node t_hash;
	spinlock_t it_lock;
	const struct k_clock *kclock;
@@ -137,7 +137,7 @@ struct signal_struct {
 
	/* POSIX.1b Interval Timers */
	unsigned int next_posix_timer_id;
-	struct list_head posix_timers;
+	struct hlist_head posix_timers;
 
	/* ITIMER_REAL timer for the process */
	struct hrtimer real_timer;
@@ -276,8 +276,7 @@ static inline void signal_set_stop_flags(struct signal_struct *sig,
 extern void flush_signals(struct task_struct *);
 extern void ignore_signals(struct task_struct *);
 extern void flush_signal_handlers(struct task_struct *, int force_default);
-extern int dequeue_signal(struct task_struct *task, sigset_t *mask,
-			  kernel_siginfo_t *info, enum pid_type *type);
+extern int dequeue_signal(sigset_t *mask, kernel_siginfo_t *info, enum pid_type *type);
 
 static inline int kernel_dequeue_signal(void)
 {
@@ -287,7 +286,7 @@ static inline int kernel_dequeue_signal(void)
	int ret;
 
	spin_lock_irq(&task->sighand->siglock);
-	ret = dequeue_signal(task, &task->blocked, &__info, &__type);
+	ret = dequeue_signal(&task->blocked, &__info, &__type);
	spin_unlock_irq(&task->sighand->siglock);
 
	return ret;
@@ -73,7 +73,7 @@ struct tk_read_base {
  * @overflow_seen: Overflow warning flag (DEBUG_TIMEKEEPING)
  *
  * Note: For timespec(64) based interfaces wall_to_monotonic is what
- * we need to add to xtime (or xtime corrected for sub jiffie times)
+ * we need to add to xtime (or xtime corrected for sub jiffy times)
  * to get to monotonic time. Monotonic is pegged at zero at system
  * boot time, so wall_to_monotonic will be negative, however, we will
  * ALWAYS keep the tv_nsec part positive so we can use the usual
@@ -29,7 +29,7 @@ static struct signal_struct init_signals = {
	.cred_guard_mutex = __MUTEX_INITIALIZER(init_signals.cred_guard_mutex),
	.exec_update_lock = __RWSEM_INITIALIZER(init_signals.exec_update_lock),
 #ifdef CONFIG_POSIX_TIMERS
-	.posix_timers = LIST_HEAD_INIT(init_signals.posix_timers),
+	.posix_timers = HLIST_HEAD_INIT,
	.cputimer = {
		.cputime_atomic = INIT_CPUTIME_ATOMIC,
	},
@@ -330,7 +330,7 @@ static bool cpuhp_wait_for_sync_state(unsigned int cpu, enum cpuhp_sync_state st
			/* Poll for one millisecond */
			arch_cpuhp_sync_state_poll();
		} else {
-			usleep_range_state(USEC_PER_MSEC, 2 * USEC_PER_MSEC, TASK_UNINTERRUPTIBLE);
+			usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
		}
		sync = atomic_read(st);
	}
@@ -1861,7 +1861,7 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
 	prev_cputime_init(&sig->prev_cputime);
 
 #ifdef CONFIG_POSIX_TIMERS
-	INIT_LIST_HEAD(&sig->posix_timers);
+	INIT_HLIST_HEAD(&sig->posix_timers);
 	hrtimer_init(&sig->real_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 	sig->real_timer.function = it_real_fn;
 #endif
@@ -406,6 +406,14 @@ static void __setscheduler_params(struct task_struct *p,
 	else if (fair_policy(policy))
 		p->static_prio = NICE_TO_PRIO(attr->sched_nice);
 
+	/* rt-policy tasks do not have a timerslack */
+	if (task_is_realtime(p)) {
+		p->timer_slack_ns = 0;
+	} else if (p->timer_slack_ns == 0) {
+		/* when switching back to non-rt policy, restore timerslack */
+		p->timer_slack_ns = p->default_timer_slack_ns;
+	}
+
 	/*
 	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
 	 * !rt_policy. Always setting this ensures that things like
@@ -618,20 +618,18 @@ static int __dequeue_signal(struct sigpending *pending, sigset_t *mask,
 }
 
 /*
- * Dequeue a signal and return the element to the caller, which is
- * expected to free it.
- *
- * All callers have to hold the siglock.
+ * Try to dequeue a signal. If a deliverable signal is found fill in the
+ * caller provided siginfo and return the signal number. Otherwise return
+ * 0.
  */
-int dequeue_signal(struct task_struct *tsk, sigset_t *mask,
-		   kernel_siginfo_t *info, enum pid_type *type)
+int dequeue_signal(sigset_t *mask, kernel_siginfo_t *info, enum pid_type *type)
 {
+	struct task_struct *tsk = current;
 	bool resched_timer = false;
 	int signr;
 
 	/* We only dequeue private signals from ourselves, we don't let
 	 * signalfd steal them
 	 */
+	lockdep_assert_held(&tsk->sighand->siglock);
+
 	*type = PIDTYPE_PID;
 	signr = __dequeue_signal(&tsk->pending, mask, info, &resched_timer);
 	if (!signr) {
@@ -1940,10 +1938,11 @@ struct sigqueue *sigqueue_alloc(void)
 
 void sigqueue_free(struct sigqueue *q)
 {
-	unsigned long flags;
 	spinlock_t *lock = &current->sighand->siglock;
+	unsigned long flags;
 
-	BUG_ON(!(q->flags & SIGQUEUE_PREALLOC));
+	if (WARN_ON_ONCE(!(q->flags & SIGQUEUE_PREALLOC)))
+		return;
 	/*
 	 * We must hold ->siglock while testing q->list
 	 * to serialize with collect_signal() or with
@@ -1971,7 +1970,10 @@ int send_sigqueue(struct sigqueue *q, struct pid *pid, enum pid_type type)
 	unsigned long flags;
 	int ret, result;
 
-	BUG_ON(!(q->flags & SIGQUEUE_PREALLOC));
+	if (WARN_ON_ONCE(!(q->flags & SIGQUEUE_PREALLOC)))
+		return 0;
+	if (WARN_ON_ONCE(q->info.si_code != SI_TIMER))
+		return 0;
 
 	ret = -1;
 	rcu_read_lock();
@@ -2006,7 +2008,6 @@ int send_sigqueue(struct sigqueue *q, struct pid *pid, enum pid_type type)
 		 * If an SI_TIMER entry is already queue just increment
 		 * the overrun count.
 		 */
-		BUG_ON(q->info.si_code != SI_TIMER);
 		q->info.si_overrun++;
 		result = TRACE_SIGNAL_ALREADY_PENDING;
 		goto out;
@@ -2793,8 +2794,7 @@ relock:
 		type = PIDTYPE_PID;
 		signr = dequeue_synchronous_signal(&ksig->info);
 		if (!signr)
-			signr = dequeue_signal(current, &current->blocked,
-					       &ksig->info, &type);
+			signr = dequeue_signal(&current->blocked, &ksig->info, &type);
 
 		if (!signr)
 			break; /* will return 0 */
@@ -3648,7 +3648,7 @@ static int do_sigtimedwait(const sigset_t *which, kernel_siginfo_t *info,
 	signotset(&mask);
 
 	spin_lock_irq(&tsk->sighand->siglock);
-	sig = dequeue_signal(tsk, &mask, info, &type);
+	sig = dequeue_signal(&mask, info, &type);
 	if (!sig && timeout) {
 		/*
 		 * None ready, temporarily unblock those we're interested
@@ -3667,7 +3667,7 @@ static int do_sigtimedwait(const sigset_t *which, kernel_siginfo_t *info,
 		spin_lock_irq(&tsk->sighand->siglock);
 		__set_task_blocked(tsk, &tsk->real_blocked);
 		sigemptyset(&tsk->real_blocked);
-		sig = dequeue_signal(tsk, &mask, info, &type);
+		sig = dequeue_signal(&mask, info, &type);
 	}
 	spin_unlock_irq(&tsk->sighand->siglock);
 
@@ -2557,6 +2557,8 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 		error = current->timer_slack_ns;
 		break;
 	case PR_SET_TIMERSLACK:
+		if (task_is_realtime(current))
+			break;
 		if (arg2 <= 0)
 			current->timer_slack_ns =
 					current->default_timer_slack_ns;
@@ -493,7 +493,7 @@ static u64 __alarm_forward_now(struct alarm *alarm, ktime_t interval, bool throt
 		 * promised in the context of posix_timer_fn() never
 		 * materialized, but someone should really work on it.
 		 *
-		 * To prevent DOS fake @now to be 1 jiffie out which keeps
+		 * To prevent DOS fake @now to be 1 jiffy out which keeps
 		 * the overrun accounting correct but creates an
 		 * inconsistency vs. timer_gettime(2).
 		 */
@@ -574,15 +574,10 @@ static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm,
 						it.alarm.alarmtimer);
 	enum alarmtimer_restart result = ALARMTIMER_NORESTART;
 	unsigned long flags;
-	int si_private = 0;
 
 	spin_lock_irqsave(&ptr->it_lock, flags);
 
-	ptr->it_active = 0;
-	if (ptr->it_interval)
-		si_private = ++ptr->it_requeue_pending;
-
-	if (posix_timer_event(ptr, si_private) && ptr->it_interval) {
+	if (posix_timer_queue_signal(ptr) && ptr->it_interval) {
 		/*
 		 * Handle ignored signals and rearm the timer. This will go
 		 * away once we handle ignored signals proper. Ensure that
@@ -190,7 +190,7 @@ int clockevents_tick_resume(struct clock_event_device *dev)
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST
 
-/* Limit min_delta to a jiffie */
+/* Limit min_delta to a jiffy */
 #define MIN_DELTA_LIMIT		(NSEC_PER_SEC / HZ)
 
 /**
@@ -1177,7 +1177,7 @@ static inline ktime_t hrtimer_update_lowres(struct hrtimer *timer, ktime_t tim,
 	/*
 	 * CONFIG_TIME_LOW_RES indicates that the system has no way to return
 	 * granular time values. For relative timers we add hrtimer_resolution
-	 * (i.e. one jiffie) to prevent short timeouts.
+	 * (i.e. one jiffy) to prevent short timeouts.
 	 */
 	timer->is_rel = mode & HRTIMER_MODE_REL;
 	if (timer->is_rel)
@@ -1351,11 +1351,13 @@ static void hrtimer_cpu_base_init_expiry_lock(struct hrtimer_cpu_base *base)
 }
 
 static void hrtimer_cpu_base_lock_expiry(struct hrtimer_cpu_base *base)
+	__acquires(&base->softirq_expiry_lock)
 {
 	spin_lock(&base->softirq_expiry_lock);
 }
 
 static void hrtimer_cpu_base_unlock_expiry(struct hrtimer_cpu_base *base)
+	__releases(&base->softirq_expiry_lock)
 {
 	spin_unlock(&base->softirq_expiry_lock);
 }
@@ -2072,14 +2074,9 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
 	struct restart_block *restart;
 	struct hrtimer_sleeper t;
 	int ret = 0;
-	u64 slack;
-
-	slack = current->timer_slack_ns;
-	if (rt_task(current))
-		slack = 0;
 
 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
-	hrtimer_set_expires_range_ns(&t.timer, rqtp, slack);
+	hrtimer_set_expires_range_ns(&t.timer, rqtp, current->timer_slack_ns);
 	ret = do_nanosleep(&t, mode);
 	if (ret != -ERESTART_RESTARTBLOCK)
 		goto out;
@@ -2249,7 +2246,7 @@ void __init hrtimers_init(void)
 /**
  * schedule_hrtimeout_range_clock - sleep until timeout
  * @expires:	timeout value (ktime_t)
- * @delta:	slack in expires timeout (ktime_t) for SCHED_OTHER tasks
+ * @delta:	slack in expires timeout (ktime_t)
  * @mode:	timer mode
  * @clock_id:	timer clock to be used
  */
@@ -2276,13 +2273,6 @@ schedule_hrtimeout_range_clock(ktime_t *expires, u64 delta,
 		return -EINTR;
 	}
 
-	/*
-	 * Override any slack passed by the user if under
-	 * rt contraints.
-	 */
-	if (rt_task(current))
-		delta = 0;
-
 	hrtimer_init_sleeper_on_stack(&t, clock_id, mode);
 	hrtimer_set_expires_range_ns(&t.timer, *expires, delta);
 	hrtimer_sleeper_start_expires(&t, mode);
@@ -2302,7 +2292,7 @@ EXPORT_SYMBOL_GPL(schedule_hrtimeout_range_clock);
 /**
  * schedule_hrtimeout_range - sleep until timeout
  * @expires:	timeout value (ktime_t)
- * @delta:	slack in expires timeout (ktime_t) for SCHED_OTHER tasks
+ * @delta:	slack in expires timeout (ktime_t)
  * @mode:	timer mode
  *
  * Make the current task sleep until the given expiry time has
@@ -660,8 +660,16 @@ rearm:
 	sched_sync_hw_clock(offset_nsec, res != 0);
 }
 
-void ntp_notify_cmos_timer(void)
+void ntp_notify_cmos_timer(bool offset_set)
 {
+	/*
+	 * If the time jumped (using ADJ_SETOFFSET) cancels sync timer,
+	 * which may have been running if the time was synchronized
+	 * prior to the ADJ_SETOFFSET call.
+	 */
+	if (offset_set)
+		hrtimer_cancel(&sync_hrtimer);
+
 	/*
 	 * When the work is currently executed but has not yet the timer
 	 * rearmed this queues the work immediately again. No big issue,
@@ -14,9 +14,9 @@ extern int __do_adjtimex(struct __kernel_timex *txc,
 extern void __hardpps(const struct timespec64 *phase_ts, const struct timespec64 *raw_ts);
 
 #if defined(CONFIG_GENERIC_CMOS_UPDATE) || defined(CONFIG_RTC_SYSTOHC)
-extern void ntp_notify_cmos_timer(void);
+extern void ntp_notify_cmos_timer(bool offset_set);
 #else
-static inline void ntp_notify_cmos_timer(void) { }
+static inline void ntp_notify_cmos_timer(bool offset_set) { }
 #endif
 
 #endif /* _LINUX_NTP_INTERNAL_H */
@@ -453,6 +453,7 @@ static void disarm_timer(struct k_itimer *timer, struct task_struct *p)
 	struct cpu_timer *ctmr = &timer->it.cpu;
 	struct posix_cputimer_base *base;
 
+	timer->it_active = 0;
 	if (!cpu_timer_dequeue(ctmr))
 		return;
 
@@ -559,6 +560,7 @@ static void arm_timer(struct k_itimer *timer, struct task_struct *p)
 	struct cpu_timer *ctmr = &timer->it.cpu;
 	u64 newexp = cpu_timer_getexpires(ctmr);
 
+	timer->it_active = 1;
 	if (!cpu_timer_enqueue(&base->tqhead, ctmr))
 		return;
 
@@ -584,12 +586,8 @@ static void cpu_timer_fire(struct k_itimer *timer)
 {
 	struct cpu_timer *ctmr = &timer->it.cpu;
 
-	if ((timer->it_sigev_notify & ~SIGEV_THREAD_ID) == SIGEV_NONE) {
-		/*
-		 * User don't want any signal.
-		 */
-		cpu_timer_setexpires(ctmr, 0);
-	} else if (unlikely(timer->sigq == NULL)) {
+	timer->it_active = 0;
+	if (unlikely(timer->sigq == NULL)) {
 		/*
 		 * This a special case for clock_nanosleep,
 		 * not a normal timer from sys_timer_create.
@@ -600,9 +598,9 @@ static void cpu_timer_fire(struct k_itimer *timer)
 		/*
 		 * One-shot timer. Clear it as soon as it's fired.
 		 */
-		posix_timer_event(timer, 0);
+		posix_timer_queue_signal(timer);
 		cpu_timer_setexpires(ctmr, 0);
-	} else if (posix_timer_event(timer, ++timer->it_requeue_pending)) {
+	} else if (posix_timer_queue_signal(timer)) {
 		/*
 		 * The signal did not get queued because the signal
 		 * was ignored, so we won't get any callback to
|
||||
}
|
||||
}
|
||||
|
||||
static void __posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec64 *itp, u64 now);
|
||||
|
||||
/*
|
||||
* Guts of sys_timer_settime for CPU timers.
|
||||
* This is called with the timer locked and interrupts disabled.
|
||||
@@ -623,9 +623,10 @@ static void cpu_timer_fire(struct k_itimer *timer)
 static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
 			       struct itimerspec64 *new, struct itimerspec64 *old)
 {
+	bool sigev_none = timer->it_sigev_notify == SIGEV_NONE;
 	clockid_t clkid = CPUCLOCK_WHICH(timer->it_clock);
-	u64 old_expires, new_expires, old_incr, val;
 	struct cpu_timer *ctmr = &timer->it.cpu;
+	u64 old_expires, new_expires, now;
 	struct sighand_struct *sighand;
 	struct task_struct *p;
 	unsigned long flags;
@@ -662,10 +663,7 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
 		return -ESRCH;
 	}
 
-	/*
-	 * Disarm any old timer after extracting its expiry time.
-	 */
-	old_incr = timer->it_interval;
+	/* Retrieve the current expiry time before disarming the timer */
 	old_expires = cpu_timer_getexpires(ctmr);
 
 	if (unlikely(timer->it.cpu.firing)) {
@@ -673,157 +671,122 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
 		ret = TIMER_RETRY;
 	} else {
 		cpu_timer_dequeue(ctmr);
+		timer->it_active = 0;
 	}
 
 	/*
-	 * We need to sample the current value to convert the new
-	 * value from to relative and absolute, and to convert the
-	 * old value from absolute to relative. To set a process
-	 * timer, we need a sample to balance the thread expiry
-	 * times (in arm_timer). With an absolute time, we must
-	 * check if it's already passed. In short, we need a sample.
+	 * Sample the current clock for saving the previous setting
+	 * and for rearming the timer.
 	 */
 	if (CPUCLOCK_PERTHREAD(timer->it_clock))
-		val = cpu_clock_sample(clkid, p);
+		now = cpu_clock_sample(clkid, p);
 	else
-		val = cpu_clock_sample_group(clkid, p, true);
+		now = cpu_clock_sample_group(clkid, p, !sigev_none);
 
+	/* Retrieve the previous expiry value if requested. */
 	if (old) {
-		if (old_expires == 0) {
-			old->it_value.tv_sec = 0;
-			old->it_value.tv_nsec = 0;
-		} else {
-			/*
-			 * Update the timer in case it has overrun already.
-			 * If it has, we'll report it as having overrun and
-			 * with the next reloaded timer already ticking,
-			 * though we are swallowing that pending
-			 * notification here to install the new setting.
-			 */
-			u64 exp = bump_cpu_timer(timer, val);
-
-			if (val < exp) {
-				old_expires = exp - val;
-				old->it_value = ns_to_timespec64(old_expires);
-			} else {
-				old->it_value.tv_nsec = 1;
-				old->it_value.tv_sec = 0;
-			}
-		}
+		old->it_value = (struct timespec64){ };
+		if (old_expires)
+			__posix_cpu_timer_get(timer, old, now);
 	}
 
+	/* Retry if the timer expiry is running concurrently */
 	if (unlikely(ret)) {
-		/*
-		 * We are colliding with the timer actually firing.
-		 * Punt after filling in the timer's old value, and
-		 * disable this firing since we are already reporting
-		 * it as an overrun (thanks to bump_cpu_timer above).
-		 */
 		unlock_task_sighand(p, &flags);
 		goto out;
 	}
 
-	if (new_expires != 0 && !(timer_flags & TIMER_ABSTIME)) {
-		new_expires += val;
-	}
+	/* Convert relative expiry time to absolute */
+	if (new_expires && !(timer_flags & TIMER_ABSTIME))
+		new_expires += now;
+
+	/* Set the new expiry time (might be 0) */
+	cpu_timer_setexpires(ctmr, new_expires);
 
 	/*
-	 * Install the new expiry time (or zero).
-	 * For a timer with no notification action, we don't actually
-	 * arm the timer (we'll just fake it for timer_gettime).
+	 * Arm the timer if it is not disabled, the new expiry value has
+	 * not yet expired and the timer requires signal delivery.
+	 * SIGEV_NONE timers are never armed. In case the timer is not
+	 * armed, enforce the reevaluation of the timer base so that the
+	 * process wide cputime counter can be disabled eventually.
 	 */
-	cpu_timer_setexpires(ctmr, new_expires);
-	if (new_expires != 0 && val < new_expires) {
-		arm_timer(timer, p);
+	if (likely(!sigev_none)) {
+		if (new_expires && now < new_expires)
+			arm_timer(timer, p);
+		else
+			trigger_base_recalc_expires(timer, p);
 	}
 
 	unlock_task_sighand(p, &flags);
+
+	posix_timer_set_common(timer, new);
+
 	/*
-	 * Install the new reload setting, and
-	 * set up the signal and overrun bookkeeping.
-	 */
-	timer->it_interval = timespec64_to_ktime(new->it_interval);
-
-	/*
-	 * This acts as a modification timestamp for the timer,
-	 * so any automatic reload attempt will punt on seeing
-	 * that we have reset the timer manually.
+	 * If the new expiry time was already in the past the timer was not
+	 * queued. Fire it immediately even if the thread never runs to
+	 * accumulate more time on this clock.
 	 */
-	timer->it_requeue_pending = (timer->it_requeue_pending + 2) &
-		~REQUEUE_PENDING;
-	timer->it_overrun_last = 0;
-	timer->it_overrun = -1;
-
-	if (val >= new_expires) {
-		if (new_expires != 0) {
-			/*
-			 * The designated time already passed, so we notify
-			 * immediately, even if the thread never runs to
-			 * accumulate more time on this clock.
-			 */
-			cpu_timer_fire(timer);
-		}
-
-		/*
-		 * Make sure we don't keep around the process wide cputime
-		 * counter or the tick dependency if they are not necessary.
-		 */
-		sighand = lock_task_sighand(p, &flags);
-		if (!sighand)
-			goto out;
-
-		if (!cpu_timer_queued(ctmr))
-			trigger_base_recalc_expires(timer, p);
-
-		unlock_task_sighand(p, &flags);
-	}
- out:
+	if (!sigev_none && new_expires && now >= new_expires)
+		cpu_timer_fire(timer);
+out:
 	rcu_read_unlock();
-	if (old)
-		old->it_interval = ns_to_timespec64(old_incr);
-
 	return ret;
 }
 
+static void __posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec64 *itp, u64 now)
+{
+	bool sigev_none = timer->it_sigev_notify == SIGEV_NONE;
+	u64 expires, iv = timer->it_interval;
+
+	/*
+	 * Make sure that interval timers are moved forward for the
+	 * following cases:
+	 *  - SIGEV_NONE timers which are never armed
+	 *  - Timers which expired, but the signal has not yet been
+	 *    delivered
+	 */
+	if (iv && ((timer->it_requeue_pending & REQUEUE_PENDING) || sigev_none))
+		expires = bump_cpu_timer(timer, now);
+	else
+		expires = cpu_timer_getexpires(&timer->it.cpu);
+
+	/*
+	 * Expired interval timers cannot have a remaining time <= 0.
+	 * The kernel has to move them forward so that the next
+	 * timer expiry is > @now.
+	 */
+	if (now < expires) {
+		itp->it_value = ns_to_timespec64(expires - now);
+	} else {
+		/*
+		 * A single shot SIGEV_NONE timer must return 0, when it is
+		 * expired! Timers which have a real signal delivery mode
+		 * must return a remaining time greater than 0 because the
+		 * signal has not yet been delivered.
+		 */
+		if (!sigev_none)
+			itp->it_value.tv_nsec = 1;
+	}
+}
+
 static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec64 *itp)
 {
 	clockid_t clkid = CPUCLOCK_WHICH(timer->it_clock);
+	struct cpu_timer *ctmr = &timer->it.cpu;
+	u64 now, expires = cpu_timer_getexpires(ctmr);
 	struct task_struct *p;
-	u64 now;
 
 	rcu_read_lock();
 	p = cpu_timer_task_rcu(timer);
-	if (p && cpu_timer_getexpires(&timer->it.cpu)) {
-		itp->it_interval = ktime_to_timespec64(timer->it_interval);
+	if (!p)
+		goto out;
 
-		if (CPUCLOCK_PERTHREAD(timer->it_clock))
-			now = cpu_clock_sample(clkid, p);
-		else
-			now = cpu_clock_sample_group(clkid, p, false);
+	/*
+	 * Easy part: convert the reload time.
+	 */
+	itp->it_interval = ktime_to_timespec64(timer->it_interval);
 
-		if (now < expires) {
-			itp->it_value = ns_to_timespec64(expires - now);
-		} else {
-			/*
-			 * The timer should have expired already, but the firing
-			 * hasn't taken place yet. Say it's just about to expire.
-			 */
-			itp->it_value.tv_nsec = 1;
-			itp->it_value.tv_sec = 0;
-		}
-	}
+	if (!expires)
+		goto out;
+
+	/*
+	 * Sample the clock to take the difference with the expiry time.
+	 */
+	if (CPUCLOCK_PERTHREAD(timer->it_clock))
+		now = cpu_clock_sample(clkid, p);
+	else
+		now = cpu_clock_sample_group(clkid, p, false);
+
+	__posix_cpu_timer_get(timer, itp, now);
+out:
 	rcu_read_unlock();
 }
 
@@ -277,10 +277,17 @@ void posixtimer_rearm(struct kernel_siginfo *info)
 	unlock_timer(timr, flags);
 }
 
-int posix_timer_event(struct k_itimer *timr, int si_private)
+int posix_timer_queue_signal(struct k_itimer *timr)
 {
+	int ret, si_private = 0;
 	enum pid_type type;
-	int ret;
+
+	lockdep_assert_held(&timr->it_lock);
+
+	timr->it_active = 0;
+	if (timr->it_interval)
+		si_private = ++timr->it_requeue_pending;
 
 	/*
 	 * FIXME: if ->sigq is queued we can race with
 	 * dequeue_signal()->posixtimer_rearm().
@@ -309,19 +316,13 @@ int posix_timer_queue_signal(struct k_itimer *timr)
  */
 static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
 {
+	struct k_itimer *timr = container_of(timer, struct k_itimer, it.real.timer);
 	enum hrtimer_restart ret = HRTIMER_NORESTART;
-	struct k_itimer *timr;
 	unsigned long flags;
-	int si_private = 0;
 
-	timr = container_of(timer, struct k_itimer, it.real.timer);
 	spin_lock_irqsave(&timr->it_lock, flags);
 
-	timr->it_active = 0;
-	if (timr->it_interval != 0)
-		si_private = ++timr->it_requeue_pending;
-
-	if (posix_timer_event(timr, si_private)) {
+	if (posix_timer_queue_signal(timr)) {
 		/*
 		 * The signal was not queued due to SIG_IGN. As a
 		 * consequence the timer is not going to be rearmed from
@@ -338,14 +339,14 @@ static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
 		 * change to the signal handling code.
 		 *
 		 * For now let timers with an interval less than a
-		 * jiffie expire every jiffie and recheck for a
+		 * jiffy expire every jiffy and recheck for a
 		 * valid signal handler.
 		 *
 		 * This avoids interrupt starvation in case of a
 		 * very small interval, which would expire the
 		 * timer immediately again.
 		 *
-		 * Moving now ahead of time by one jiffie tricks
+		 * Moving now ahead of time by one jiffy tricks
 		 * hrtimer_forward() to expire the timer later,
 		 * while it still maintains the overrun accuracy
 		 * for the price of a slight inconsistency in the
@@ -515,7 +516,7 @@ static int do_timer_create(clockid_t which_clock, struct sigevent *event,
 	spin_lock_irq(&current->sighand->siglock);
 	/* This makes the timer valid in the hash table */
 	WRITE_ONCE(new_timer->it_signal, current->signal);
-	list_add(&new_timer->list, &current->signal->posix_timers);
+	hlist_add_head(&new_timer->list, &current->signal->posix_timers);
 	spin_unlock_irq(&current->sighand->siglock);
 	/*
 	 * After unlocking sighand::siglock @new_timer is subject to
|
||||
return lock_timer(timer_id, flags);
|
||||
}
|
||||
|
||||
/*
|
||||
* Set up the new interval and reset the signal delivery data
|
||||
*/
|
||||
void posix_timer_set_common(struct k_itimer *timer, struct itimerspec64 *new_setting)
|
||||
{
|
||||
if (new_setting->it_value.tv_sec || new_setting->it_value.tv_nsec)
|
||||
timer->it_interval = timespec64_to_ktime(new_setting->it_interval);
|
||||
else
|
||||
timer->it_interval = 0;
|
||||
|
||||
/* Prevent reloading in case there is a signal pending */
|
||||
timer->it_requeue_pending = (timer->it_requeue_pending + 2) & ~REQUEUE_PENDING;
|
||||
/* Reset overrun accounting */
|
||||
timer->it_overrun_last = 0;
|
||||
timer->it_overrun = -1LL;
|
||||
}
|
||||
|
||||
/* Set a POSIX.1b interval timer. */
|
||||
int common_timer_set(struct k_itimer *timr, int flags,
|
||||
struct itimerspec64 *new_setting,
|
||||
@@ -878,15 +896,12 @@ int common_timer_set(struct k_itimer *timr, int flags,
 		return TIMER_RETRY;
 
 	timr->it_active = 0;
-	timr->it_requeue_pending = (timr->it_requeue_pending + 2) &
-		~REQUEUE_PENDING;
-	timr->it_overrun_last = 0;
+	posix_timer_set_common(timr, new_setting);
 
-	/* Switch off the timer when it_value is zero */
+	/* Keep timer disarmed when it_value is zero */
 	if (!new_setting->it_value.tv_sec && !new_setting->it_value.tv_nsec)
 		return 0;
 
-	timr->it_interval = timespec64_to_ktime(new_setting->it_interval);
 	expires = timespec64_to_ktime(new_setting->it_value);
 	if (flags & TIMER_ABSTIME)
 		expires = timens_ktime_to_host(timr->it_clock, expires);
@@ -904,7 +919,7 @@ static int do_timer_settime(timer_t timer_id, int tmr_flags,
 	const struct k_clock *kc;
 	struct k_itimer *timr;
 	unsigned long flags;
-	int error = 0;
+	int error;
 
 	if (!timespec64_valid(&new_spec64->it_interval) ||
 	    !timespec64_valid(&new_spec64->it_value))
@@ -918,6 +933,9 @@ retry:
 	if (!timr)
 		return -EINVAL;
 
+	if (old_spec64)
+		old_spec64->it_interval = ktime_to_timespec64(timr->it_interval);
+
 	kc = timr->kclock;
 	if (WARN_ON_ONCE(!kc || !kc->timer_set))
 		error = -EINVAL;
@@ -1021,7 +1039,7 @@ retry_delete:
 	}
 
 	spin_lock(&current->sighand->siglock);
-	list_del(&timer->list);
+	hlist_del(&timer->list);
 	spin_unlock(&current->sighand->siglock);
 	/*
 	 * A concurrent lookup could check timer::it_signal lockless. It
@@ -1071,7 +1089,7 @@ retry_delete:
 
 		goto retry_delete;
 	}
-	list_del(&timer->list);
+	hlist_del(&timer->list);
 
 	/*
 	 * Setting timer::it_signal to NULL is technically not required
@@ -1092,22 +1110,19 @@ retry_delete:
  */
 void exit_itimers(struct task_struct *tsk)
 {
-	struct list_head timers;
-	struct k_itimer *tmr;
+	struct hlist_head timers;
 
-	if (list_empty(&tsk->signal->posix_timers))
+	if (hlist_empty(&tsk->signal->posix_timers))
 		return;
 
 	/* Protect against concurrent read via /proc/$PID/timers */
 	spin_lock_irq(&tsk->sighand->siglock);
-	list_replace_init(&tsk->signal->posix_timers, &timers);
+	hlist_move_list(&tsk->signal->posix_timers, &timers);
 	spin_unlock_irq(&tsk->sighand->siglock);
 
 	/* The timers are not longer accessible via tsk::signal */
-	while (!list_empty(&timers)) {
-		tmr = list_first_entry(&timers, struct k_itimer, list);
-		itimer_delete(tmr);
-	}
+	while (!hlist_empty(&timers))
+		itimer_delete(hlist_entry(timers.first, struct k_itimer, list));
 }
 
 SYSCALL_DEFINE2(clock_settime, const clockid_t, which_clock,
@@ -36,10 +36,11 @@ extern const struct k_clock clock_process;
 extern const struct k_clock clock_thread;
 extern const struct k_clock alarm_clock;
 
-int posix_timer_event(struct k_itimer *timr, int si_private);
+int posix_timer_queue_signal(struct k_itimer *timr);
 
 void common_timer_get(struct k_itimer *timr, struct itimerspec64 *cur_setting);
 int common_timer_set(struct k_itimer *timr, int flags,
 		     struct itimerspec64 *new_setting,
 		     struct itimerspec64 *old_setting);
+void posix_timer_set_common(struct k_itimer *timer, struct itimerspec64 *new_setting);
 int common_timer_del(struct k_itimer *timer);
@@ -2553,6 +2553,7 @@ int do_adjtimex(struct __kernel_timex *txc)
 {
 	struct timekeeper *tk = &tk_core.timekeeper;
 	struct audit_ntp_data ad;
+	bool offset_set = false;
 	bool clock_set = false;
 	struct timespec64 ts;
 	unsigned long flags;
@@ -2575,6 +2576,7 @@ int do_adjtimex(struct __kernel_timex *txc)
 		if (ret)
 			return ret;
 
+		offset_set = delta.tv_sec != 0;
 		audit_tk_injoffset(delta);
 	}
 
@@ -2608,7 +2610,7 @@ int do_adjtimex(struct __kernel_timex *txc)
 	if (clock_set)
 		clock_was_set(CLOCK_SET_WALL);
 
-	ntp_notify_cmos_timer();
+	ntp_notify_cmos_timer(offset_set);
 
 	return ret;
 }
@@ -365,7 +365,7 @@ static unsigned long round_jiffies_common(unsigned long j, int cpu,
 	rem = j % HZ;
 
 	/*
-	 * If the target jiffie is just after a whole second (which can happen
+	 * If the target jiffy is just after a whole second (which can happen
 	 * due to delays of the timer irq, long irq off times etc etc) then
 	 * we should round down to the whole second, not up. Use 1/4th second
 	 * as cutoff for this rounding as an extreme upper bound for this.
@@ -672,7 +672,7 @@ static void enqueue_timer(struct timer_base *base, struct timer_list *timer,
 		 * Set the next expiry time and kick the CPU so it
 		 * can reevaluate the wheel:
 		 */
-		base->next_expiry = bucket_expiry;
+		WRITE_ONCE(base->next_expiry, bucket_expiry);
 		base->timers_pending = true;
 		base->next_expiry_recalc = false;
 		trigger_dyntick_cpu(base, timer);
@@ -1561,6 +1563,8 @@ static inline void timer_base_unlock_expiry(struct timer_base *base)
  * the waiter to acquire the lock and make progress.
  */
 static void timer_sync_wait_running(struct timer_base *base)
+	__releases(&base->lock) __releases(&base->expiry_lock)
+	__acquires(&base->expiry_lock) __acquires(&base->lock)
 {
 	if (atomic_read(&base->timer_waiters)) {
 		raw_spin_unlock_irq(&base->lock);
@@ -1898,7 +1900,7 @@ static int next_pending_bucket(struct timer_base *base, unsigned offset,
  *
  * Store next expiry time in base->next_expiry.
  */
-static void next_expiry_recalc(struct timer_base *base)
+static void timer_recalc_next_expiry(struct timer_base *base)
 {
 	unsigned long clk, next, adj;
 	unsigned lvl, offset = 0;
@@ -1928,7 +1930,7 @@ static void timer_recalc_next_expiry(struct timer_base *base)
 		 * bits are zero, we look at the next level as is. If not we
 		 * need to advance it by one because that's going to be the
 		 * next expiring bucket in that level. base->clk is the next
-		 * expiring jiffie. So in case of:
+		 * expiring jiffy. So in case of:
 		 *
 		 * LVL5 LVL4 LVL3 LVL2 LVL1 LVL0
 		 *  0    0    0    0    0    0
@@ -1964,7 +1966,7 @@ static void timer_recalc_next_expiry(struct timer_base *base)
 		clk += adj;
 	}
 
-	base->next_expiry = next;
+	WRITE_ONCE(base->next_expiry, next);
 	base->next_expiry_recalc = false;
 	base->timers_pending = !(next == base->clk + NEXT_TIMER_MAX_DELTA);
 }
@ -1993,7 +1995,7 @@ static u64 cmp_next_hrtimer_event(u64 basem, u64 expires)
|
||||
return basem;
|
||||
|
||||
/*
|
||||
* Round up to the next jiffie. High resolution timers are
|
||||
* Round up to the next jiffy. High resolution timers are
|
||||
* off, so the hrtimers are expired in the tick and we need to
|
||||
* make sure that this tick really expires the timer to avoid
|
||||
* a ping pong of the nohz stop code.
|
||||
@ -2007,7 +2009,7 @@ static unsigned long next_timer_interrupt(struct timer_base *base,
|
||||
unsigned long basej)
|
||||
{
|
||||
if (base->next_expiry_recalc)
|
||||
next_expiry_recalc(base);
|
||||
timer_recalc_next_expiry(base);
|
||||
|
||||
/*
|
||||
* Move next_expiry for the empty base into the future to prevent an
|
||||
@ -2018,7 +2020,7 @@ static unsigned long next_timer_interrupt(struct timer_base *base,
|
||||
* easy comparable to find out which base holds the first pending timer.
|
||||
*/
|
||||
if (!base->timers_pending)
|
||||
base->next_expiry = basej + NEXT_TIMER_MAX_DELTA;
|
||||
WRITE_ONCE(base->next_expiry, basej + NEXT_TIMER_MAX_DELTA);
|
||||
|
||||
return base->next_expiry;
|
||||
}
|
||||
@ -2252,7 +2254,7 @@ static inline u64 __get_next_timer_interrupt(unsigned long basej, u64 basem,
|
||||
base_global, &tevt);
|
||||
|
||||
/*
|
||||
* If the next event is only one jiffie ahead there is no need to call
|
||||
* If the next event is only one jiffy ahead there is no need to call
|
||||
* timer migration hierarchy related functions. The value for the next
|
||||
* global timer in @tevt struct equals then KTIME_MAX. This is also
|
||||
* true, when the timer base is idle.
|
||||
@ -2411,7 +2413,7 @@ static inline void __run_timers(struct timer_base *base)
|
||||
* jiffies to avoid endless requeuing to current jiffies.
|
||||
*/
|
||||
base->clk++;
|
||||
next_expiry_recalc(base);
|
||||
timer_recalc_next_expiry(base);
|
||||
|
||||
while (levels--)
|
||||
expire_timers(base, heads + levels);
|
||||
@ -2462,8 +2464,40 @@ static void run_local_timers(void)
|
||||
hrtimer_run_queues();
|
||||
|
||||
for (int i = 0; i < NR_BASES; i++, base++) {
|
||||
/* Raise the softirq only if required. */
|
||||
if (time_after_eq(jiffies, base->next_expiry) ||
|
||||
/*
|
||||
* Raise the softirq only if required.
|
||||
*
|
||||
* timer_base::next_expiry can be written by a remote CPU while
|
||||
* holding the lock. If this write happens at the same time than
|
||||
* the lockless local read, sanity checker could complain about
|
||||
* data corruption.
|
||||
*
|
||||
* There are two possible situations where
|
||||
* timer_base::next_expiry is written by a remote CPU:
|
||||
*
|
||||
* 1. Remote CPU expires global timers of this CPU and updates
|
||||
* timer_base::next_expiry of BASE_GLOBAL afterwards in
|
||||
* next_timer_interrupt() or timer_recalc_next_expiry(). The
|
||||
* worst outcome is a superfluous raise of the timer softirq
|
||||
* when the not yet updated value is read.
|
||||
*
|
||||
* 2. A new first pinned timer is enqueued by a remote CPU
|
||||
* and therefore timer_base::next_expiry of BASE_LOCAL is
|
||||
* updated. When this update is missed, this isn't a
|
||||
* problem, as an IPI is executed nevertheless when the CPU
|
||||
* was idle before. When the CPU wasn't idle but the update
|
||||
* is missed, then the timer would expire one jiffy late -
|
||||
* bad luck.
|
||||
*
|
||||
* Those unlikely corner cases where the worst outcome is only a
|
||||
* one jiffy delay or a superfluous raise of the softirq are
|
||||
* not that expensive as doing the check always while holding
|
||||
* the lock.
|
||||
*
|
||||
* Possible remote writers are using WRITE_ONCE(). Local reader
|
||||
* uses therefore READ_ONCE().
|
||||
*/
|
||||
if (time_after_eq(jiffies, READ_ONCE(base->next_expiry)) ||
|
||||
(i == BASE_DEF && tmigr_requires_handle_remote())) {
|
||||
raise_softirq(TIMER_SOFTIRQ);
|
||||
return;
|
||||
@@ -2730,7 +2764,7 @@ void __init init_timers(void)
  */
 void msleep(unsigned int msecs)
 {
-	unsigned long timeout = msecs_to_jiffies(msecs) + 1;
+	unsigned long timeout = msecs_to_jiffies(msecs);
 
 	while (timeout)
 		timeout = schedule_timeout_uninterruptible(timeout);
@@ -2744,7 +2778,7 @@ EXPORT_SYMBOL(msleep);
  */
 unsigned long msleep_interruptible(unsigned int msecs)
 {
-	unsigned long timeout = msecs_to_jiffies(msecs) + 1;
+	unsigned long timeout = msecs_to_jiffies(msecs);
 
 	while (timeout && !signal_pending(current))
 		timeout = schedule_timeout_interruptible(timeout);
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -97,7 +97,7 @@ config BOOT_PRINTK_DELAY
 	  using "boot_delay=N".
 
 	  It is likely that you would also need to use "lpj=M" to preset
-	  the "loops per jiffie" value.
+	  the "loops per jiffy" value.
 	  See a previous boot log for the "lpj" value to use for your
 	  system, and then set "lpj=M" before setting "boot_delay=N".
 	  NOTE: Using this option may adversely affect SMP systems.
--- a/net/batman-adv/types.h
+++ b/net/batman-adv/types.h
@@ -287,7 +287,7 @@ struct batadv_frag_table_entry {
 	/** @lock: lock to protect the list of fragments */
 	spinlock_t lock;
 
-	/** @timestamp: time (jiffie) of last received fragment */
+	/** @timestamp: time (jiffy) of last received fragment */
 	unsigned long timestamp;
 
 	/** @seqno: sequence number of the fragments in the list */
--- a/tools/testing/selftests/timers/posix_timers.c
+++ b/tools/testing/selftests/timers/posix_timers.c
@@ -6,10 +6,13 @@
  *
  * Kernel loop code stolen from Steven Rostedt <srostedt@redhat.com>
  */
+#define _GNU_SOURCE
 #include <sys/time.h>
 #include <sys/types.h>
 #include <stdio.h>
 #include <signal.h>
+#include <stdint.h>
+#include <string.h>
 #include <unistd.h>
 #include <time.h>
 #include <pthread.h>
@@ -18,6 +21,21 @@
 
 #define DELAY 2
 #define USECS_PER_SEC 1000000
+#define NSECS_PER_SEC 1000000000
+
+static void __fatal_error(const char *test, const char *name, const char *what)
+{
+	char buf[64];
+
+	strerror_r(errno, buf, sizeof(buf));
+
+	if (name && strlen(name))
+		ksft_exit_fail_msg("%s %s %s %s\n", test, name, what, buf);
+	else
+		ksft_exit_fail_msg("%s %s %s\n", test, what, buf);
+}
+
+#define fatal_error(name, what)	__fatal_error(__func__, name, what)
 
 static volatile int done;
 
@@ -74,24 +92,13 @@ static int check_diff(struct timeval start, struct timeval end)
 	return 0;
 }
 
-static int check_itimer(int which)
+static void check_itimer(int which, const char *name)
 {
-	const char *name;
-	int err;
 	struct timeval start, end;
 	struct itimerval val = {
 		.it_value.tv_sec = DELAY,
 	};
 
-	if (which == ITIMER_VIRTUAL)
-		name = "ITIMER_VIRTUAL";
-	else if (which == ITIMER_PROF)
-		name = "ITIMER_PROF";
-	else if (which == ITIMER_REAL)
-		name = "ITIMER_REAL";
-	else
-		return -1;
-
 	done = 0;
 
 	if (which == ITIMER_VIRTUAL)
@@ -101,17 +108,11 @@ static int check_itimer(int which)
 	else if (which == ITIMER_REAL)
 		signal(SIGALRM, sig_handler);
 
-	err = gettimeofday(&start, NULL);
-	if (err < 0) {
-		ksft_perror("Can't call gettimeofday()");
-		return -1;
-	}
+	if (gettimeofday(&start, NULL) < 0)
+		fatal_error(name, "gettimeofday()");
 
-	err = setitimer(which, &val, NULL);
-	if (err < 0) {
-		ksft_perror("Can't set timer");
-		return -1;
-	}
+	if (setitimer(which, &val, NULL) < 0)
+		fatal_error(name, "setitimer()");
 
 	if (which == ITIMER_VIRTUAL)
 		user_loop();
@@ -120,68 +121,41 @@ static int check_itimer(int which)
 	else if (which == ITIMER_REAL)
 		idle_loop();
 
-	err = gettimeofday(&end, NULL);
-	if (err < 0) {
-		ksft_perror("Can't call gettimeofday()");
-		return -1;
-	}
+	if (gettimeofday(&end, NULL) < 0)
+		fatal_error(name, "gettimeofday()");
 
 	ksft_test_result(check_diff(start, end) == 0, "%s\n", name);
-
-	return 0;
 }
 
-static int check_timer_create(int which)
+static void check_timer_create(int which, const char *name)
 {
-	const char *type;
-	int err;
-	timer_t id;
 	struct timeval start, end;
 	struct itimerspec val = {
 		.it_value.tv_sec = DELAY,
 	};
-
-	if (which == CLOCK_THREAD_CPUTIME_ID) {
-		type = "thread";
-	} else if (which == CLOCK_PROCESS_CPUTIME_ID) {
-		type = "process";
-	} else {
-		ksft_print_msg("Unknown timer_create() type %d\n", which);
-		return -1;
-	}
+	timer_t id;
 
 	done = 0;
-	err = timer_create(which, NULL, &id);
-	if (err < 0) {
-		ksft_perror("Can't create timer");
-		return -1;
-	}
-	signal(SIGALRM, sig_handler);
 
-	err = gettimeofday(&start, NULL);
-	if (err < 0) {
-		ksft_perror("Can't call gettimeofday()");
-		return -1;
-	}
+	if (timer_create(which, NULL, &id) < 0)
+		fatal_error(name, "timer_create()");
 
-	err = timer_settime(id, 0, &val, NULL);
-	if (err < 0) {
-		ksft_perror("Can't set timer");
-		return -1;
-	}
+	if (signal(SIGALRM, sig_handler) == SIG_ERR)
+		fatal_error(name, "signal()");
+
+	if (gettimeofday(&start, NULL) < 0)
+		fatal_error(name, "gettimeofday()");
+
+	if (timer_settime(id, 0, &val, NULL) < 0)
+		fatal_error(name, "timer_settime()");
 
 	user_loop();
 
-	err = gettimeofday(&end, NULL);
-	if (err < 0) {
-		ksft_perror("Can't call gettimeofday()");
-		return -1;
-	}
+	if (gettimeofday(&end, NULL) < 0)
+		fatal_error(name, "gettimeofday()");
 
 	ksft_test_result(check_diff(start, end) == 0,
-			 "timer_create() per %s\n", type);
-
-	return 0;
+			 "timer_create() per %s\n", name);
 }
 
 static pthread_t ctd_thread;
@@ -209,15 +183,14 @@ static void *ctd_thread_func(void *arg)
 
 	ctd_count = 100;
 	if (timer_create(CLOCK_PROCESS_CPUTIME_ID, NULL, &id))
-		return "Can't create timer\n";
+		fatal_error(NULL, "timer_create()");
 	if (timer_settime(id, 0, &val, NULL))
-		return "Can't set timer\n";
-
+		fatal_error(NULL, "timer_settime()");
 	while (ctd_count > 0 && !ctd_failed)
 		;
 
 	if (timer_delete(id))
-		return "Can't delete timer\n";
+		fatal_error(NULL, "timer_delete()");
 
 	return NULL;
 }
@@ -225,19 +198,16 @@ static void *ctd_thread_func(void *arg)
 /*
  * Test that only the running thread receives the timer signal.
  */
-static int check_timer_distribution(void)
+static void check_timer_distribution(void)
 {
-	const char *errmsg;
+	if (signal(SIGALRM, ctd_sighandler) == SIG_ERR)
+		fatal_error(NULL, "signal()");
 
-	signal(SIGALRM, ctd_sighandler);
-
-	errmsg = "Can't create thread\n";
 	if (pthread_create(&ctd_thread, NULL, ctd_thread_func, NULL))
-		goto err;
+		fatal_error(NULL, "pthread_create()");
 
-	errmsg = "Can't join thread\n";
-	if (pthread_join(ctd_thread, (void **)&errmsg) || errmsg)
-		goto err;
+	if (pthread_join(ctd_thread, NULL))
+		fatal_error(NULL, "pthread_join()");
 
 	if (!ctd_failed)
 		ksft_test_result_pass("check signal distribution\n");
@@ -245,31 +215,399 @@ static int check_timer_distribution(void)
 		ksft_test_result_fail("check signal distribution\n");
 	else
 		ksft_test_result_skip("check signal distribution (old kernel)\n");
-	return 0;
-err:
-	ksft_print_msg("%s", errmsg);
-	return -1;
 }
+
+struct tmrsig {
+	int	signals;
+	int	overruns;
+};
+
+static void siginfo_handler(int sig, siginfo_t *si, void *uc)
+{
+	struct tmrsig *tsig = si ? si->si_ptr : NULL;
+
+	if (tsig) {
+		tsig->signals++;
+		tsig->overruns += si->si_overrun;
+	}
+}
+
+static void *ignore_thread(void *arg)
+{
+	unsigned int *tid = arg;
+	sigset_t set;
+
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_BLOCK)");
+
+	*tid = gettid();
+	sleep(100);
+
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_UNBLOCK)");
+	return NULL;
+}
+
+static void check_sig_ign(int thread)
+{
+	struct tmrsig tsig = { };
+	struct itimerspec its;
+	unsigned int tid = 0;
+	struct sigaction sa;
+	struct sigevent sev;
+	pthread_t pthread;
+	timer_t timerid;
+	sigset_t set;
+
+	if (thread) {
+		if (pthread_create(&pthread, NULL, ignore_thread, &tid))
+			fatal_error(NULL, "pthread_create()");
+		sleep(1);
+	}
+
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = siginfo_handler;
+	sigemptyset(&sa.sa_mask);
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(NULL, "sigaction()");
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+	sev.sigev_value.sival_ptr = &tsig;
+	if (thread) {
+		sev.sigev_notify = SIGEV_THREAD_ID;
+		sev._sigev_un._tid = tid;
+	}
+
+	if (timer_create(CLOCK_MONOTONIC, &sev, &timerid))
+		fatal_error(NULL, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	timer_settime(timerid, 0, &its, NULL);
+
+	sleep(1);
+
+	/* Set the signal to be ignored */
+	if (signal(SIGUSR1, SIG_IGN) == SIG_ERR)
+		fatal_error(NULL, "signal(SIG_IGN)");
+
+	sleep(1);
+
+	if (thread) {
+		/* Stop the thread first. No signal should be delivered to it */
+		if (pthread_cancel(pthread))
+			fatal_error(NULL, "pthread_cancel()");
+		if (pthread_join(pthread, NULL))
+			fatal_error(NULL, "pthread_join()");
+	}
+
+	/* Restore the handler */
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(NULL, "sigaction()");
+
+	sleep(1);
+
+	/* Unblock it, which should deliver the signal in the !thread case*/
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_UNBLOCK)");
+
+	if (timer_delete(timerid))
+		fatal_error(NULL, "timer_delete()");
+
+	if (!thread) {
+		ksft_test_result(tsig.signals == 1 && tsig.overruns == 29,
+				 "check_sig_ign SIGEV_SIGNAL\n");
+	} else {
+		ksft_test_result(tsig.signals == 0 && tsig.overruns == 0,
+				 "check_sig_ign SIGEV_THREAD_ID\n");
+	}
+}
+
+static void check_rearm(void)
+{
+	struct tmrsig tsig = { };
+	struct itimerspec its;
+	struct sigaction sa;
+	struct sigevent sev;
+	timer_t timerid;
+	sigset_t set;
+
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = siginfo_handler;
+	sigemptyset(&sa.sa_mask);
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(NULL, "sigaction()");
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+	sev.sigev_value.sival_ptr = &tsig;
+	if (timer_create(CLOCK_MONOTONIC, &sev, &timerid))
+		fatal_error(NULL, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(NULL, "timer_settime()");
+
+	sleep(1);
+
+	/* Reprogram the timer to single shot */
+	its.it_value.tv_sec = 10;
+	its.it_value.tv_nsec = 0;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 0;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(NULL, "timer_settime()");
+
+	/* Unblock it, which should not deliver a signal */
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_UNBLOCK)");
+
+	if (timer_delete(timerid))
+		fatal_error(NULL, "timer_delete()");
+
+	ksft_test_result(!tsig.signals, "check_rearm\n");
+}
+
+static void check_delete(void)
+{
+	struct tmrsig tsig = { };
+	struct itimerspec its;
+	struct sigaction sa;
+	struct sigevent sev;
+	timer_t timerid;
+	sigset_t set;
+
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = siginfo_handler;
+	sigemptyset(&sa.sa_mask);
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(NULL, "sigaction()");
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+	sev.sigev_value.sival_ptr = &tsig;
+	if (timer_create(CLOCK_MONOTONIC, &sev, &timerid))
+		fatal_error(NULL, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(NULL, "timer_settime()");
+
+	sleep(1);
+
+	if (timer_delete(timerid))
+		fatal_error(NULL, "timer_delete()");
+
+	/* Unblock it, which should not deliver a signal */
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_UNBLOCK)");
+
+	ksft_test_result(!tsig.signals, "check_delete\n");
+}
+
+static inline int64_t calcdiff_ns(struct timespec t1, struct timespec t2)
+{
+	int64_t diff;
+
+	diff = NSECS_PER_SEC * (int64_t)((int) t1.tv_sec - (int) t2.tv_sec);
+	diff += ((int) t1.tv_nsec - (int) t2.tv_nsec);
+	return diff;
+}
+
+static void check_sigev_none(int which, const char *name)
+{
+	struct timespec start, now;
+	struct itimerspec its;
+	struct sigevent sev;
+	timer_t timerid;
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_NONE;
+
+	if (timer_create(which, &sev, &timerid))
+		fatal_error(name, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	timer_settime(timerid, 0, &its, NULL);
+
+	if (clock_gettime(which, &start))
+		fatal_error(name, "clock_gettime()");
+
+	do {
+		if (clock_gettime(which, &now))
+			fatal_error(name, "clock_gettime()");
+	} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+
+	if (timer_gettime(timerid, &its))
+		fatal_error(name, "timer_gettime()");
+
+	if (timer_delete(timerid))
+		fatal_error(name, "timer_delete()");
+
+	ksft_test_result(its.it_value.tv_sec || its.it_value.tv_nsec,
+			 "check_sigev_none %s\n", name);
+}
+
+static void check_gettime(int which, const char *name)
+{
+	struct itimerspec its, prev;
+	struct timespec start, now;
+	struct sigevent sev;
+	timer_t timerid;
+	int wraps = 0;
+	sigset_t set;
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(name, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+
+	if (timer_create(which, &sev, &timerid))
+		fatal_error(name, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(name, "timer_settime()");
+
+	if (timer_gettime(timerid, &prev))
+		fatal_error(name, "timer_gettime()");
+
+	if (clock_gettime(which, &start))
+		fatal_error(name, "clock_gettime()");
+
+	do {
+		if (clock_gettime(which, &now))
+			fatal_error(name, "clock_gettime()");
+		if (timer_gettime(timerid, &its))
+			fatal_error(name, "timer_gettime()");
+		if (its.it_value.tv_nsec > prev.it_value.tv_nsec)
+			wraps++;
+		prev = its;
+
+	} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+
+	if (timer_delete(timerid))
+		fatal_error(name, "timer_delete()");
+
+	ksft_test_result(wraps > 1, "check_gettime %s\n", name);
+}
+
+static void check_overrun(int which, const char *name)
+{
+	struct timespec start, now;
+	struct tmrsig tsig = { };
+	struct itimerspec its;
+	struct sigaction sa;
+	struct sigevent sev;
+	timer_t timerid;
+	sigset_t set;
+
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = siginfo_handler;
+	sigemptyset(&sa.sa_mask);
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(name, "sigaction()");
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(name, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+	sev.sigev_value.sival_ptr = &tsig;
+	if (timer_create(which, &sev, &timerid))
+		fatal_error(name, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(name, "timer_settime()");
+
+	if (clock_gettime(which, &start))
+		fatal_error(name, "clock_gettime()");
+
+	do {
+		if (clock_gettime(which, &now))
+			fatal_error(name, "clock_gettime()");
+	} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+
+	/* Unblock it, which should deliver a signal */
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(name, "sigprocmask(SIG_UNBLOCK)");
+
+	if (timer_delete(timerid))
+		fatal_error(name, "timer_delete()");
+
+	ksft_test_result(tsig.signals == 1 && tsig.overruns == 9,
+			 "check_overrun %s\n", name);
+}
 
 int main(int argc, char **argv)
 {
 	ksft_print_header();
-	ksft_set_plan(6);
+	ksft_set_plan(18);
 
 	ksft_print_msg("Testing posix timers. False negative may happen on CPU execution \n");
 	ksft_print_msg("based timers if other threads run on the CPU...\n");
 
-	if (check_itimer(ITIMER_VIRTUAL) < 0)
-		ksft_exit_fail();
-
-	if (check_itimer(ITIMER_PROF) < 0)
-		ksft_exit_fail();
-
-	if (check_itimer(ITIMER_REAL) < 0)
-		ksft_exit_fail();
-
-	if (check_timer_create(CLOCK_THREAD_CPUTIME_ID) < 0)
-		ksft_exit_fail();
+	check_itimer(ITIMER_VIRTUAL, "ITIMER_VIRTUAL");
+	check_itimer(ITIMER_PROF, "ITIMER_PROF");
+	check_itimer(ITIMER_REAL, "ITIMER_REAL");
+	check_timer_create(CLOCK_THREAD_CPUTIME_ID, "CLOCK_THREAD_CPUTIME_ID");
 
 	/*
 	 * It's unfortunately hard to reliably test a timer expiration
@@ -280,11 +618,21 @@ int main(int argc, char **argv)
 	 * to ensure true parallelism. So test only one thread until we
 	 * find a better solution.
 	 */
-	if (check_timer_create(CLOCK_PROCESS_CPUTIME_ID) < 0)
-		ksft_exit_fail();
-
-	if (check_timer_distribution() < 0)
-		ksft_exit_fail();
+	check_timer_create(CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID");
+	check_timer_distribution();
+
+	check_sig_ign(0);
+	check_sig_ign(1);
+	check_rearm();
+	check_delete();
+	check_sigev_none(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
+	check_sigev_none(CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID");
+	check_gettime(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
+	check_gettime(CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID");
+	check_gettime(CLOCK_THREAD_CPUTIME_ID, "CLOCK_THREAD_CPUTIME_ID");
+	check_overrun(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
+	check_overrun(CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID");
+	check_overrun(CLOCK_THREAD_CPUTIME_ID, "CLOCK_THREAD_CPUTIME_ID");
 
 	ksft_finished();
 }