/*
 * drivers/base/power/sysfs.c - sysfs entries for device PM
 */

#include <linux/device.h>
#include <linux/string.h>
#include <linux/export.h>
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
#include <linux/atomic.h>
#include <linux/jiffies.h>
#include "power.h"

/*
 * control - Report/change current runtime PM setting of the device
 *
 * Runtime power management of a device can be blocked with the help of
 * this attribute. All devices have one of the following two values for
 * the power/control file:
 *
 *  + "auto\n" to allow the device to be power managed at run time;
 *  + "on\n" to prevent the device from being power managed at run time;
 *
 * The default for all devices is "auto", which means that devices may be
 * subject to automatic power management, depending on their drivers.
 * Changing this attribute to "on" prevents the driver from power managing
 * the device at run time. Doing that while the device is suspended causes
 * it to be woken up.
 *
 * wakeup - Report/change current wakeup option for device
 *
 * Some devices support "wakeup" events, which are hardware signals
 * used to activate devices from suspended or low power states. Such
 * devices have one of three values for the sysfs power/wakeup file:
 *
 *  + "enabled\n" to issue the events;
 *  + "disabled\n" not to do so; or
 *  + "\n" for temporary or permanent inability to issue wakeup.
 *
 * (For example, unconfigured USB devices can't issue wakeups.)
 *
 * Familiar examples of devices that can issue wakeup events include
 * keyboards and mice (both PS2 and USB styles), power buttons, modems,
 * "Wake-On-LAN" Ethernet links, GPIO lines, and more. Some events
 * will wake the entire system from a suspend state; others may just
 * wake up the device (if the system as a whole is already active).
 * Some wakeup events use normal IRQ lines; others use special
 * out-of-band signaling.
 *
 * It is the responsibility of device drivers to enable (or disable)
 * wakeup signaling as part of changing device power states, respecting
 * the policy choices provided through the driver model.
 *
 * Devices may not be able to generate wakeup events from all power
 * states. Also, the events may be ignored in some configurations;
 * for example, they might need help from other devices that aren't
 * active, or which may have wakeup disabled. Some drivers rely on
 * wakeup events internally (unless they are disabled), keeping
 * their hardware in low power modes whenever they're unused. This
 * saves runtime power, without requiring system-wide sleep states.
 *
 * async - Report/change current async suspend setting for the device
 *
 * Asynchronous suspend and resume of the device during system-wide power
 * state transitions can be enabled by writing "enabled" to this file.
 * Analogously, if "disabled" is written to this file, the device will be
 * suspended and resumed synchronously.
 *
 * All devices have one of the following two values for power/async:
 *
 *  + "enabled\n" to permit the asynchronous suspend/resume of the device;
 *  + "disabled\n" to forbid it;
 *
 * NOTE: It generally is unsafe to permit the asynchronous suspend/resume
 * of a device unless it is certain that all of the PM dependencies of the
 * device are known to the PM core. However, for some devices this
 * attribute is set to "enabled" by bus type code or device drivers and in
 * that case it should be safe to leave the default value.
 *
 * autosuspend_delay_ms - Report/change a device's autosuspend_delay value
 *
 * Some drivers don't want to carry out a runtime suspend as soon as a
 * device becomes idle; they want it always to remain idle for some period
 * of time before suspending it. This period is the autosuspend_delay
 * value (expressed in milliseconds) and it can be controlled by the user.
 * If the value is negative then the device will never be runtime
 * suspended.
 *
 * NOTE: The autosuspend_delay_ms attribute and the autosuspend_delay
 * value are used only if the driver calls pm_runtime_use_autosuspend().
 *
 * wakeup_count - Report the number of wakeup events related to the device
 */
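
/*
 * Illustrative user-space sketch of the attributes documented above; this is
 * not part of the upstream file and the device path is hypothetical:
 *
 *	# let the device be runtime power managed
 *	echo auto > /sys/devices/.../power/control
 *	# keep the device powered up at run time
 *	echo on > /sys/devices/.../power/control
 *	# arm a wakeup-capable device for wakeup
 *	echo enabled > /sys/devices/.../power/wakeup
 */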

const char power_group_name[] = "power";
EXPORT_SYMBOL_GPL(power_group_name);

static const char ctrl_auto[] = "auto";
static const char ctrl_on[] = "on";

static ssize_t control_show(struct device *dev, struct device_attribute *attr,
			    char *buf)
{
	return sprintf(buf, "%s\n",
		       dev->power.runtime_auto ? ctrl_auto : ctrl_on);
}

static ssize_t control_store(struct device *dev, struct device_attribute *attr,
			     const char *buf, size_t n)
{
	char *cp;
	int len = n;

	cp = memchr(buf, '\n', n);
	if (cp)
		len = cp - buf;
	device_lock(dev);
	if (len == sizeof ctrl_auto - 1 && strncmp(buf, ctrl_auto, len) == 0)
		pm_runtime_allow(dev);
	else if (len == sizeof ctrl_on - 1 && strncmp(buf, ctrl_on, len) == 0)
		pm_runtime_forbid(dev);
	else
		n = -EINVAL;
	device_unlock(dev);
	return n;
}

static DEVICE_ATTR(control, 0644, control_show, control_store);
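
/*
 * DEVICE_ATTR() expands to a struct device_attribute named dev_attr_control;
 * it is listed in runtime_attrs[] further down and reaches user space as the
 * power/control file.
 */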

static ssize_t rtpm_active_time_show(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	int ret;

	spin_lock_irq(&dev->power.lock);
	update_pm_runtime_accounting(dev);
	ret = sprintf(buf, "%i\n", jiffies_to_msecs(dev->power.active_jiffies));
	spin_unlock_irq(&dev->power.lock);
	return ret;
}

static DEVICE_ATTR(runtime_active_time, 0444, rtpm_active_time_show, NULL);

static ssize_t rtpm_suspended_time_show(struct device *dev,
					struct device_attribute *attr, char *buf)
{
	int ret;

	spin_lock_irq(&dev->power.lock);
	update_pm_runtime_accounting(dev);
	ret = sprintf(buf, "%i\n",
		      jiffies_to_msecs(dev->power.suspended_jiffies));
	spin_unlock_irq(&dev->power.lock);
	return ret;
}

static DEVICE_ATTR(runtime_suspended_time, 0444, rtpm_suspended_time_show, NULL);

static ssize_t rtpm_status_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	const char *p;

	if (dev->power.runtime_error) {
		p = "error\n";
	} else if (dev->power.disable_depth) {
		p = "unsupported\n";
	} else {
		switch (dev->power.runtime_status) {
		case RPM_SUSPENDED:
			p = "suspended\n";
			break;
		case RPM_SUSPENDING:
			p = "suspending\n";
			break;
		case RPM_RESUMING:
			p = "resuming\n";
			break;
		case RPM_ACTIVE:
			p = "active\n";
			break;
		default:
			return -EIO;
		}
	}
	return sprintf(buf, p);
}

static DEVICE_ATTR(runtime_status, 0444, rtpm_status_show, NULL);

static ssize_t autosuspend_delay_ms_show(struct device *dev,
					 struct device_attribute *attr, char *buf)
{
	if (!dev->power.use_autosuspend)
		return -EIO;
	return sprintf(buf, "%d\n", dev->power.autosuspend_delay);
}

static ssize_t autosuspend_delay_ms_store(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t n)
{
	long delay;

	if (!dev->power.use_autosuspend)
		return -EIO;

	if (kstrtol(buf, 10, &delay) != 0 || delay != (int) delay)
		return -EINVAL;

	device_lock(dev);
	pm_runtime_set_autosuspend_delay(dev, delay);
	device_unlock(dev);
	return n;
}

static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show,
		autosuspend_delay_ms_store);
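
/*
 * Rough driver-side sketch of how the autosuspend machinery referenced above
 * is usually enabled; the probe function name and the 2000 ms delay are
 * illustrative only:
 *
 *	static int foo_probe(struct platform_device *pdev)
 *	{
 *		...
 *		pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
 *		pm_runtime_use_autosuspend(&pdev->dev);
 *		pm_runtime_enable(&pdev->dev);
 *		...
 *	}
 */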

static ssize_t pm_qos_resume_latency_show(struct device *dev,
					  struct device_attribute *attr,
					  char *buf)
{
	return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev));
}

static ssize_t pm_qos_resume_latency_store(struct device *dev,
					   struct device_attribute *attr,
					   const char *buf, size_t n)
{
	s32 value;
	int ret;

	if (kstrtos32(buf, 0, &value))
		return -EINVAL;

	if (value < 0)
		return -EINVAL;

	ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req,
					value);
	return ret < 0 ? ret : n;
}

static DEVICE_ATTR(pm_qos_resume_latency_us, 0644,
		   pm_qos_resume_latency_show, pm_qos_resume_latency_store);
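
/*
 * This attribute only appears for devices whose drivers expose a resume
 * latency limit.  Minimal driver-side sketch (the 100 us initial value is
 * just an example):
 *
 *	dev_pm_qos_expose_latency_limit(dev, 100);
 */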

static ssize_t pm_qos_latency_tolerance_show(struct device *dev,
					     struct device_attribute *attr,
					     char *buf)
{
	s32 value = dev_pm_qos_get_user_latency_tolerance(dev);

	if (value < 0)
		return sprintf(buf, "auto\n");
	else if (value == PM_QOS_LATENCY_ANY)
		return sprintf(buf, "any\n");

	return sprintf(buf, "%d\n", value);
}

static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
					      struct device_attribute *attr,
					      const char *buf, size_t n)
{
	s32 value;
	int ret;

	if (kstrtos32(buf, 0, &value) == 0) {
		/* Users can't write negative values directly */
		if (value < 0)
			return -EINVAL;
	} else {
		if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n"))
			value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
		else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
			value = PM_QOS_LATENCY_ANY;
		else
			return -EINVAL;
	}
	ret = dev_pm_qos_update_user_latency_tolerance(dev, value);
	return ret < 0 ? ret : n;
}

static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644,
		   pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store);
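
/*
 * Possible user-space writes handled above (device path hypothetical):
 *
 *	echo 100  > .../power/pm_qos_latency_tolerance_us  # 100 us tolerance
 *	echo any  > .../power/pm_qos_latency_tolerance_us  # no requirement,
 *							   # no HW autonomy
 *	echo auto > .../power/pm_qos_latency_tolerance_us  # hardware-controlled
 */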

static ssize_t pm_qos_no_power_off_show(struct device *dev,
					struct device_attribute *attr,
					char *buf)
{
	return sprintf(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
					& PM_QOS_FLAG_NO_POWER_OFF));
}

static ssize_t pm_qos_no_power_off_store(struct device *dev,
					 struct device_attribute *attr,
					 const char *buf, size_t n)
{
	int ret;

	if (kstrtoint(buf, 0, &ret))
		return -EINVAL;

	if (ret != 0 && ret != 1)
		return -EINVAL;

	ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_NO_POWER_OFF, ret);
	return ret < 0 ? ret : n;
}

static DEVICE_ATTR(pm_qos_no_power_off, 0644,
		   pm_qos_no_power_off_show, pm_qos_no_power_off_store);

static ssize_t pm_qos_remote_wakeup_show(struct device *dev,
					 struct device_attribute *attr,
					 char *buf)
{
	return sprintf(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
					& PM_QOS_FLAG_REMOTE_WAKEUP));
}

static ssize_t pm_qos_remote_wakeup_store(struct device *dev,
					  struct device_attribute *attr,
					  const char *buf, size_t n)
{
	int ret;

	if (kstrtoint(buf, 0, &ret))
		return -EINVAL;

	if (ret != 0 && ret != 1)
		return -EINVAL;

	ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP, ret);
	return ret < 0 ? ret : n;
}

static DEVICE_ATTR(pm_qos_remote_wakeup, 0644,
		   pm_qos_remote_wakeup_show, pm_qos_remote_wakeup_store);
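
/*
 * These two flag attributes only show up for devices whose drivers expose
 * the PM QoS flags.  Minimal sketch (the initial flag set is an example):
 *
 *	dev_pm_qos_expose_flags(dev, PM_QOS_FLAG_NO_POWER_OFF);
 */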

#ifdef CONFIG_PM_SLEEP
static const char _enabled[] = "enabled";
static const char _disabled[] = "disabled";

static ssize_t
wake_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "%s\n", device_can_wakeup(dev)
		? (device_may_wakeup(dev) ? _enabled : _disabled)
		: "");
}

static ssize_t
wake_store(struct device *dev, struct device_attribute *attr,
	   const char *buf, size_t n)
{
	char *cp;
	int len = n;

	if (!device_can_wakeup(dev))
		return -EINVAL;

	cp = memchr(buf, '\n', n);
	if (cp)
		len = cp - buf;
	if (len == sizeof _enabled - 1
			&& strncmp(buf, _enabled, sizeof _enabled - 1) == 0)
		device_set_wakeup_enable(dev, 1);
	else if (len == sizeof _disabled - 1
			&& strncmp(buf, _disabled, sizeof _disabled - 1) == 0)
		device_set_wakeup_enable(dev, 0);
	else
		return -EINVAL;
	return n;
}

static DEVICE_ATTR(wakeup, 0644, wake_show, wake_store);
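
/*
 * The wakeup attribute is only useful for wakeup-capable devices.  Drivers
 * or bus code typically mark a device as such along these lines
 * (illustrative sketch):
 *
 *	device_set_wakeup_capable(dev, true);
 *	device_init_wakeup(dev, true);
 */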

static ssize_t wakeup_count_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	unsigned long count = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		count = dev->power.wakeup->event_count;
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_count, 0444, wakeup_count_show, NULL);

static ssize_t wakeup_active_count_show(struct device *dev,
					struct device_attribute *attr, char *buf)
{
	unsigned long count = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		count = dev->power.wakeup->active_count;
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_active_count, 0444, wakeup_active_count_show, NULL);

static ssize_t wakeup_abort_count_show(struct device *dev,
				       struct device_attribute *attr,
				       char *buf)
{
	unsigned long count = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		count = dev->power.wakeup->wakeup_count;
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_abort_count, 0444, wakeup_abort_count_show, NULL);

static ssize_t wakeup_expire_count_show(struct device *dev,
					struct device_attribute *attr,
					char *buf)
{
	unsigned long count = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		count = dev->power.wakeup->expire_count;
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_expire_count, 0444, wakeup_expire_count_show, NULL);

static ssize_t wakeup_active_show(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
	unsigned int active = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		active = dev->power.wakeup->active;
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%u\n", active) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_active, 0444, wakeup_active_show, NULL);

static ssize_t wakeup_total_time_show(struct device *dev,
				      struct device_attribute *attr, char *buf)
{
	s64 msec = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		msec = ktime_to_ms(dev->power.wakeup->total_time);
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_total_time_ms, 0444, wakeup_total_time_show, NULL);

static ssize_t wakeup_max_time_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
{
	s64 msec = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		msec = ktime_to_ms(dev->power.wakeup->max_time);
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_max_time_ms, 0444, wakeup_max_time_show, NULL);

static ssize_t wakeup_last_time_show(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	s64 msec = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		msec = ktime_to_ms(dev->power.wakeup->last_time);
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_last_time_ms, 0444, wakeup_last_time_show, NULL);
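
/*
 * The statistics above are driven by wakeup events reported to the PM core.
 * Rough sketch of the reporting side in a driver's interrupt handler
 * (hypothetical handler; a timeout of 0 means "no timeout"):
 *
 *	static irqreturn_t foo_irq(int irq, void *data)
 *	{
 *		struct device *dev = data;
 *
 *		pm_wakeup_event(dev, 0);
 *		...
 *		return IRQ_HANDLED;
 *	}
 */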

#ifdef CONFIG_PM_AUTOSLEEP
static ssize_t wakeup_prevent_sleep_time_show(struct device *dev,
					      struct device_attribute *attr,
					      char *buf)
{
	s64 msec = 0;
	bool enabled = false;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.wakeup) {
		msec = ktime_to_ms(dev->power.wakeup->prevent_sleep_time);
		enabled = true;
	}
	spin_unlock_irq(&dev->power.lock);
	return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
}

static DEVICE_ATTR(wakeup_prevent_sleep_time_ms, 0444,
		   wakeup_prevent_sleep_time_show, NULL);
#endif /* CONFIG_PM_AUTOSLEEP */
#endif /* CONFIG_PM_SLEEP */

#ifdef CONFIG_PM_ADVANCED_DEBUG
static ssize_t rtpm_usagecount_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count));
}

static ssize_t rtpm_children_show(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "%d\n", dev->power.ignore_children ?
		0 : atomic_read(&dev->power.child_count));
}

static ssize_t rtpm_enabled_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	if ((dev->power.disable_depth) && (dev->power.runtime_auto == false))
		return sprintf(buf, "disabled & forbidden\n");
	else if (dev->power.disable_depth)
		return sprintf(buf, "disabled\n");
	else if (dev->power.runtime_auto == false)
		return sprintf(buf, "forbidden\n");
	return sprintf(buf, "enabled\n");
}

static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);

#ifdef CONFIG_PM_SLEEP
static ssize_t async_show(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	return sprintf(buf, "%s\n",
			device_async_suspend_enabled(dev) ?
				_enabled : _disabled);
}

static ssize_t async_store(struct device *dev, struct device_attribute *attr,
			   const char *buf, size_t n)
{
	char *cp;
	int len = n;

	cp = memchr(buf, '\n', n);
	if (cp)
		len = cp - buf;
	if (len == sizeof _enabled - 1 && strncmp(buf, _enabled, len) == 0)
		device_enable_async_suspend(dev);
	else if (len == sizeof _disabled - 1 &&
		 strncmp(buf, _disabled, len) == 0)
		device_disable_async_suspend(dev);
	else
		return -EINVAL;
	return n;
}

static DEVICE_ATTR(async, 0644, async_show, async_store);
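
/*
 * Bus type code or drivers can pre-set the async default shown above; a
 * minimal sketch:
 *
 *	device_enable_async_suspend(dev);
 */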

#endif /* CONFIG_PM_SLEEP */
#endif /* CONFIG_PM_ADVANCED_DEBUG */

static struct attribute *power_attrs[] = {
#ifdef CONFIG_PM_ADVANCED_DEBUG
#ifdef CONFIG_PM_SLEEP
	&dev_attr_async.attr,
#endif
	&dev_attr_runtime_status.attr,
	&dev_attr_runtime_usage.attr,
	&dev_attr_runtime_active_kids.attr,
	&dev_attr_runtime_enabled.attr,
#endif /* CONFIG_PM_ADVANCED_DEBUG */
	NULL,
};
static struct attribute_group pm_attr_group = {
	.name = power_group_name,
	.attrs = power_attrs,
};
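
/*
 * All of the attribute groups in this file share the "power" group name, so
 * their attributes end up in the same power/ subdirectory of the device's
 * sysfs directory; the additional groups below are merged into that
 * directory with sysfs_merge_group() (see dpm_sysfs_add() below).
 */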
|
|
|
|
|
2011-02-08 22:26:02 +00:00
|
|
|
static struct attribute *wakeup_attrs[] = {
|
|
|
|
#ifdef CONFIG_PM_SLEEP
|
|
|
|
&dev_attr_wakeup.attr,
|
|
|
|
&dev_attr_wakeup_count.attr,
|
|
|
|
&dev_attr_wakeup_active_count.attr,
|
2012-04-29 20:52:52 +00:00
|
|
|
&dev_attr_wakeup_abort_count.attr,
|
|
|
|
&dev_attr_wakeup_expire_count.attr,
|
2011-02-08 22:26:02 +00:00
|
|
|
&dev_attr_wakeup_active.attr,
|
|
|
|
&dev_attr_wakeup_total_time_ms.attr,
|
|
|
|
&dev_attr_wakeup_max_time_ms.attr,
|
|
|
|
&dev_attr_wakeup_last_time_ms.attr,
|
2012-04-29 20:53:32 +00:00
|
|
|
#ifdef CONFIG_PM_AUTOSLEEP
|
|
|
|
&dev_attr_wakeup_prevent_sleep_time_ms.attr,
|
|
|
|
#endif
|
2011-02-08 22:26:02 +00:00
|
|
|
#endif
|
|
|
|
NULL,
|
|
|
|
};
|
|
|
|
static struct attribute_group pm_wakeup_attr_group = {
|
|
|
|
.name = power_group_name,
|
|
|
|
.attrs = wakeup_attrs,
|
|
|
|
};
|
2010-09-25 21:35:15 +00:00
|
|
|
|
|
|
|
static struct attribute *runtime_attrs[] = {
|
|
|
|
#ifndef CONFIG_PM_ADVANCED_DEBUG
|
|
|
|
&dev_attr_runtime_status.attr,
|
|
|
|
#endif
|
|
|
|
&dev_attr_control.attr,
|
|
|
|
&dev_attr_runtime_suspended_time.attr,
|
|
|
|
&dev_attr_runtime_active_time.attr,
|
2010-09-25 21:35:21 +00:00
|
|
|
&dev_attr_autosuspend_delay_ms.attr,
|
2010-09-25 21:35:15 +00:00
|
|
|
NULL,
|
|
|
|
};
|
|
|
|
static struct attribute_group pm_runtime_attr_group = {
|
|
|
|
.name = power_group_name,
|
|
|
|
.attrs = runtime_attrs,
|
|
|
|
};
|
|
|
|
|
2014-02-10 23:35:23 +00:00
|
|
|
static struct attribute *pm_qos_resume_latency_attrs[] = {
|
PM / QoS: Make it possible to expose PM QoS latency constraints
A runtime suspend of a device (e.g. an MMC controller) belonging to
a power domain or, in a more complicated scenario, a runtime suspend
of another device in the same power domain, may cause power to be
removed from the entire domain. In that case, the amount of time
necessary to runtime-resume the given device (e.g. the MMC
controller) is often substantially greater than the time needed to
run its driver's runtime resume callback. That may hurt performance
in some situations, because user data may need to wait for the
device to become operational, so we should make it possible to
prevent that from happening.
For this reason, introduce a new sysfs attribute for devices,
power/pm_qos_resume_latency_us, allowing user space to specify the
upper bound of the time necessary to bring the (runtime-suspended)
device up after the resume of it has been requested. However, make
that attribute appear only for the devices whose drivers declare
support for it by calling the (new) dev_pm_qos_expose_latency_limit()
helper function with the appropriate initial value of the attribute.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Reviewed-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
2012-03-13 00:01:39 +00:00
|
|
|
&dev_attr_pm_qos_resume_latency_us.attr,
|
|
|
|
NULL,
|
|
|
|
};
|
2014-02-10 23:35:23 +00:00
|
|
|
static struct attribute_group pm_qos_resume_latency_attr_group = {
|
PM / QoS: Make it possible to expose PM QoS latency constraints
A runtime suspend of a device (e.g. an MMC controller) belonging to
a power domain or, in a more complicated scenario, a runtime suspend
of another device in the same power domain, may cause power to be
removed from the entire domain. In that case, the amount of time
necessary to runtime-resume the given device (e.g. the MMC
controller) is often substantially greater than the time needed to
run its driver's runtime resume callback. That may hurt performance
in some situations, because user data may need to wait for the
device to become operational, so we should make it possible to
prevent that from happening.
For this reason, introduce a new sysfs attribute for devices,
power/pm_qos_resume_latency_us, allowing user space to specify the
upper bound of the time necessary to bring the (runtime-suspended)
device up after the resume of it has been requested. However, make
that attribute appear only for the devices whose drivers declare
support for it by calling the (new) dev_pm_qos_expose_latency_limit()
helper function with the appropriate initial value of the attribute.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Reviewed-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
2012-03-13 00:01:39 +00:00
|
|
|
.name = power_group_name,
|
2014-02-10 23:35:23 +00:00
|
|
|
.attrs = pm_qos_resume_latency_attrs,
|
2012-10-24 00:08:18 +00:00
|
|
|
};
|
|
|
|
|
PM / QoS: Introcuce latency tolerance device PM QoS type
Add a new latency tolerance device PM QoS type to be use for
specifying active state (RPM_ACTIVE) memory access (DMA) latency
tolerance requirements for devices. It may be used to prevent
hardware from choosing overly aggressive energy-saving operation
modes (causing too much latency to appear) for the whole platform.
This feature reqiures hardware support, so it only will be
available for devices having a new .set_latency_tolerance()
callback in struct dev_pm_info populated, in which case the
routine pointed to by it should implement whatever is necessary
to transfer the effective requirement value to the hardware.
Whenever the effective latency tolerance changes for the device,
its .set_latency_tolerance() callback will be executed and the
effective value will be passed to it. If that value is negative,
which means that the list of latency tolerance requirements for
the device is empty, the callback is expected to switch the
underlying hardware latency tolerance control mechanism to an
autonomous mode if available. If that value is PM_QOS_LATENCY_ANY,
in turn, and the hardware supports a special "no requirement"
setting, the callback is expected to use it. That allows software
to prevent the hardware from automatically updating the device's
latency tolerance in response to its power state changes (e.g. during
transitions from D3cold to D0), which generally may be done in the
autonomous latency tolerance control mode.
If .set_latency_tolerance() is present for the device, a new
pm_qos_latency_tolerance_us attribute will be present in the
devivce's power directory in sysfs. Then, user space can use
that attribute to specify its latency tolerance requirement for
the device, if any. Writing "any" to it means "no requirement, but
do not let the hardware control latency tolerance" and writing
"auto" to it allows the hardware to be switched to the autonomous
mode if there are no other requirements from the kernel side in the
device's list.
This changeset includes a fix from Mika Westerberg.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2014-02-10 23:35:38 +00:00
|
|
|
static struct attribute *pm_qos_latency_tolerance_attrs[] = {
|
|
|
|
&dev_attr_pm_qos_latency_tolerance_us.attr,
|
|
|
|
NULL,
|
|
|
|
};
|
|
|
|
static struct attribute_group pm_qos_latency_tolerance_attr_group = {
|
|
|
|
.name = power_group_name,
|
|
|
|
.attrs = pm_qos_latency_tolerance_attrs,
|
|
|
|
};
|
|
|
|
|
2012-10-24 00:08:18 +00:00
|
|
|
static struct attribute *pm_qos_flags_attrs[] = {
|
|
|
|
&dev_attr_pm_qos_no_power_off.attr,
|
|
|
|
&dev_attr_pm_qos_remote_wakeup.attr,
|
|
|
|
NULL,
|
|
|
|
};
|
|
|
|
static struct attribute_group pm_qos_flags_attr_group = {
|
|
|
|
.name = power_group_name,
|
|
|
|
.attrs = pm_qos_flags_attrs,
|
};

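For illustration only: the two flag attributes in the group above become visible once a driver exposes PM QoS flags for its device. A minimal, hypothetical call site might look like the sketch below; bar_probe() is invented, while dev_pm_qos_expose_flags() and PM_QOS_FLAG_NO_POWER_OFF are the real kernel interfaces.

static int bar_probe(struct device *dev)
{
	/*
	 * Creates power/pm_qos_no_power_off and power/pm_qos_remote_wakeup,
	 * with "no power off" requested by default.
	 */
	return dev_pm_qos_expose_flags(dev, PM_QOS_FLAG_NO_POWER_OFF);
}
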
int dpm_sysfs_add(struct device *dev)
{
	int rc;

	rc = sysfs_create_group(&dev->kobj, &pm_attr_group);
	if (rc)
		return rc;

	if (pm_runtime_callbacks_present(dev)) {
		rc = sysfs_merge_group(&dev->kobj, &pm_runtime_attr_group);
		if (rc)
			goto err_out;
	}
	if (device_can_wakeup(dev)) {
		rc = sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group);
		if (rc)
			goto err_runtime;
	}
	if (dev->power.set_latency_tolerance) {
		rc = sysfs_merge_group(&dev->kobj,
				       &pm_qos_latency_tolerance_attr_group);
		if (rc)
			goto err_wakeup;
	}
	return 0;

 err_wakeup:
	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
 err_runtime:
	sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
 err_out:
	sysfs_remove_group(&dev->kobj, &pm_attr_group);
	return rc;
}

int wakeup_sysfs_add(struct device *dev)
{
	return sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group);
}

void wakeup_sysfs_remove(struct device *dev)
{
	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
}

int pm_qos_sysfs_add_resume_latency(struct device *dev)
{
	return sysfs_merge_group(&dev->kobj, &pm_qos_resume_latency_attr_group);
}

void pm_qos_sysfs_remove_resume_latency(struct device *dev)
{
	sysfs_unmerge_group(&dev->kobj, &pm_qos_resume_latency_attr_group);
}

int pm_qos_sysfs_add_flags(struct device *dev)
{
	return sysfs_merge_group(&dev->kobj, &pm_qos_flags_attr_group);
}

void pm_qos_sysfs_remove_flags(struct device *dev)
{
	sysfs_unmerge_group(&dev->kobj, &pm_qos_flags_attr_group);
}

int pm_qos_sysfs_add_latency_tolerance(struct device *dev)
{
	return sysfs_merge_group(&dev->kobj,
				 &pm_qos_latency_tolerance_attr_group);
}

void pm_qos_sysfs_remove_latency_tolerance(struct device *dev)
{
	sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
}

void rpm_sysfs_remove(struct device *dev)
{
	sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
}

void dpm_sysfs_remove(struct device *dev)
{
	sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
	dev_pm_qos_constraints_destroy(dev);
	rpm_sysfs_remove(dev);
	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
	sysfs_remove_group(&dev->kobj, &pm_attr_group);
}