License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                         270
GPL-2.0+ WITH Linux-syscall-note                        169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
LGPL-2.1+ WITH Linux-syscall-note                        15
GPL-1.0+ WITH Linux-syscall-note                         14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
LGPL-2.0+ WITH Linux-syscall-note                         4
LGPL-2.1 WITH Linux-syscall-note                          3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation by lawyers
working with the Linux Foundation in some cases.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is partly based on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */

#include <linux/suspend.h>
#include <linux/suspend_ioctls.h>
#include <linux/utsname.h>
#include <linux/freezer.h>
#include <linux/compiler.h>
PM: sleep: Pause cpuidle later and resume it earlier during system transitions
Commit 8651f97bd951 ("PM / cpuidle: System resume hang fix with
cpuidle") that introduced cpuidle pausing during system suspend
did that to work around a platform firmware issue causing systems
to hang during resume if CPUs were allowed to enter idle states
in the system suspend and resume code paths.
However, pausing cpuidle before the last phase of suspending
devices is the source of an otherwise arbitrary difference between
the suspend-to-idle path and other system suspend variants, so it is
cleaner to do that later, before taking secondary CPUs offline (it
is still safer to take secondary CPUs offline with cpuidle paused,
though).
Modify the code accordingly, but in order to avoid code duplication,
introduce new wrapper functions, pm_sleep_disable_secondary_cpus()
and pm_sleep_enable_secondary_cpus(), to combine cpuidle_pause()
and cpuidle_resume(), respectively, with the handling of secondary
CPUs during system-wide transitions to sleep states.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
#include <linux/cpu.h>
#include <linux/cpuidle.h>

struct swsusp_info {
	struct new_utsname	uts;
	u32			version_code;
	unsigned long		num_physpages;
	int			cpus;
	unsigned long		image_pages;
	unsigned long		pages;
	unsigned long		size;
} __aligned(PAGE_SIZE);

#ifdef CONFIG_HIBERNATION
/* kernel/power/snapshot.c */
extern void __init hibernate_reserved_size_init(void);
extern void __init hibernate_image_size_init(void);

#ifdef CONFIG_ARCH_HIBERNATION_HEADER
/* Maximum size of architecture specific data in a hibernation header */
#define MAX_ARCH_HEADER_SIZE	(sizeof(struct new_utsname) + 4)

extern int arch_hibernation_header_save(void *addr, unsigned int max_size);
extern int arch_hibernation_header_restore(void *addr);

static inline int init_header_complete(struct swsusp_info *info)
{
	return arch_hibernation_header_save(info, MAX_ARCH_HEADER_SIZE);
}

static inline const char *check_image_kernel(struct swsusp_info *info)
{
	return arch_hibernation_header_restore(info) ?
			"architecture specific data" : NULL;
}
#endif /* CONFIG_ARCH_HIBERNATION_HEADER */

x86 / hibernate: Use hlt_play_dead() when resuming from hibernation
On Intel hardware, native_play_dead() uses mwait_play_dead() by
default and only falls back to the other methods if that fails.
That also happens during resume from hibernation, when the restore
(boot) kernel runs disable_nonboot_cpus() to take all of the CPUs
except for the boot one offline.
However, that is problematic, because the address passed to
__monitor() in mwait_play_dead() is likely to be written to in the
last phase of hibernate image restoration and that causes the "dead"
CPU to start executing instructions again. Unfortunately, the page
containing the address in that CPU's instruction pointer may not be
valid any more at that point.
First, that page may have been overwritten with image kernel memory
contents already, so the instructions the CPU attempts to execute may
simply be invalid. Second, the page tables previously used by that
CPU may have been overwritten by image kernel memory contents, so the
address in its instruction pointer is impossible to resolve then.
A report from Varun Koyyalagunta and investigation carried out by
Chen Yu show that the latter sometimes happens in practice.
To prevent it from happening, temporarily change the smp_ops.play_dead
pointer during resume from hibernation so that it points to a special
"play dead" routine which uses hlt_play_dead() and avoids the
inadvertent "revivals" of "dead" CPUs this way.
A slightly unpleasant consequence of this change is that if the
system is hibernated with one or more CPUs offline, it will generally
draw more power after resume than it did before hibernation, because
the physical state entered by CPUs via hlt_play_dead() is higher-power
than the mwait_play_dead() one in the majority of cases. It is
possible to work around this, but it is unclear how much of a problem
that's going to be in practice, so the workaround will be implemented
later if it turns out to be necessary.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=106371
Reported-by: Varun Koyyalagunta <cpudebug@centtech.com>
Original-by: Chen Yu <yu.c.chen@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
extern int hibernate_resume_nonboot_cpu_disable(void);

/*
 * Keep some memory free so that I/O operations can succeed without paging
 * [Might this be more than 4 MB?]
 */
#define PAGES_FOR_IO	((4096 * 1024) >> PAGE_SHIFT)

/*
 * Keep 1 MB of memory free so that device drivers can allocate some pages in
 * their .suspend() routines without breaking the suspend to disk.
 */
#define SPARE_PAGES	((1024 * 1024) >> PAGE_SHIFT)

asmlinkage int swsusp_save(void);

/* kernel/power/hibernate.c */
extern bool freezer_test_done;

extern int hibernation_snapshot(int platform_mode);

swsusp: introduce restore platform operations
At least on some machines it is necessary to prepare the ACPI firmware for the
restoration of the system memory state from the hibernation image if the
"platform" mode of hibernation has been used. Namely, in that cases we need
to disable the GPEs before replacing the "boot" kernel with the "frozen"
kernel (cf. http://bugzilla.kernel.org/show_bug.cgi?id=7887). After the
restore they will be re-enabled by hibernation_ops->finish(), but if the
restore fails, they have to be re-enabled by the restore code explicitly.
For this purpose we can introduce two additional hibernation operations,
called pre_restore() and restore_cleanup() and call them from the restore code
path. Still, they should be called if the "platform" mode of hibernation has
been used, so we need to pass the information about the hibernation mode from
the "frozen" kernel to the "boot" kernel in the image header.
Apparently, we can't drop the disabling of GPEs before the restore because of
Bug #7887 . We also can't do it unconditionally, because the GPEs wouldn't
have been enabled after a successful restore if the suspend had been done in
the 'shutdown' or 'reboot' mode.
In principle we could (and probably should) unconditionally disable the GPEs
before each snapshot creation *and* before the restore, but then we'd have to
unconditionally enable them after the snapshot creation as well as after the
restore (or restore failure). Still, for this purpose we'd need to modify
acpi_enter_sleep_state_prep() and acpi_leave_sleep_state() and we'd have to
introduce some mechanism synchronizing the disabling/enabling of the GPEs with
the device drivers' .suspend()/.resume() routines and with
disable_/enable_nonboot_cpus(). However, this would have affected the
suspend (ie. s2ram) code as well as the hibernation, which I'd like to avoid
in this patch series.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Nigel Cunningham <nigel@nigel.suspend2.net>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-19 08:47:30 +00:00
extern int hibernation_restore(int platform_mode);
extern int hibernation_platform_enter(void);

#ifdef CONFIG_STRICT_KERNEL_RWX
/* kernel/power/snapshot.c */
extern void enable_restore_image_protection(void);
#else
static inline void enable_restore_image_protection(void) {}
#endif /* CONFIG_STRICT_KERNEL_RWX */

#else /* !CONFIG_HIBERNATION */

static inline void hibernate_reserved_size_init(void) {}
static inline void hibernate_image_size_init(void) {}
#endif /* !CONFIG_HIBERNATION */

#define power_attr(_name) \
static struct kobj_attribute _name##_attr = {	\
	.attr	= {				\
		.name = __stringify(_name),	\
		.mode = 0644,			\
	},					\
	.show	= _name##_show,			\
	.store	= _name##_store,		\
}

#define power_attr_ro(_name) \
static struct kobj_attribute _name##_attr = {	\
	.attr	= {				\
		.name = __stringify(_name),	\
		.mode = S_IRUGO,		\
	},					\
	.show	= _name##_show,			\
}

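As an illustration only (not part of power.h), a hypothetical read/write attribute "foo" under /sys/power/ could be declared with power_attr() roughly as follows; the macro expects matching foo_show()/foo_store() callbacks with the kobj_attribute prototypes to exist in the same file. The attribute name and backing variable here are made up.

/* Illustrative sketch, not kernel code: a hypothetical /sys/power/foo attribute. */
static unsigned long foo_value;

static ssize_t foo_show(struct kobject *kobj, struct kobj_attribute *attr,
			char *buf)
{
	return sprintf(buf, "%lu\n", foo_value);
}

static ssize_t foo_store(struct kobject *kobj, struct kobj_attribute *attr,
			 const char *buf, size_t n)
{
	unsigned long val;

	if (kstrtoul(buf, 10, &val))
		return -EINVAL;
	foo_value = val;
	return n;
}

power_attr(foo);	/* expands to: static struct kobj_attribute foo_attr = { ... }; */
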
/* Preferred image size in bytes (default 500 MB) */
extern unsigned long image_size;
/* Size of memory reserved for drivers (default SPARE_PAGES x PAGE_SIZE) */
extern unsigned long reserved_size;
extern int in_suspend;
extern dev_t swsusp_resume_device;
extern sector_t swsusp_resume_block;

extern int create_basic_memory_bitmaps(void);
extern void free_basic_memory_bitmaps(void);
extern int hibernate_preallocate_memory(void);

extern void clear_or_poison_free_pages(void);

/**
 * Auxiliary structure used for reading the snapshot image data and
 * metadata from and writing them to the list of page backup entries
 * (PBEs) which is the main data structure of swsusp.
 *
 * Using struct snapshot_handle we can transfer the image, including its
 * metadata, as a continuous sequence of bytes with the help of
 * snapshot_read_next() and snapshot_write_next().
 *
 * The code that writes the image to a storage or transfers it to
 * the user land is required to use snapshot_read_next() for this
 * purpose and it should not make any assumptions regarding the internal
 * structure of the image. Similarly, the code that reads the image from
 * a storage or transfers it from the user land is required to use
 * snapshot_write_next().
 *
 * This may allow us to change the internal structure of the image
 * in the future with considerably less effort.
 */

struct snapshot_handle {
	unsigned int cur;	/* number of the block of PAGE_SIZE bytes the
				 * next operation will refer to (ie. current)
				 */
	void *buffer;		/* address of the block to read from
				 * or write to
				 */
	int sync_read;		/* Set to one to notify the caller of
				 * snapshot_write_next() that it may
				 * need to call wait_on_bio_chain()
				 */
};

/* This macro returns the address from/to which the caller of
 * snapshot_read_next()/snapshot_write_next() is allowed to
 * read/write data after the function returns
 */
#define data_of(handle)	((handle).buffer)

extern unsigned int snapshot_additional_pages(struct zone *zone);
extern unsigned long snapshot_get_image_size(void);
extern int snapshot_read_next(struct snapshot_handle *handle);
extern int snapshot_write_next(struct snapshot_handle *handle);
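A hedged sketch (not from the kernel sources) of the calling convention the comment above describes for image writers: each successful snapshot_read_next() call is assumed to expose one PAGE_SIZE chunk via data_of(), with a non-positive return value ending the loop. The write_page callback and the surrounding setup are made-up stand-ins for real I/O.

/* Illustrative only: how image-writing code might drive snapshot_read_next(). */
static int example_write_image(int (*write_page)(void *buf))
{
	struct snapshot_handle snapshot;
	int ret;

	memset(&snapshot, 0, sizeof(struct snapshot_handle));
	while ((ret = snapshot_read_next(&snapshot)) > 0) {
		ret = write_page(data_of(snapshot));	/* one PAGE_SIZE chunk of image data */
		if (ret)
			return ret;
	}
	return ret;	/* assumed: 0 once the whole image has been read, negative on error */
}
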
[PATCH] swsusp: Improve handling of highmem
Currently swsusp saves the contents of highmem pages by copying them to the
normal zone which is quite inefficient (eg. it requires two normal pages
to be used for saving one highmem page). This may be improved by using
highmem for saving the contents of saveable highmem pages.
Namely, during the suspend phase of the suspend-resume cycle we try to
allocate as many free highmem pages as there are saveable highmem pages.
If there are not enough highmem image pages to store the contents of all of
the saveable highmem pages, some of them will be stored in the "normal"
memory. Next, we allocate as many free "normal" pages as needed to store
the (remaining) image data. We use a memory bitmap to mark the allocated
free pages (ie. highmem as well as "normal" image pages).
Now, we use another memory bitmap to mark all of the saveable pages
(highmem as well as "normal") and the contents of the saveable pages are
copied into the image pages. Then, the second bitmap is used to save the
pfns corresponding to the saveable pages and the first one is used to save
their data.
During the resume phase the pfns of the pages that were saveable during the
suspend are loaded from the image and used to mark the "unsafe" page
frames. Next, we try to allocate as many free highmem page frames as are
needed to load all of the image data that had been in highmem before the suspend
and we allocate so many free "normal" page frames that the total number of
allocated free pages (highmem and "normal") is equal to the size of the
image. While doing this we have to make sure that there will be some extra
free "normal" and "safe" page frames for two lists of PBEs constructed
later.
Now, the image data are loaded, if possible, into their "original" page
frames. The image data that cannot be written into their "original" page
frames are loaded into "safe" page frames and their "original" kernel
virtual addresses, as well as the addresses of the "safe" pages containing
their copies, are stored in one of two lists of PBEs.
One list of PBEs is for the copies of "normal" suspend pages (ie. "normal"
pages that were saveable during the suspend) and it is used in the same way
as previously (ie. by the architecture-dependent parts of swsusp). The
other list of PBEs is for the copies of highmem suspend pages. The pages
in this list are restored (in a reversible way) right before the
arch-dependent code is called.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07 04:34:18 +00:00
extern void snapshot_write_finalize(struct snapshot_handle *handle);
extern int snapshot_image_loaded(struct snapshot_handle *handle);

extern bool hibernate_acquire(void);
extern void hibernate_release(void);

extern sector_t alloc_swapdev_block(int swap);
extern void free_all_swap_pages(int swap);
extern int swsusp_swap_in_use(void);

/*
 * Flags that can be passed from the hibernating kernel to the "boot" kernel in
 * the image header.
 */
#define SF_PLATFORM_MODE	1
#define SF_NOCOMPRESS_MODE	2
#define SF_CRC32_MODE		4
#define SF_HW_SIG		8

/* kernel/power/hibernate.c */
extern int swsusp_check(void);
extern void swsusp_free(void);
extern int swsusp_read(unsigned int *flags_p);
extern int swsusp_write(unsigned int flags);
extern void swsusp_close(fmode_t);
#ifdef CONFIG_SUSPEND
extern int swsusp_unmark(void);
#endif

struct __kernel_old_timeval;
/* kernel/power/swsusp.c */
extern void swsusp_show_speed(ktime_t, ktime_t, unsigned int, char *);

#ifdef CONFIG_SUSPEND
/* kernel/power/suspend.c */
PM / sleep: System sleep state selection interface rework
There are systems in which the platform doesn't support any special
sleep states, so suspend-to-idle (PM_SUSPEND_FREEZE) is the only
available system sleep state. However, some user space frameworks
only use the "mem" and (sometimes) "standby" sleep state labels, so
the users of those systems need to modify user space in order to be
able to use system suspend at all and that may be a pain in practice.
Commit 0399d4db3edf (PM / sleep: Introduce command line argument for
sleep state enumeration) attempted to address this problem by adding
a command line argument to change the meaning of the "mem" string in
/sys/power/state to make it trigger suspend-to-idle (instead of
suspend-to-RAM).
However, there also are systems in which the platform does support
special sleep states, but suspend-to-idle is the preferred one anyway
(it even may save more energy than the platform-provided sleep states
in some cases) and the above commit doesn't help in those cases.
For this reason, rework the system sleep state selection interface
again (but preserve backwards compatibility). Namely, add a new
sysfs file, /sys/power/mem_sleep, that will control the system
suspend mode triggered by writing "mem" to /sys/power/state (in
analogy with what /sys/power/disk does for hibernation). Make it
select suspend-to-RAM ("deep" sleep) by default (if supported) and
fall back to suspend-to-idle ("s2idle") otherwise and add a new
command line argument, mem_sleep_default, allowing that default to
be overridden if need be.
At the same time, drop the relative_sleep_states command line
argument that doesn't make sense any more.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Mario Limonciello <mario.limonciello@dell.com>
2016-11-21 21:45:40 +00:00
extern const char * const pm_labels[];
extern const char *pm_states[];
extern const char *mem_sleep_states[];
extern int suspend_devices_and_enter(suspend_state_t state);
#else /* !CONFIG_SUSPEND */
#define mem_sleep_current PM_SUSPEND_ON

static inline int suspend_devices_and_enter(suspend_state_t state)
{
	return -ENOSYS;
}
#endif /* !CONFIG_SUSPEND */

#ifdef CONFIG_PM_TEST_SUSPEND
/* kernel/power/suspend_test.c */
extern void suspend_test_start(void);
extern void suspend_test_finish(const char *label);
#else /* !CONFIG_PM_TEST_SUSPEND */
static inline void suspend_test_start(void) {}
static inline void suspend_test_finish(const char *label) {}
#endif /* !CONFIG_PM_TEST_SUSPEND */

#ifdef CONFIG_PM_SLEEP
/* kernel/power/main.c */
notifier: Fix broken error handling pattern
The current notifiers have the following error handling pattern all
over the place:
	int err, nr;

	err = __foo_notifier_call_chain(&chain, val_up, v, -1, &nr);
	if (err & NOTIFIER_STOP_MASK)
		__foo_notifier_call_chain(&chain, val_down, v, nr-1, NULL)
And aside from the endless repetition thereof, it is broken. Consider
blocking notifiers; both calls take and drop the rwsem, this means
that the notifier list can change in between the two calls, making @nr
meaningless.
Fix this by replacing all the __foo_notifier_call_chain() functions
with foo_notifier_call_chain_robust() that embeds the above pattern,
but ensures it is inside a single lock region.
Note: I switched atomic_notifier_call_chain_robust() to use
the spinlock, since RCU cannot provide the guarantee
required for the recovery.
Note: software_resume() error handling was broken afaict.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20200818135804.325626653@infradead.org
2020-08-18 13:57:36 +00:00
extern int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down);
extern int pm_notifier_call_chain(unsigned long val);
#endif

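For context, a hedged sketch of how the robust variant declared above might be invoked; the PM_HIBERNATION_PREPARE/PM_POST_HIBERNATION pairing is illustrative and the exact call site is an assumption, not taken from the kernel sources.

	/* Illustrative fragment only: send the "prepare" up-event; per the robust
	 * pattern described above, callbacks already notified receive the matching
	 * "post" down-event before an error is returned. */
	error = pm_notifier_call_chain_robust(PM_HIBERNATION_PREPARE,
					      PM_POST_HIBERNATION);
	if (error)
		return error;
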
#ifdef CONFIG_HIGHMEM
int restore_highmem(void);
#else
static inline unsigned int count_highmem_pages(void) { return 0; }
static inline int restore_highmem(void) { return 0; }
#endif

Suspend: Testing facility (rev. 2)
Introduce sysfs attribute /sys/power/pm_test allowing one to test the suspend
core code. Namely, writing one of the strings:
freezer
devices
platform
processors
core
to this file causes the suspend code to work in one of the test modes defined as
follows:
freezer    - test the freezing of processes
devices    - test the freezing of processes and suspending of devices
platform   - test the freezing of processes, suspending of devices and
             platform global control methods(*)
processors - test the freezing of processes, suspending of devices, platform
             global control methods and the disabling of nonboot CPUs
core       - test the freezing of processes, suspending of devices, platform
             global control methods, the disabling of nonboot CPUs and
             suspending of platform/system devices
(*) These are ACPI global control methods on ACPI systems
Then, if a suspend is started by normal means, the suspend core will perform
its normal operations up to the point indicated by given test level. Next, it
will wait for 5 seconds and carry out the resume operations needed to transition
the system back to the fully functional state.
Writing "none" to /sys/power/pm_test turns the testing off.
When open for reading, /sys/power/pm_test contains a space-separated list of all
available tests (including "none" that represents the normal functionality) in
which the current test level is indicated by square brackets.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Len Brown <len.brown@intel.com>
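For illustration, a minimal user-space snippet (not part of the kernel tree) that selects the "devices" test level described above; it assumes the pm_test interface is available, i.e. the kernel was built with the suspend debug support that exposes /sys/power/pm_test.

/* Illustrative user-space code, not kernel code. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/power/pm_test", "w");

	if (!f)
		return 1;
	fputs("devices", f);	/* freeze tasks and suspend devices, then auto-resume */
	fclose(f);
	return 0;
}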

/*
 * Suspend test levels
 */
enum {
	/* keep first */
	TEST_NONE,
	TEST_CORE,
	TEST_CPUS,
	TEST_PLATFORM,
	TEST_DEVICES,
	TEST_FREEZER,
	/* keep last */
	__TEST_AFTER_LAST
};

#define TEST_FIRST	TEST_NONE
#define TEST_MAX	(__TEST_AFTER_LAST - 1)

#ifdef CONFIG_PM_SLEEP_DEBUG
extern int pm_test_level;
#else
#define pm_test_level	(TEST_NONE)
#endif

#ifdef CONFIG_SUSPEND_FREEZER
PM / Freezer: Thaw only kernel threads if freezing of kernel threads fails
If freezing of kernel threads fails, we are expected to automatically
thaw tasks in the error recovery path. However, at times, we encounter
situations in which we would like the automatic error recovery path
to thaw only the kernel threads, because we want to be able to do
some more cleanup before we thaw userspace. Something like:
	error = freeze_kernel_threads();
	if (error) {
		/* Do some cleanup */
		/* Only then thaw userspace tasks */
		thaw_processes();
	}
An example of such a situation is where we freeze/thaw filesystems
during suspend/hibernation. There, if freezing of kernel threads
fails, we would like to thaw the frozen filesystems before thawing
the userspace tasks.
So, modify freeze_kernel_threads() to thaw only kernel threads in
case of freezing failure. And change suspend_freeze_processes()
accordingly. (At the same time, let us also get rid of the rather
cryptic usage of the conditional operator (:?) in that function.)
[rjw: In fact, this patch fixes a regression introduced during the
3.3 merge window, because without it thaw_processes() may be called
before swsusp_free() in some situations and that may lead to massive
memory allocation failures.]
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Nigel Cunningham <nigel@tuxonice.net>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
2012-02-03 21:22:41 +00:00
static inline int suspend_freeze_processes(void)
{
	int error;

	error = freeze_processes();
	/*
	 * freeze_processes() automatically thaws every task if freezing
	 * fails. So we need not do anything extra upon error.
	 */
	if (error)
		return error;

	error = freeze_kernel_threads();
	/*
	 * freeze_kernel_threads() thaws only kernel threads upon freezing
	 * failure. So we have to thaw the userspace tasks ourselves.
	 */
	if (error)
		thaw_processes();

	return error;
}

static inline void suspend_thaw_processes(void)
{
	thaw_processes();
}
#else
static inline int suspend_freeze_processes(void)
{
	return 0;
}

static inline void suspend_thaw_processes(void)
{
}
#endif

#ifdef CONFIG_PM_AUTOSLEEP

/* kernel/power/autosleep.c */
extern int pm_autosleep_init(void);
extern int pm_autosleep_lock(void);
extern void pm_autosleep_unlock(void);
extern suspend_state_t pm_autosleep_state(void);
extern int pm_autosleep_set_state(suspend_state_t state);

#else /* !CONFIG_PM_AUTOSLEEP */

static inline int pm_autosleep_init(void) { return 0; }
static inline int pm_autosleep_lock(void) { return 0; }
static inline void pm_autosleep_unlock(void) {}
static inline suspend_state_t pm_autosleep_state(void) { return PM_SUSPEND_ON; }

#endif /* !CONFIG_PM_AUTOSLEEP */

PM / Sleep: Add user space interface for manipulating wakeup sources, v3
Android allows user space to manipulate wakelocks using two
sysfs files located in /sys/power/, wake_lock and wake_unlock.
Writing a wakelock name and optionally a timeout to the wake_lock
file causes the wakelock whose name was written to be acquired (it
is created beforehand if necessary), optionally with the given timeout.
Writing the name of a wakelock to wake_unlock causes that wakelock
to be released.
Implement an analogous interface for user space using wakeup sources.
Add the /sys/power/wake_lock and /sys/power/wake_unlock files
allowing user space to create, activate and deactivate wakeup
sources, such that writing a name and optionally a timeout to
wake_lock causes the wakeup source of that name to be activated,
optionally with the given timeout. If that wakeup source doesn't
exist, it will be created and then activated. Writing a name to
wake_unlock causes the wakeup source of that name, if there is one,
to be deactivated. Wakeup sources created with the help of
wake_lock that haven't been used for more than 5 minutes are garbage
collected and destroyed. Moreover, there can be only WL_NUMBER_LIMIT
wakeup sources created with the help of wake_lock present at a time.
The data type used to track wakeup sources created by user space is
called "struct wakelock" to indicate the origins of this feature.
This version of the patch includes an rbtree manipulation fix from John Stultz.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: NeilBrown <neilb@suse.de>
2012-04-29 20:53:42 +00:00

#ifdef CONFIG_PM_WAKELOCKS

/* kernel/power/wakelock.c */
extern ssize_t pm_show_wakelocks(char *buf, bool show_active);
extern int pm_wake_lock(const char *buf);
extern int pm_wake_unlock(const char *buf);

#endif /* !CONFIG_PM_WAKELOCKS */

static inline int pm_sleep_disable_secondary_cpus(void)
{
	cpuidle_pause();
	return suspend_disable_secondary_cpus();
}

static inline void pm_sleep_enable_secondary_cpus(void)
{
	suspend_enable_secondary_cpus();
	cpuidle_resume();
}
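As a hedged usage sketch (not from the kernel sources): per the "Pause cpuidle later" commit message near the top of this file, a system-wide sleep path is expected to bracket the secondary-CPU offline phase with these wrappers roughly as follows; the function name and error handling here are illustrative assumptions.

/* Illustrative only: bracketing the secondary-CPU offline phase. */
static int example_enter_sleep_state(void)
{
	int error;

	error = pm_sleep_disable_secondary_cpus();	/* pauses cpuidle, then offlines CPUs */
	if (!error) {
		/* ... enter the platform sleep state on the boot CPU ... */
	}
	pm_sleep_enable_secondary_cpus();		/* onlines CPUs, then resumes cpuidle */
	return error;
}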