Merge tag 'mm-nonmm-stable-2023-04-27-16-01' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:
 "Mainly singleton patches all over the place.

  Series of note are:

   - updates to scripts/gdb from Glenn Washburn

   - kexec cleanups from Bjorn Helgaas"

* tag 'mm-nonmm-stable-2023-04-27-16-01' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (50 commits)
  mailmap: add entries for Paul Mackerras
  libgcc: add forward declarations for generic library routines
  mailmap: add entry for Oleksandr
  ocfs2: reduce ioctl stack usage
  fs/proc: add Kthread flag to /proc/$pid/status
  ia64: fix an addr to taddr in huge_pte_offset()
  checkpatch: introduce proper bindings license check
  epoll: rename global epmutex
  scripts/gdb: add GDB convenience functions $lx_dentry_name() and $lx_i_dentry()
  scripts/gdb: create linux/vfs.py for VFS related GDB helpers
  uapi/linux/const.h: prefer ISO-friendly __typeof__
  delayacct: track delays from IRQ/SOFTIRQ
  scripts/gdb: timerlist: convert int chunks to str
  scripts/gdb: print interrupts
  scripts/gdb: raise error with reduced debugging information
  scripts/gdb: add a Radix Tree Parser
  lib/rbtree: use '+' instead of '|' for setting color.
  proc/stat: remove arch_idle_time()
  checkpatch: check for misuse of the link tags
  checkpatch: allow Closes tags with links
  ...
Linus Torvalds 2023-04-27 19:57:00 -07:00
commit 33afd4b763
68 changed files with 1028 additions and 373 deletions

View File

@@ -360,6 +360,7 @@ Nicolas Pitre <nico@fluxnic.net> <nico@linaro.org>
 Nicolas Saenz Julienne <nsaenz@kernel.org> <nsaenzjulienne@suse.de>
 Nicolas Saenz Julienne <nsaenz@kernel.org> <nsaenzjulienne@suse.com>
 Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
+Oleksandr Natalenko <oleksandr@natalenko.name> <oleksandr@redhat.com>
 Oleksij Rempel <linux@rempel-privat.de> <bug-track@fisher-privat.net>
 Oleksij Rempel <linux@rempel-privat.de> <external.Oleksij.Rempel@de.bosch.com>
 Oleksij Rempel <linux@rempel-privat.de> <fixed-term.Oleksij.Rempel@de.bosch.com>
@@ -375,6 +376,8 @@ Paul E. McKenney <paulmck@kernel.org> <paul.mckenney@linaro.org>
 Paul E. McKenney <paulmck@kernel.org> <paulmck@linux.ibm.com>
 Paul E. McKenney <paulmck@kernel.org> <paulmck@linux.vnet.ibm.com>
 Paul E. McKenney <paulmck@kernel.org> <paulmck@us.ibm.com>
+Paul Mackerras <paulus@ozlabs.org> <paulus@samba.org>
+Paul Mackerras <paulus@ozlabs.org> <paulus@au1.ibm.com>
 Peter A Jonsson <pj@ludd.ltu.se>
 Peter Oruba <peter.oruba@amd.com>
 Peter Oruba <peter@oruba.de>

View File

@@ -16,6 +16,7 @@ d) memory reclaim
 e) thrashing
 f) direct compact
 g) write-protect copy
+h) IRQ/SOFTIRQ
 
 and makes these statistics available to userspace through
 the taskstats interface.
@@ -49,7 +50,7 @@ this structure. See
 for a description of the fields pertaining to delay accounting.
 It will generally be in the form of counters returning the cumulative
 delay seen for cpu, sync block I/O, swapin, memory reclaim, thrash page
-cache, direct compact, write-protect copy etc.
+cache, direct compact, write-protect copy, IRQ/SOFTIRQ etc.
 
 Taking the difference of two successive readings of a given
 counter (say cpu_delay_total) for a task will give the delay
@@ -109,17 +110,19 @@ Get sum of delays, since system boot, for all pids with tgid 5::
 CPU count real total virtual total delay total delay average
 8 7000000 6872122 3382277 0.423ms
 IO count delay total delay average
-0 0 0ms
+0 0 0.000ms
 SWAP count delay total delay average
-0 0 0ms
+0 0 0.000ms
 RECLAIM count delay total delay average
-0 0 0ms
+0 0 0.000ms
 THRASHING count delay total delay average
-0 0 0ms
+0 0 0.000ms
 COMPACT count delay total delay average
-0 0 0ms
+0 0 0.000ms
 WPCOPY count delay total delay average
-0 0 0ms
+0 0 0.000ms
+IRQ count delay total delay average
+0 0 0.000ms
 
 Get IO accounting for pid 1, it works only with -p::

View File

@@ -1,42 +1,50 @@
-kcov: code coverage for fuzzing
+KCOV: code coverage for fuzzing
 ===============================
 
-kcov exposes kernel code coverage information in a form suitable for coverage-
-guided fuzzing (randomized testing). Coverage data of a running kernel is
-exported via the "kcov" debugfs file. Coverage collection is enabled on a task
-basis, and thus it can capture precise coverage of a single system call.
+KCOV collects and exposes kernel code coverage information in a form suitable
+for coverage-guided fuzzing. Coverage data of a running kernel is exported via
+the ``kcov`` debugfs file. Coverage collection is enabled on a task basis, and
+thus KCOV can capture precise coverage of a single system call.
 
-Note that kcov does not aim to collect as much coverage as possible. It aims
-to collect more or less stable coverage that is function of syscall inputs.
-To achieve this goal it does not collect coverage in soft/hard interrupts
-and instrumentation of some inherently non-deterministic parts of kernel is
-disabled (e.g. scheduler, locking).
+Note that KCOV does not aim to collect as much coverage as possible. It aims
+to collect more or less stable coverage that is a function of syscall inputs.
+To achieve this goal, it does not collect coverage in soft/hard interrupts
+(unless remote coverage collection is enabled, see below) and from some
+inherently non-deterministic parts of the kernel (e.g. scheduler, locking).
 
-kcov is also able to collect comparison operands from the instrumented code
-(this feature currently requires that the kernel is compiled with clang).
+Besides collecting code coverage, KCOV can also collect comparison operands.
+See the "Comparison operands collection" section for details.
+
+Besides collecting coverage data from syscall handlers, KCOV can also collect
+coverage for annotated parts of the kernel executing in background kernel
+tasks or soft interrupts. See the "Remote coverage collection" section for
+details.
 
 Prerequisites
 -------------
 
-Configure the kernel with::
+KCOV relies on compiler instrumentation and requires GCC 6.1.0 or later
+or any Clang version supported by the kernel.
+
+Collecting comparison operands is supported with GCC 8+ or with Clang.
+
+To enable KCOV, configure the kernel with::
 
         CONFIG_KCOV=y
 
-CONFIG_KCOV requires gcc 6.1.0 or later.
-
-If the comparison operands need to be collected, set::
+To enable comparison operands collection, set::
 
         CONFIG_KCOV_ENABLE_COMPARISONS=y
 
-Profiling data will only become accessible once debugfs has been mounted::
+Coverage data only becomes accessible once debugfs has been mounted::
 
         mount -t debugfs none /sys/kernel/debug
 
 Coverage collection
 -------------------
 
-The following program demonstrates coverage collection from within a test
-program using kcov:
+The following program demonstrates how to use KCOV to collect coverage for a
+single syscall from within a test program:
 
 .. code-block:: c
@@ -84,7 +92,7 @@ program using kcov:
             perror("ioctl"), exit(1);
         /* Reset coverage from the tail of the ioctl() call. */
         __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
-        /* That's the target syscal call. */
+        /* Call the target syscall. */
         read(-1, NULL, 0);
         /* Read number of PCs collected. */
         n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
@@ -103,7 +111,7 @@ program using kcov:
         return 0;
     }
 
-After piping through addr2line output of the program looks as follows::
+After piping through ``addr2line`` the output of the program looks as follows::
 
     SyS_read
     fs/read_write.c:562
@@ -121,12 +129,13 @@ After piping through addr2line output of the program looks as follows::
     fs/read_write.c:562
 
 If a program needs to collect coverage from several threads (independently),
-it needs to open /sys/kernel/debug/kcov in each thread separately.
+it needs to open ``/sys/kernel/debug/kcov`` in each thread separately.
 
 The interface is fine-grained to allow efficient forking of test processes.
-That is, a parent process opens /sys/kernel/debug/kcov, enables trace mode,
-mmaps coverage buffer and then forks child processes in a loop. Child processes
-only need to enable coverage (disable happens automatically on thread end).
+That is, a parent process opens ``/sys/kernel/debug/kcov``, enables trace mode,
+mmaps coverage buffer, and then forks child processes in a loop. The child
+processes only need to enable coverage (it gets disabled automatically when
+a thread exits).
 
 Comparison operands collection
 ------------------------------
@@ -205,52 +214,78 @@ Comparison operands collection is similar to coverage collection:
         return 0;
     }
 
-Note that the kcov modes (coverage collection or comparison operands) are
-mutually exclusive.
+Note that the KCOV modes (collection of code coverage or comparison operands)
+are mutually exclusive.
 
 Remote coverage collection
 --------------------------
 
-With KCOV_ENABLE coverage is collected only for syscalls that are issued
-from the current process. With KCOV_REMOTE_ENABLE it's possible to collect
-coverage for arbitrary parts of the kernel code, provided that those parts
-are annotated with kcov_remote_start()/kcov_remote_stop().
-
-This allows to collect coverage from two types of kernel background
-threads: the global ones, that are spawned during kernel boot in a limited
-number of instances (e.g. one USB hub_event() worker thread is spawned per
-USB HCD); and the local ones, that are spawned when a user interacts with
-some kernel interface (e.g. vhost workers); as well as from soft
-interrupts.
-
-To enable collecting coverage from a global background thread or from a
-softirq, a unique global handle must be assigned and passed to the
-corresponding kcov_remote_start() call. Then a userspace process can pass
-a list of such handles to the KCOV_REMOTE_ENABLE ioctl in the handles
-array field of the kcov_remote_arg struct. This will attach the used kcov
-device to the code sections, that are referenced by those handles.
-
-Since there might be many local background threads spawned from different
-userspace processes, we can't use a single global handle per annotation.
-Instead, the userspace process passes a non-zero handle through the
-common_handle field of the kcov_remote_arg struct. This common handle gets
-saved to the kcov_handle field in the current task_struct and needs to be
-passed to the newly spawned threads via custom annotations. Those threads
-should in turn be annotated with kcov_remote_start()/kcov_remote_stop().
-
-Internally kcov stores handles as u64 integers. The top byte of a handle
-is used to denote the id of a subsystem that this handle belongs to, and
-the lower 4 bytes are used to denote the id of a thread instance within
-that subsystem. A reserved value 0 is used as a subsystem id for common
-handles as they don't belong to a particular subsystem. The bytes 4-7 are
-currently reserved and must be zero. In the future the number of bytes
-used for the subsystem or handle ids might be increased.
-
-When a particular userspace process collects coverage via a common
-handle, kcov will collect coverage for each code section that is annotated
-to use the common handle obtained as kcov_handle from the current
-task_struct. However non common handles allow to collect coverage
-selectively from different subsystems.
+Besides collecting coverage data from handlers of syscalls issued from a
+userspace process, KCOV can also collect coverage for parts of the kernel
+executing in other contexts - so-called "remote" coverage.
+
+Using KCOV to collect remote coverage requires:
+
+1. Modifying kernel code to annotate the code section from where coverage
+   should be collected with ``kcov_remote_start`` and ``kcov_remote_stop``.
+
+2. Using ``KCOV_REMOTE_ENABLE`` instead of ``KCOV_ENABLE`` in the userspace
+   process that collects coverage.
+
+Both ``kcov_remote_start`` and ``kcov_remote_stop`` annotations and the
+``KCOV_REMOTE_ENABLE`` ioctl accept handles that identify particular coverage
+collection sections. The way a handle is used depends on the context where the
+matching code section executes.
+
+KCOV supports collecting remote coverage from the following contexts:
+
+1. Global kernel background tasks. These are the tasks that are spawned during
+   kernel boot in a limited number of instances (e.g. one USB ``hub_event``
+   worker is spawned per one USB HCD).
+
+2. Local kernel background tasks. These are spawned when a userspace process
+   interacts with some kernel interface and are usually killed when the process
+   exits (e.g. vhost workers).
+
+3. Soft interrupts.
+
+For #1 and #3, a unique global handle must be chosen and passed to the
+corresponding ``kcov_remote_start`` call. Then a userspace process must pass
+this handle to ``KCOV_REMOTE_ENABLE`` in the ``handles`` array field of the
+``kcov_remote_arg`` struct. This will attach the used KCOV device to the code
+section referenced by this handle. Multiple global handles identifying
+different code sections can be passed at once.
+
+For #2, the userspace process instead must pass a non-zero handle through the
+``common_handle`` field of the ``kcov_remote_arg`` struct. This common handle
+gets saved to the ``kcov_handle`` field in the current ``task_struct`` and
+needs to be passed to the newly spawned local tasks via custom kernel code
+modifications. Those tasks should in turn use the passed handle in their
+``kcov_remote_start`` and ``kcov_remote_stop`` annotations.
+
+KCOV follows a predefined format for both global and common handles. Each
+handle is a ``u64`` integer. Currently, only the top byte and the lower 4 bytes
+are used. Bytes 4-7 are reserved and must be zero.
+
+For global handles, the top byte of the handle denotes the id of a subsystem
+this handle belongs to. For example, KCOV uses ``1`` as the USB subsystem id.
+The lower 4 bytes of a global handle denote the id of a task instance within
+that subsystem. For example, each ``hub_event`` worker uses the USB bus number
+as the task instance id.
+
+For common handles, a reserved value ``0`` is used as a subsystem id, as such
+handles don't belong to a particular subsystem. The lower 4 bytes of a common
+handle identify a collective instance of all local tasks spawned by the
+userspace process that passed a common handle to ``KCOV_REMOTE_ENABLE``.
+
+In practice, any value can be used for common handle instance id if coverage
+is only collected from a single userspace process on the system. However, if
+common handles are used by multiple processes, unique instance ids must be
+used for each process. One option is to use the process id as the common
+handle instance id.
+
+The following program demonstrates using KCOV to collect coverage from both
+local tasks spawned by the process and the global task that handles USB bus #1:
 
 .. code-block:: c
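
(The demo program itself falls outside the hunks shown above. For context, a condensed sketch of the single-syscall collection flow the document describes — ioctl numbers as defined in include/uapi/linux/kcov.h, buffer size an arbitrary choice — looks like this:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
    #define KCOV_ENABLE     _IO('c', 100)
    #define KCOV_DISABLE    _IO('c', 101)
    #define COVER_SIZE      (64 << 10)  /* arbitrary buffer size, in PCs */
    #define KCOV_TRACE_PC   0

    int main(void)
    {
        unsigned long *cover, n, i;
        /* A single fd collects coverage for a single thread. */
        int fd = open("/sys/kernel/debug/kcov", O_RDWR);

        if (fd == -1)
            perror("open"), exit(1);
        /* Set up trace mode and trace size. */
        if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
            perror("ioctl"), exit(1);
        /* Map the buffer shared between kernel- and user-space. */
        cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if ((void *)cover == MAP_FAILED)
            perror("mmap"), exit(1);
        /* Enable PC collection on the current thread. */
        if (ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC))
            perror("ioctl"), exit(1);
        /* Reset coverage accumulated by the enabling ioctl() itself. */
        __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
        /* Call the target syscall. */
        read(-1, NULL, 0);
        /* cover[0] holds the number of PCs collected. */
        n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
        for (i = 0; i < n; i++)
            printf("0x%lx\n", cover[i + 1]);
        if (ioctl(fd, KCOV_DISABLE, 0))
            perror("ioctl"), exit(1);
        munmap(cover, COVER_SIZE * sizeof(unsigned long));
        close(fd);
        return 0;
    }

(Piping the printed PCs through addr2line against vmlinux yields the per-line output shown in the sample above.)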

View File

@@ -179,6 +179,7 @@ read the file /proc/PID/status::
 Gid: 100 100 100 100
 FDSize: 256
 Groups: 100 14 16
+Kthread: 0
 VmPeak: 5004 kB
 VmSize: 5004 kB
 VmLck: 0 kB
@@ -256,6 +257,7 @@ It's slow but very precise.
 NSpid descendant namespace process ID hierarchy
 NSpgid descendant namespace process group ID hierarchy
 NSsid descendant namespace session ID hierarchy
+Kthread kernel thread flag, 1 is yes, 0 is no
 VmPeak peak virtual memory size
 VmSize total program size
 VmLck locked memory size

View File

@@ -207,8 +207,8 @@ the patch::
 	Fixes: 1f2e3d4c5b6a ("The first line of the commit specified by the first 12 characters of its SHA-1 ID")
 
 Another tag is used for linking web pages with additional backgrounds or
-details, for example a report about a bug fixed by the patch or a document
-with a specification implemented by the patch::
+details, for example an earlier discussion which leads to the patch or a
+document with a specification implemented by the patch::
 
 	Link: https://example.com/somewhere.html optional-other-stuff
@@ -217,7 +217,17 @@ latest public review posting of the patch; often this is automatically done
 by tools like b4 or a git hook like the one described in
 'Documentation/maintainer/configure-git.rst'.
 
-A third kind of tag is used to document who was involved in the development of
+If the URL points to a public bug report being fixed by the patch, use the
+"Closes:" tag instead::
+
+	Closes: https://example.com/issues/1234 optional-other-stuff
+
+Some bug trackers have the ability to close issues automatically when a
+commit with such a tag is applied. Some bots monitoring mailing lists can
+also track such tags and take certain actions. Private bug trackers and
+invalid URLs are forbidden.
+
+Another kind of tag is used to document who was involved in the development of
 the patch. Each of these uses this format::
 
 	tag: Full Name <email address> optional-other-stuff
@@ -251,8 +261,10 @@ The tags in common use are:
  - Reported-by: names a user who reported a problem which is fixed by this
    patch; this tag is used to give credit to the (often underappreciated)
    people who test our code and let us know when things do not work
-   correctly. Note, this tag should be followed by a Link: tag pointing to the
-   report, unless the report is not available on the web.
+   correctly. Note, this tag should be followed by a Closes: tag pointing to
+   the report, unless the report is not available on the web. The Link: tag
+   can be used instead of Closes: if the patch fixes a part of the issue(s)
+   being reported.
 
  - Cc: the named person received a copy of the patch and had the
    opportunity to comment on it.

View File

@@ -113,11 +113,9 @@ there is no collision with your six-character ID now, that condition may
 change five years from now.
 
 If related discussions or any other background information behind the change
-can be found on the web, add 'Link:' tags pointing to it. In case your patch
-fixes a bug, for example, add a tag with a URL referencing the report in the
-mailing list archives or a bug tracker; if the patch is a result of some
-earlier mailing list discussion or something documented on the web, point to
-it.
+can be found on the web, add 'Link:' tags pointing to it. If the patch is a
+result of some earlier mailing list discussions or something documented on the
+web, point to it.
 
 When linking to mailing list archives, preferably use the lore.kernel.org
 message archiver service. To create the link URL, use the contents of the
@@ -134,6 +132,16 @@ resources. In addition to giving a URL to a mailing list archive or bug,
 summarize the relevant points of the discussion that led to the
 patch as submitted.
 
+In case your patch fixes a bug, use the 'Closes:' tag with a URL referencing
+the report in the mailing list archives or a public bug tracker. For example::
+
+	Closes: https://example.com/issues/1234
+
+Some bug trackers have the ability to close issues automatically when a
+commit with such a tag is applied. Some bots monitoring mailing lists can
+also track such tags and take certain actions. Private bug trackers and
+invalid URLs are forbidden.
+
 If your patch fixes a bug in a specific commit, e.g. you found an issue using
 ``git bisect``, please use the 'Fixes:' tag with the first 12 characters of
 the SHA-1 ID, and the one line summary. Do not split the tag across multiple
@@ -495,9 +503,11 @@ Using Reported-by:, Tested-by:, Reviewed-by:, Suggested-by: and Fixes:
 The Reported-by tag gives credit to people who find bugs and report them and it
 hopefully inspires them to help us again in the future. The tag is intended for
 bugs; please do not use it to credit feature requests. The tag should be
-followed by a Link: tag pointing to the report, unless the report is not
-available on the web. Please note that if the bug was reported in private, then
-ask for permission first before using the Reported-by tag.
+followed by a Closes: tag pointing to the report, unless the report is not
+available on the web. The Link: tag can be used instead of Closes: if the patch
+fixes a part of the issue(s) being reported. Please note that if the bug was
+reported in private, then ask for permission first before using the Reported-by
+tag.
 
 A Tested-by: tag indicates that the patch has been successfully tested (in
 some environment) by the person named. This tag informs maintainers that
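
(Read together, the two documents above converge on one trailer convention. A hypothetical bug-fix commit would now carry something like the following — names and URL are placeholders:)

	Fixes: 1f2e3d4c5b6a ("subsystem: commit that introduced the bug")
	Reported-by: Jane Doe <jane@example.com>
	Closes: https://example.com/issues/1234
	Signed-off-by: Joe Developer <joe@example.com>

(Link: remains the tag for background discussion, or for a report the patch only partially addresses.)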

View File

@@ -92,15 +92,15 @@ getdelays命令的一般格式::
 CPU count real total virtual total delay total delay average
 8 7000000 6872122 3382277 0.423ms
 IO count delay total delay average
-0 0 0ms
+0 0 0.000ms
 SWAP count delay total delay average
-0 0 0ms
+0 0 0.000ms
 RECLAIM count delay total delay average
-0 0 0ms
+0 0 0.000ms
 THRASHING count delay total delay average
-0 0 0ms
+0 0 0.000ms
 COMPACT count delay total delay average
-0 0 0ms
+0 0 0.000ms
 WPCOPY count delay total delay average
 0 0 0ms

View File

@@ -7564,12 +7564,6 @@ T: git git://linuxtv.org/media_tree.git
 F: Documentation/admin-guide/media/em28xx*
 F: drivers/media/usb/em28xx/
 
-EMBEDDED LINUX
-M: Olivia Mackall <olivia@selenic.com>
-M: David Woodhouse <dwmw2@infradead.org>
-L: linux-embedded@vger.kernel.org
-S: Maintained
-
 EMMC CMDQ HOST CONTROLLER INTERFACE (CQHCI) DRIVER
 M: Adrian Hunter <adrian.hunter@intel.com>
 M: Ritesh Harjani <riteshh@codeaurora.org>

View File

@@ -581,7 +581,7 @@ static int salinfo_cpu_pre_down(unsigned int cpu)
  * 'data' contains an integer that corresponds to the feature we're
  * testing
  */
-static int proc_salinfo_show(struct seq_file *m, void *v)
+static int __maybe_unused proc_salinfo_show(struct seq_file *m, void *v)
 {
 	unsigned long data = (unsigned long)v;
 	seq_puts(m, (sal_platform_features & data) ? "1\n" : "0\n");

View File

@@ -77,7 +77,7 @@ skip:
 	return __per_cpu_start + __per_cpu_offset[smp_processor_id()];
 }
 
-static inline void
+static inline __init void
 alloc_per_cpu_data(void)
 {
 	size_t size = PERCPU_PAGE_SIZE * num_possible_cpus();

View File

@@ -58,7 +58,7 @@ huge_pte_offset (struct mm_struct *mm, unsigned long addr, unsigned long sz)
 	pgd = pgd_offset(mm, taddr);
 	if (pgd_present(*pgd)) {
-		p4d = p4d_offset(pgd, addr);
+		p4d = p4d_offset(pgd, taddr);
 		if (p4d_present(*p4d)) {
 			pud = pud_offset(p4d, taddr);
 			if (pud_present(*pud)) {

View File

@@ -245,7 +245,7 @@ static void read_ehdr(FILE *fp)
 		die("Unknown ELF version\n");
 	if (ehdr.e_ehsize != sizeof(Elf_Ehdr))
-		die("Bad Elf header size\n");
+		die("Bad ELF header size\n");
 	if (ehdr.e_phentsize != sizeof(Elf_Phdr))
 		die("Bad program header entry\n");

View File

@@ -2,7 +2,7 @@
 /*
  * arch/um/kernel/elf_aux.c
  *
- * Scan the Elf auxiliary vector provided by the host to extract
+ * Scan the ELF auxiliary vector provided by the host to extract
  * information about vsyscall-page, etc.
  *
  * Copyright (C) 2004 Fujitsu Siemens Computers GmbH

View File

@@ -200,9 +200,6 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
 				     const Elf_Shdr *symtab);
 #define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
 
-void *arch_kexec_kernel_image_load(struct kimage *image);
-#define arch_kexec_kernel_image_load arch_kexec_kernel_image_load
-
 int arch_kimage_file_post_load_cleanup(struct kimage *image);
 #define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
 #endif

View File

@@ -374,17 +374,6 @@ void machine_kexec(struct kimage *image)
 /* arch-dependent functionality related to kexec file-based syscall */
 
 #ifdef CONFIG_KEXEC_FILE
-void *arch_kexec_kernel_image_load(struct kimage *image)
-{
-	if (!image->fops || !image->fops->load)
-		return ERR_PTR(-ENOEXEC);
-
-	return image->fops->load(image, image->kernel_buf,
-				 image->kernel_buf_len, image->initrd_buf,
-				 image->initrd_buf_len, image->cmdline_buf,
-				 image->cmdline_buf_len);
-}
-
 /*
  * Apply purgatory relocations.
  *

View File

@@ -406,7 +406,7 @@ static void read_ehdr(FILE *fp)
 	if (ehdr.e_version != EV_CURRENT)
 		die("Unknown ELF version\n");
 	if (ehdr.e_ehsize != sizeof(Elf_Ehdr))
-		die("Bad Elf header size\n");
+		die("Bad ELF header size\n");
 	if (ehdr.e_phentsize != sizeof(Elf_Phdr))
 		die("Bad program header entry\n");
 	if (ehdr.e_shentsize != sizeof(Elf_Shdr))

View File

@@ -294,9 +294,7 @@ EXPORT_SYMBOL_GPL(dca3_get_tag);
  */
 u8 dca_get_tag(int cpu)
 {
-	struct device *dev = NULL;
-
-	return dca_common_get_tag(dev, cpu);
+	return dca_common_get_tag(NULL, cpu);
 }
 EXPORT_SYMBOL_GPL(dca_get_tag);

View File

@@ -2924,7 +2924,6 @@ err_unmap_bars:
 		iounmap(priv->odb_base);
 err_free_res:
 	pci_release_regions(pdev);
-	pci_clear_master(pdev);
 err_disable_pdev:
 	pci_disable_device(pdev);
 err_clean:
@@ -2962,7 +2961,6 @@ static void tsi721_remove(struct pci_dev *pdev)
 	pci_disable_msi(priv->pdev);
 #endif
 	pci_release_regions(pdev);
-	pci_clear_master(pdev);
 	pci_disable_device(pdev);
 	pci_set_drvdata(pdev, NULL);
 	kfree(priv);
@@ -2977,7 +2975,6 @@ static void tsi721_shutdown(struct pci_dev *pdev)
 	tsi721_disable_ints(priv);
 	tsi721_dma_stop_all(priv);
-	pci_clear_master(pdev);
 	pci_disable_device(pdev);
 }

View File

@@ -249,7 +249,7 @@ void rproc_coredump(struct rproc *rproc)
 		return;
 
 	if (class == ELFCLASSNONE) {
-		dev_err(&rproc->dev, "Elf class is not set\n");
+		dev_err(&rproc->dev, "ELF class is not set\n");
 		return;
 	}
@@ -361,7 +361,7 @@ void rproc_coredump_using_sections(struct rproc *rproc)
 		return;
 
 	if (class == ELFCLASSNONE) {
-		dev_err(&rproc->dev, "Elf class is not set\n");
+		dev_err(&rproc->dev, "ELF class is not set\n");
 		return;
 	}

View File

@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Remote Processor Framework Elf loader
+ * Remote Processor Framework ELF loader
  *
  * Copyright (C) 2011 Texas Instruments, Inc.
  * Copyright (C) 2011 Google, Inc.
@@ -39,7 +39,7 @@ int rproc_elf_sanity_check(struct rproc *rproc, const struct firmware *fw)
 	const char *name = rproc->firmware;
 	struct device *dev = &rproc->dev;
 	/*
-	 * Elf files are beginning with the same structure. Thus, to simplify
+	 * ELF files are beginning with the same structure. Thus, to simplify
 	 * header parsing, we can use the elf32_hdr one for both elf64 and
 	 * elf32.
 	 */

View File

@@ -2058,7 +2058,7 @@ static int elf_core_dump(struct coredump_params *cprm)
 	has_dumped = 1;
 
-	offset += sizeof(elf);				/* Elf header */
+	offset += sizeof(elf);				/* ELF header */
 	offset += segs * sizeof(struct elf_phdr);	/* Program headers */
 
 	/* Write notes phdr entry */

View File

@@ -1540,7 +1540,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 	fill_note(&auxv_note, "CORE", NT_AUXV, i * sizeof(elf_addr_t), auxv);
 	thread_status_size += notesize(&auxv_note);
 
-	offset = sizeof(*elf);				/* Elf header */
+	offset = sizeof(*elf);				/* ELF header */
 	offset += segs * sizeof(struct elf_phdr);	/* Program headers */
 
 	/* Write notes phdr entry */

View File

@@ -43,7 +43,7 @@
  * LOCKING:
  * There are three level of locking required by epoll :
  *
- * 1) epmutex (mutex)
+ * 1) epnested_mutex (mutex)
  * 2) ep->mtx (mutex)
 * 3) ep->lock (rwlock)
 *
@@ -57,14 +57,8 @@
  * we need a lock that will allow us to sleep. This lock is a
  * mutex (ep->mtx). It is acquired during the event transfer loop,
  * during epoll_ctl(EPOLL_CTL_DEL) and during eventpoll_release_file().
- * Then we also need a global mutex to serialize eventpoll_release_file()
- * and ep_free().
- * This mutex is acquired by ep_free() during the epoll file
- * cleanup path and it is also acquired by eventpoll_release_file()
- * if a file has been pushed inside an epoll set and it is then
- * close()d without a previous call to epoll_ctl(EPOLL_CTL_DEL).
- * It is also acquired when inserting an epoll fd onto another epoll
- * fd. We do this so that we walk the epoll tree and ensure that this
+ * The epnested_mutex is acquired when inserting an epoll fd onto another
+ * epoll fd. We do this so that we walk the epoll tree and ensure that this
  * insertion does not create a cycle of epoll file descriptors, which
  * could lead to deadlock. We need a global mutex to prevent two
  * simultaneous inserts (A into B and B into A) from racing and
@@ -80,9 +74,9 @@
  * of epoll file descriptors, we use the current recursion depth as
  * the lockdep subkey.
  * It is possible to drop the "ep->mtx" and to use the global
- * mutex "epmutex" (together with "ep->lock") to have it working,
+ * mutex "epnested_mutex" (together with "ep->lock") to have it working,
  * but having "ep->mtx" will make the interface more scalable.
- * Events that require holding "epmutex" are very rare, while for
+ * Events that require holding "epnested_mutex" are very rare, while for
  * normal operations the epoll private "ep->mtx" will guarantee
  * a better scalability.
  */
@@ -153,6 +147,13 @@ struct epitem {
 	/* The file descriptor information this item refers to */
 	struct epoll_filefd ffd;
 
+	/*
+	 * Protected by file->f_lock, true for to-be-released epitem already
+	 * removed from the "struct file" items list; together with
+	 * eventpoll->refcount orchestrates "struct eventpoll" disposal
+	 */
+	bool dying;
+
 	/* List containing poll wait queues */
 	struct eppoll_entry *pwqlist;
@@ -217,6 +218,12 @@ struct eventpoll {
 	u64 gen;
 	struct hlist_head refs;
 
+	/*
+	 * usage count, used together with epitem->dying to
+	 * orchestrate the disposal of this struct
+	 */
+	refcount_t refcount;
+
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	/* used to track busy poll napi_id */
 	unsigned int napi_id;
@@ -240,10 +247,8 @@ struct ep_pqueue {
 /* Maximum number of epoll watched descriptors, per user */
 static long max_user_watches __read_mostly;
 
-/*
- * This mutex is used to serialize ep_free() and eventpoll_release_file().
- */
-static DEFINE_MUTEX(epmutex);
+/* Used for cycles detection */
+static DEFINE_MUTEX(epnested_mutex);
 
 static u64 loop_check_gen = 0;
@@ -258,7 +263,7 @@ static struct kmem_cache *pwq_cache __read_mostly;
 
 /*
  * List of files with newly added links, where we may need to limit the number
- * of emanating paths. Protected by the epmutex.
+ * of emanating paths. Protected by the epnested_mutex.
 */
 struct epitems_head {
 	struct hlist_head epitems;
@@ -557,8 +562,7 @@ static void ep_remove_wait_queue(struct eppoll_entry *pwq)
 
 /*
  * This function unregisters poll callbacks from the associated file
- * descriptor. Must be called with "mtx" held (or "epmutex" if called from
- * ep_free).
+ * descriptor. Must be called with "mtx" held.
 */
 static void ep_unregister_pollwait(struct eventpoll *ep, struct epitem *epi)
 {
@@ -681,11 +685,40 @@ static void epi_rcu_free(struct rcu_head *head)
 	kmem_cache_free(epi_cache, epi);
 }
 
+static void ep_get(struct eventpoll *ep)
+{
+	refcount_inc(&ep->refcount);
+}
+
+/*
+ * Returns true if the event poll can be disposed
+ */
+static bool ep_refcount_dec_and_test(struct eventpoll *ep)
+{
+	if (!refcount_dec_and_test(&ep->refcount))
+		return false;
+
+	WARN_ON_ONCE(!RB_EMPTY_ROOT(&ep->rbr.rb_root));
+	return true;
+}
+
+static void ep_free(struct eventpoll *ep)
+{
+	mutex_destroy(&ep->mtx);
+	free_uid(ep->user);
+	wakeup_source_unregister(ep->ws);
+	kfree(ep);
+}
+
 /*
  * Removes a "struct epitem" from the eventpoll RB tree and deallocates
  * all the associated resources. Must be called with "mtx" held.
+ * If the dying flag is set, do the removal only if force is true.
+ * This prevents ep_clear_and_put() from dropping all the ep references
+ * while running concurrently with eventpoll_release_file().
+ * Returns true if the eventpoll can be disposed.
 */
-static int ep_remove(struct eventpoll *ep, struct epitem *epi)
+static bool __ep_remove(struct eventpoll *ep, struct epitem *epi, bool force)
 {
 	struct file *file = epi->ffd.file;
 	struct epitems_head *to_free;
@@ -700,6 +733,11 @@ static int ep_remove(struct eventpoll *ep, struct epitem *epi)
 
 	/* Remove the current item from the list of epoll hooks */
 	spin_lock(&file->f_lock);
+	if (epi->dying && !force) {
+		spin_unlock(&file->f_lock);
+		return false;
+	}
+
 	to_free = NULL;
 	head = file->f_ep;
 	if (head->first == &epi->fllink && !epi->fllink.next) {
@@ -733,28 +771,28 @@ static int ep_remove(struct eventpoll *ep, struct epitem *epi)
 	call_rcu(&epi->rcu, epi_rcu_free);
 
 	percpu_counter_dec(&ep->user->epoll_watches);
-
-	return 0;
+	return ep_refcount_dec_and_test(ep);
 }
 
-static void ep_free(struct eventpoll *ep)
+/*
+ * ep_remove variant for callers owing an additional reference to the ep
+ */
+static void ep_remove_safe(struct eventpoll *ep, struct epitem *epi)
 {
-	struct rb_node *rbp;
+	WARN_ON_ONCE(__ep_remove(ep, epi, false));
+}
+
+static void ep_clear_and_put(struct eventpoll *ep)
+{
+	struct rb_node *rbp, *next;
 	struct epitem *epi;
+	bool dispose;
 
 	/* We need to release all tasks waiting for these file */
 	if (waitqueue_active(&ep->poll_wait))
 		ep_poll_safewake(ep, NULL, 0);
 
-	/*
-	 * We need to lock this because we could be hit by
-	 * eventpoll_release_file() while we're freeing the "struct eventpoll".
-	 * We do not need to hold "ep->mtx" here because the epoll file
-	 * is on the way to be removed and no one has references to it
-	 * anymore. The only hit might come from eventpoll_release_file() but
-	 * holding "epmutex" is sufficient here.
-	 */
-	mutex_lock(&epmutex);
+	mutex_lock(&ep->mtx);
 
 	/*
 	 * Walks through the whole tree by unregistering poll callbacks.
@@ -767,26 +805,25 @@ static void ep_free(struct eventpoll *ep)
 	}
 
 	/*
-	 * Walks through the whole tree by freeing each "struct epitem". At this
-	 * point we are sure no poll callbacks will be lingering around, and also by
-	 * holding "epmutex" we can be sure that no file cleanup code will hit
-	 * us during this operation. So we can avoid the lock on "ep->lock".
-	 * We do not need to lock ep->mtx, either, we only do it to prevent
-	 * a lockdep warning.
+	 * Walks through the whole tree and try to free each "struct epitem".
+	 * Note that ep_remove_safe() will not remove the epitem in case of a
+	 * racing eventpoll_release_file(); the latter will do the removal.
+	 * At this point we are sure no poll callbacks will be lingering around.
+	 * Since we still own a reference to the eventpoll struct, the loop can't
+	 * dispose it.
 	 */
-	mutex_lock(&ep->mtx);
-	while ((rbp = rb_first_cached(&ep->rbr)) != NULL) {
+	for (rbp = rb_first_cached(&ep->rbr); rbp; rbp = next) {
+		next = rb_next(rbp);
 		epi = rb_entry(rbp, struct epitem, rbn);
-		ep_remove(ep, epi);
+		ep_remove_safe(ep, epi);
 		cond_resched();
 	}
+
+	dispose = ep_refcount_dec_and_test(ep);
 	mutex_unlock(&ep->mtx);
 
-	mutex_unlock(&epmutex);
-	mutex_destroy(&ep->mtx);
-	free_uid(ep->user);
-	wakeup_source_unregister(ep->ws);
-	kfree(ep);
+	if (dispose)
+		ep_free(ep);
 }
 
 static int ep_eventpoll_release(struct inode *inode, struct file *file)
@@ -794,7 +831,7 @@ static int ep_eventpoll_release(struct inode *inode, struct file *file)
 	struct eventpoll *ep = file->private_data;
 
 	if (ep)
-		ep_free(ep);
+		ep_clear_and_put(ep);
 
 	return 0;
 }
@@ -906,33 +943,34 @@ void eventpoll_release_file(struct file *file)
 {
 	struct eventpoll *ep;
 	struct epitem *epi;
-	struct hlist_node *next;
+	bool dispose;
 
 	/*
-	 * We don't want to get "file->f_lock" because it is not
-	 * necessary. It is not necessary because we're in the "struct file"
-	 * cleanup path, and this means that no one is using this file anymore.
-	 * So, for example, epoll_ctl() cannot hit here since if we reach this
-	 * point, the file counter already went to zero and fget() would fail.
-	 * The only hit might come from ep_free() but by holding the mutex
-	 * will correctly serialize the operation. We do need to acquire
-	 * "ep->mtx" after "epmutex" because ep_remove() requires it when called
-	 * from anywhere but ep_free().
-	 *
-	 * Besides, ep_remove() acquires the lock, so we can't hold it here.
+	 * Use the 'dying' flag to prevent a concurrent ep_clear_and_put() from
+	 * touching the epitems list before eventpoll_release_file() can access
+	 * the ep->mtx.
 	 */
-	mutex_lock(&epmutex);
-	if (unlikely(!file->f_ep)) {
-		mutex_unlock(&epmutex);
-		return;
-	}
-	hlist_for_each_entry_safe(epi, next, file->f_ep, fllink) {
+again:
+	spin_lock(&file->f_lock);
+	if (file->f_ep && file->f_ep->first) {
+		epi = hlist_entry(file->f_ep->first, struct epitem, fllink);
+		epi->dying = true;
+		spin_unlock(&file->f_lock);
+
+		/*
+		 * ep access is safe as we still own a reference to the ep
+		 * struct
+		 */
 		ep = epi->ep;
-		mutex_lock_nested(&ep->mtx, 0);
-		ep_remove(ep, epi);
+		mutex_lock(&ep->mtx);
+		dispose = __ep_remove(ep, epi, true);
 		mutex_unlock(&ep->mtx);
+
+		if (dispose)
+			ep_free(ep);
+		goto again;
 	}
-	mutex_unlock(&epmutex);
+	spin_unlock(&file->f_lock);
 }
 
 static int ep_alloc(struct eventpoll **pep)
@@ -955,6 +993,7 @@ static int ep_alloc(struct eventpoll **pep)
 	ep->rbr = RB_ROOT_CACHED;
 	ep->ovflist = EP_UNACTIVE_PTR;
 	ep->user = user;
+	refcount_set(&ep->refcount, 1);
 
 	*pep = ep;
@@ -1223,10 +1262,10 @@ out_unlock:
 		 */
 		list_del_init(&wait->entry);
 		/*
-		 * ->whead != NULL protects us from the race with ep_free()
-		 * or ep_remove(), ep_remove_wait_queue() takes whead->lock
-		 * held by the caller. Once we nullify it, nothing protects
-		 * ep/epi or even wait.
+		 * ->whead != NULL protects us from the race with
+		 * ep_clear_and_put() or ep_remove(), ep_remove_wait_queue()
+		 * takes whead->lock held by the caller. Once we nullify it,
+		 * nothing protects ep/epi or even wait.
		 */
 		smp_store_release(&ep_pwq_from_wait(wait)->whead, NULL);
 	}
@@ -1298,7 +1337,7 @@ static void ep_rbtree_insert(struct eventpoll *ep, struct epitem *epi)
 * is connected to n file sources. In this case each file source has 1 path
 * of length 1. Thus, the numbers below should be more than sufficient. These
 * path limits are enforced during an EPOLL_CTL_ADD operation, since a modify
- * and delete can't add additional paths. Protected by the epmutex.
+ * and delete can't add additional paths. Protected by the epnested_mutex.
 */
 static const int path_limits[PATH_ARR_SIZE] = { 1000, 500, 100, 50, 10 };
 static int path_count[PATH_ARR_SIZE];
@@ -1496,16 +1535,22 @@ static int ep_insert(struct eventpoll *ep, const struct epoll_event *event,
 	if (tep)
 		mutex_unlock(&tep->mtx);
 
+	/*
+	 * ep_remove_safe() calls in the later error paths can't lead to
+	 * ep_free() as the ep file itself still holds an ep reference.
+	 */
+	ep_get(ep);
+
 	/* now check if we've created too many backpaths */
 	if (unlikely(full_check && reverse_path_check())) {
-		ep_remove(ep, epi);
+		ep_remove_safe(ep, epi);
 		return -EINVAL;
 	}
 
 	if (epi->event.events & EPOLLWAKEUP) {
 		error = ep_create_wakeup_source(epi);
 		if (error) {
-			ep_remove(ep, epi);
+			ep_remove_safe(ep, epi);
 			return error;
 		}
 	}
@@ -1529,7 +1574,7 @@ static int ep_insert(struct eventpoll *ep, const struct epoll_event *event,
 	 * high memory pressure.
 	 */
 	if (unlikely(!epq.epi)) {
-		ep_remove(ep, epi);
+		ep_remove_safe(ep, epi);
 		return -ENOMEM;
 	}
@@ -2025,7 +2070,7 @@ static int do_epoll_create(int flags)
 out_free_fd:
 	put_unused_fd(fd);
 out_free_ep:
-	ep_free(ep);
+	ep_clear_and_put(ep);
 	return error;
 }
@@ -2135,7 +2180,7 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
 	 * We do not need to take the global 'epumutex' on EPOLL_CTL_ADD when
	 * the epoll file descriptor is attaching directly to a wakeup source,
	 * unless the epoll file descriptor is nested. The purpose of taking the
-	 * 'epmutex' on add is to prevent complex toplogies such as loops and
+	 * 'epnested_mutex' on add is to prevent complex toplogies such as loops and
	 * deep wakeup paths from forming in parallel through multiple
	 * EPOLL_CTL_ADD operations.
	 */
@@ -2146,7 +2191,7 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
 		if (READ_ONCE(f.file->f_ep) || ep->gen == loop_check_gen ||
 		    is_file_epoll(tf.file)) {
 			mutex_unlock(&ep->mtx);
-			error = epoll_mutex_lock(&epmutex, 0, nonblock);
+			error = epoll_mutex_lock(&epnested_mutex, 0, nonblock);
 			if (error)
 				goto error_tgt_fput;
 			loop_check_gen++;
@@ -2180,10 +2225,16 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
 			error = -EEXIST;
 		break;
 	case EPOLL_CTL_DEL:
-		if (epi)
-			error = ep_remove(ep, epi);
-		else
+		if (epi) {
+			/*
+			 * The eventpoll itself is still alive: the refcount
+			 * can't go to zero here.
+			 */
+			ep_remove_safe(ep, epi);
+			error = 0;
+		} else {
 			error = -ENOENT;
+		}
 		break;
 	case EPOLL_CTL_MOD:
 		if (epi) {
@@ -2201,7 +2252,7 @@ error_tgt_fput:
 	if (full_check) {
 		clear_tfile_check_list();
 		loop_check_gen++;
-		mutex_unlock(&epmutex);
+		mutex_unlock(&epnested_mutex);
 	}
 
 	fdput(tf);
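
(Stripped of the locking details, the ownership rule this diff introduces is small: the ep file holds one reference, every inserted epitem holds another, and whichever path drops the last one frees the structure. A self-contained userspace C11 sketch of just that rule — my illustration, not kernel code:)

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct eventpoll {
        atomic_int refcount;
    };

    static struct eventpoll *ep_alloc(void)
    {
        struct eventpoll *ep = malloc(sizeof(*ep));
        atomic_init(&ep->refcount, 1);   /* reference held by the ep file */
        return ep;
    }

    static void ep_get(struct eventpoll *ep)
    {
        atomic_fetch_add(&ep->refcount, 1);   /* taken by each insert */
    }

    /* Returns true when the caller dropped the last reference and must
     * dispose of the structure (cf. ep_refcount_dec_and_test() above). */
    static bool ep_put(struct eventpoll *ep)
    {
        return atomic_fetch_sub(&ep->refcount, 1) == 1;
    }

    int main(void)
    {
        struct eventpoll *ep = ep_alloc();
        ep_get(ep);          /* one watched item */
        if (ep_put(ep))      /* item removed: not the last reference */
            free(ep);
        if (ep_put(ep))      /* file released: last reference, free */
            free(ep);
        puts("freed exactly once");
        return 0;
    }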

View File

@@ -21,9 +21,8 @@ static void nfs3_prepare_get_acl(struct posix_acl **p)
 {
 	struct posix_acl *sentinel = uncached_acl_sentinel(current);
 
-	if (cmpxchg(p, ACL_NOT_CACHED, sentinel) != ACL_NOT_CACHED) {
-		/* Not the first reader or sentinel already in place. */
-	}
+	/* If the ACL isn't being read yet, set our sentinel. */
+	cmpxchg(p, ACL_NOT_CACHED, sentinel);
 }
 
 static void nfs3_complete_get_acl(struct posix_acl **p, struct posix_acl *acl)
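
(The rewritten helper leans on a property worth spelling out: when the slot already holds a sentinel or a cached ACL, a failed compare-exchange is exactly the desired no-op, so the return value can be ignored. A standalone C11 analogue — illustrative only; NOT_CACHED stands in for ACL_NOT_CACHED:)

    #include <stdatomic.h>
    #include <stdio.h>

    #define NOT_CACHED ((void *)-1)

    int main(void)
    {
        _Atomic(void *) slot = NOT_CACHED;
        int sentinel;
        void *expected = NOT_CACHED;

        /* cmpxchg(p, ACL_NOT_CACHED, sentinel) analogue: installs the
         * sentinel only if the slot still holds the "not cached" marker. */
        atomic_compare_exchange_strong(&slot, &expected, &sentinel);

        printf("slot %s the sentinel\n",
               atomic_load(&slot) == (void *)&sentinel ? "holds" : "kept");
        return 0;
    }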

View File

@@ -803,8 +803,8 @@ bail:
  * a better backward&forward compatibility, since a small piece of
  * request will be less likely to be broken if disk layout get changed.
  */
-static int ocfs2_info_handle(struct inode *inode, struct ocfs2_info *info,
-			     int compat_flag)
+static noinline_for_stack int
+ocfs2_info_handle(struct inode *inode, struct ocfs2_info *info, int compat_flag)
 {
 	int i, status = 0;
 	u64 req_addr;
@@ -840,27 +840,26 @@ bail:
 long ocfs2_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 {
 	struct inode *inode = file_inode(filp);
-	int new_clusters;
-	int status;
-	struct ocfs2_space_resv sr;
-	struct ocfs2_new_group_input input;
-	struct reflink_arguments args;
-	const char __user *old_path;
-	const char __user *new_path;
-	bool preserve;
-	struct ocfs2_info info;
 	void __user *argp = (void __user *)arg;
+	int status;
 
 	switch (cmd) {
 	case OCFS2_IOC_RESVSP:
 	case OCFS2_IOC_RESVSP64:
 	case OCFS2_IOC_UNRESVSP:
 	case OCFS2_IOC_UNRESVSP64:
+	{
+		struct ocfs2_space_resv sr;
+
 		if (copy_from_user(&sr, (int __user *) arg, sizeof(sr)))
 			return -EFAULT;
 
 		return ocfs2_change_file_space(filp, cmd, &sr);
+	}
 	case OCFS2_IOC_GROUP_EXTEND:
+	{
+		int new_clusters;
+
 		if (!capable(CAP_SYS_RESOURCE))
 			return -EPERM;
@@ -873,8 +872,12 @@ long ocfs2_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		status = ocfs2_group_extend(inode, new_clusters);
 		mnt_drop_write_file(filp);
 		return status;
+	}
 	case OCFS2_IOC_GROUP_ADD:
 	case OCFS2_IOC_GROUP_ADD64:
+	{
+		struct ocfs2_new_group_input input;
+
 		if (!capable(CAP_SYS_RESOURCE))
 			return -EPERM;
@@ -887,7 +890,14 @@ long ocfs2_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		status = ocfs2_group_add(inode, &input);
 		mnt_drop_write_file(filp);
 		return status;
+	}
 	case OCFS2_IOC_REFLINK:
+	{
+		struct reflink_arguments args;
+		const char __user *old_path;
+		const char __user *new_path;
+		bool preserve;
+
 		if (copy_from_user(&args, argp, sizeof(args)))
 			return -EFAULT;
 		old_path = (const char __user *)(unsigned long)args.old_path;
@@ -895,11 +905,16 @@ long ocfs2_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		preserve = (args.preserve != 0);
 
 		return ocfs2_reflink_ioctl(inode, old_path, new_path, preserve);
+	}
 	case OCFS2_IOC_INFO:
+	{
+		struct ocfs2_info info;
+
 		if (copy_from_user(&info, argp, sizeof(struct ocfs2_info)))
 			return -EFAULT;
 
 		return ocfs2_info_handle(inode, &info, 0);
+	}
 	case FITRIM:
 	{
 		struct super_block *sb = inode->i_sb;

View File

@@ -219,6 +219,8 @@ static inline void task_state(struct seq_file *m, struct pid_namespace *ns,
 		seq_put_decimal_ull(m, "\t", task_session_nr_ns(p, pid->numbers[g].ns));
 #endif
 	seq_putc(m, '\n');
+
+	seq_printf(m, "Kthread:\t%c\n", p->flags & PF_KTHREAD ? '1' : '0');
 }
 
 void render_sigset_t(struct seq_file *m, const char *header,

View File

@@ -700,7 +700,6 @@ int proc_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
 		return error;
 
 	setattr_copy(&nop_mnt_idmap, inode, attr);
-	mark_inode_dirty(inode);
 	return 0;
 }

View File

@@ -127,7 +127,6 @@ static int proc_notify_change(struct mnt_idmap *idmap,
 		return error;
 
 	setattr_copy(&nop_mnt_idmap, inode, iattr);
-	mark_inode_dirty(inode);
 
 	proc_set_user(de, inode->i_uid, inode->i_gid);
 	de->mode = inode->i_mode;

View File

@@ -841,7 +841,6 @@ static int proc_sys_setattr(struct mnt_idmap *idmap,
 		return error;
 
 	setattr_copy(&nop_mnt_idmap, inode, attr);
-	mark_inode_dirty(inode);
 	return 0;
 }
} }

View File

@@ -22,30 +22,6 @@
 #define arch_irq_stat() 0
 #endif
 
-#ifdef arch_idle_time
-
-u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
-{
-	u64 idle;
-
-	idle = kcs->cpustat[CPUTIME_IDLE];
-	if (cpu_online(cpu) && !nr_iowait_cpu(cpu))
-		idle += arch_idle_time(cpu);
-	return idle;
-}
-
-static u64 get_iowait_time(struct kernel_cpustat *kcs, int cpu)
-{
-	u64 iowait;
-
-	iowait = kcs->cpustat[CPUTIME_IOWAIT];
-	if (cpu_online(cpu) && nr_iowait_cpu(cpu))
-		iowait += arch_idle_time(cpu);
-	return iowait;
-}
-
-#else
-
 u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
 {
 	u64 idle, idle_usecs = -1ULL;
@@ -78,8 +54,6 @@ static u64 get_iowait_time(struct kernel_cpustat *kcs, int cpu)
 	return iowait;
 }
 
-#endif
-
 static void show_irq_gap(struct seq_file *p, unsigned int gap)
 {
 	static const char zeros[] = " 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0";

View File

@@ -339,7 +339,7 @@ static ssize_t __read_vmcore(struct iov_iter *iter, loff_t *fpos)
		return acc;
	}

-	/* Read Elf note segment */
+	/* Read ELF note segment */
	if (*fpos < elfcorebuf_sz + elfnotes_sz) {
		void *kaddr;
@@ -1109,7 +1109,7 @@ static int __init process_ptload_program_headers_elf64(char *elfptr,
	ehdr_ptr = (Elf64_Ehdr *)elfptr;
	phdr_ptr = (Elf64_Phdr*)(elfptr + sizeof(Elf64_Ehdr)); /* PT_NOTE hdr */

-	/* Skip Elf header, program headers and Elf note segment. */
+	/* Skip ELF header, program headers and ELF note segment. */
	vmcore_off = elfsz + elfnotes_sz;

	for (i = 0; i < ehdr_ptr->e_phnum; i++, phdr_ptr++) {
@@ -1152,7 +1152,7 @@ static int __init process_ptload_program_headers_elf32(char *elfptr,
	ehdr_ptr = (Elf32_Ehdr *)elfptr;
	phdr_ptr = (Elf32_Phdr*)(elfptr + sizeof(Elf32_Ehdr)); /* PT_NOTE hdr */

-	/* Skip Elf header, program headers and Elf note segment. */
+	/* Skip ELF header, program headers and ELF note segment. */
	vmcore_off = elfsz + elfnotes_sz;

	for (i = 0; i < ehdr_ptr->e_phnum; i++, phdr_ptr++) {
@@ -1188,7 +1188,7 @@ static void set_vmcore_list_offsets(size_t elfsz, size_t elfnotes_sz,
	loff_t vmcore_off;
	struct vmcore *m;

-	/* Skip Elf header, program headers and Elf note segment. */
+	/* Skip ELF header, program headers and ELF note segment. */
	vmcore_off = elfsz + elfnotes_sz;

	list_for_each_entry(m, vc_list, list) {
@@ -1213,7 +1213,7 @@ static int __init parse_crash_elf64_headers(void)
	addr = elfcorehdr_addr;

-	/* Read Elf header */
+	/* Read ELF header */
	rc = elfcorehdr_read((char *)&ehdr, sizeof(Elf64_Ehdr), &addr);
	if (rc < 0)
		return rc;
@@ -1269,7 +1269,7 @@ static int __init parse_crash_elf32_headers(void)
	addr = elfcorehdr_addr;

-	/* Read Elf header */
+	/* Read ELF header */
	rc = elfcorehdr_read((char *)&ehdr, sizeof(Elf32_Ehdr), &addr);
	if (rc < 0)
		return rc;
@@ -1376,12 +1376,12 @@ static void vmcoredd_write_header(void *buf, struct vmcoredd_data *data,
}

/**
- * vmcoredd_update_program_headers - Update all Elf program headers
+ * vmcoredd_update_program_headers - Update all ELF program headers
 * @elfptr: Pointer to elf header
 * @elfnotesz: Size of elf notes aligned to page size
 * @vmcoreddsz: Size of device dumps to be added to elf note header
 *
- * Determine type of Elf header (Elf64 or Elf32) and update the elf note size.
+ * Determine type of ELF header (Elf64 or Elf32) and update the elf note size.
 * Also update the offsets of all the program headers after the elf note header.
 */
static void vmcoredd_update_program_headers(char *elfptr, size_t elfnotesz,
@@ -1439,10 +1439,10 @@ static void vmcoredd_update_program_headers(char *elfptr, size_t elfnotesz,
/**
 * vmcoredd_update_size - Update the total size of the device dumps and update
- *                        Elf header
+ *                        ELF header
 * @dump_size: Size of the current device dump to be added to total size
 *
- * Update the total size of all the device dumps and update the Elf program
+ * Update the total size of all the device dumps and update the ELF program
 * headers. Calculate the new offsets for the vmcore list and update the
 * total vmcore size.
 */
@@ -1466,7 +1466,7 @@ static void vmcoredd_update_size(size_t dump_size)
 * @data: dump info.
 *
 * Allocate a buffer and invoke the calling driver's dump collect routine.
- * Write Elf note at the beginning of the buffer to indicate vmcore device
+ * Write ELF note at the beginning of the buffer to indicate vmcore device
 * dump and add the dump to global list.
 */
int vmcore_add_device_dump(struct vmcoredd_data *data)

@@ -48,10 +48,13 @@ struct task_delay_info {
	u64 wpcopy_start;
	u64 wpcopy_delay;	/* wait for write-protect copy */

+	u64 irq_delay;	/* wait for IRQ/SOFTIRQ */
+
	u32 freepages_count;	/* total count of memory reclaim */
	u32 thrashing_count;	/* total count of thrash waits */
	u32 compact_count;	/* total count of memory compact */
	u32 wpcopy_count;	/* total count of write-protect copy */
+	u32 irq_count;	/* total count of IRQ/SOFTIRQ */
};
#endif

@@ -81,6 +84,7 @@ extern void __delayacct_compact_start(void);
extern void __delayacct_compact_end(void);
extern void __delayacct_wpcopy_start(void);
extern void __delayacct_wpcopy_end(void);
+extern void __delayacct_irq(struct task_struct *task, u32 delta);

static inline void delayacct_tsk_init(struct task_struct *tsk)
{
@@ -215,6 +219,15 @@ static inline void delayacct_wpcopy_end(void)
		__delayacct_wpcopy_end();
}

+static inline void delayacct_irq(struct task_struct *task, u32 delta)
+{
+	if (!static_branch_unlikely(&delayacct_key))
+		return;
+
+	if (task->delays)
+		__delayacct_irq(task, delta);
+}
+
#else
static inline void delayacct_init(void)
{}
@@ -253,6 +266,8 @@ static inline void delayacct_wpcopy_start(void)
{}
static inline void delayacct_wpcopy_end(void)
{}
+static inline void delayacct_irq(struct task_struct *task, u32 delta)
+{}

#endif /* CONFIG_TASK_DELAY_ACCT */

include/linux/hex.h (new file):

@@ -0,0 +1,35 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_HEX_H
#define _LINUX_HEX_H

#include <linux/types.h>

extern const char hex_asc[];
#define hex_asc_lo(x)	hex_asc[((x) & 0x0f)]
#define hex_asc_hi(x)	hex_asc[((x) & 0xf0) >> 4]

static inline char *hex_byte_pack(char *buf, u8 byte)
{
	*buf++ = hex_asc_hi(byte);
	*buf++ = hex_asc_lo(byte);
	return buf;
}

extern const char hex_asc_upper[];
#define hex_asc_upper_lo(x)	hex_asc_upper[((x) & 0x0f)]
#define hex_asc_upper_hi(x)	hex_asc_upper[((x) & 0xf0) >> 4]

static inline char *hex_byte_pack_upper(char *buf, u8 byte)
{
	*buf++ = hex_asc_upper_hi(byte);
	*buf++ = hex_asc_upper_lo(byte);
	return buf;
}

extern int hex_to_bin(unsigned char ch);
extern int __must_check hex2bin(u8 *dst, const char *src, size_t count);
extern char *bin2hex(char *dst, const void *src, size_t count);

bool mac_pton(const char *s, u8 *mac);

#endif

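Callers that only need the hex helpers can now include this small header directly instead of pulling in all of linux/kernel.h. A short usage sketch of the moved API (kernel-style snippet, assuming a normal kernel build context):

	#include <linux/hex.h>
	#include <linux/printk.h>

	/* Render two bytes as lowercase hex, e.g. {0xde, 0xad} -> "dead". */
	static void demo_hex(void)
	{
		u8 raw[2] = { 0xde, 0xad };
		u8 back[2];
		char out[5];
		char *p = out;

		p = hex_byte_pack(p, raw[0]);
		p = hex_byte_pack(p, raw[1]);
		*p = '\0';

		/* hex2bin() is __must_check; it fails on non-hex input. */
		if (hex2bin(back, out, sizeof(back)))
			pr_warn("unexpected: hex round-trip failed\n");
	}
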
@@ -20,6 +20,7 @@
#include <linux/compiler.h>
#include <linux/container_of.h>
#include <linux/bitops.h>
+#include <linux/hex.h>
#include <linux/kstrtox.h>
#include <linux/log2.h>
#include <linux/math.h>
@@ -263,34 +264,6 @@ extern enum system_states {
	SYSTEM_SUSPEND,
} system_state;

-extern const char hex_asc[];
-#define hex_asc_lo(x)	hex_asc[((x) & 0x0f)]
-#define hex_asc_hi(x)	hex_asc[((x) & 0xf0) >> 4]
-
-static inline char *hex_byte_pack(char *buf, u8 byte)
-{
-	*buf++ = hex_asc_hi(byte);
-	*buf++ = hex_asc_lo(byte);
-	return buf;
-}
-
-extern const char hex_asc_upper[];
-#define hex_asc_upper_lo(x)	hex_asc_upper[((x) & 0x0f)]
-#define hex_asc_upper_hi(x)	hex_asc_upper[((x) & 0xf0) >> 4]
-
-static inline char *hex_byte_pack_upper(char *buf, u8 byte)
-{
-	*buf++ = hex_asc_upper_hi(byte);
-	*buf++ = hex_asc_upper_lo(byte);
-	return buf;
-}
-
-extern int hex_to_bin(unsigned char ch);
-extern int __must_check hex2bin(u8 *dst, const char *src, size_t count);
-extern char *bin2hex(char *dst, const void *src, size_t count);
-
-bool mac_pton(const char *s, u8 *mac);
-
/*
 * General tracing related utility functions - trace_printk(),
 * tracing_on/tracing_off and tracing_start()/tracing_stop

@@ -190,7 +190,6 @@ int kexec_purgatory_get_set_symbol(struct kimage *image, const char *name,
				   void *buf, unsigned int size,
				   bool get_value);
void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name);
-void *kexec_image_load_default(struct kimage *image);

#ifndef arch_kexec_kernel_image_probe
static inline int
@@ -207,13 +206,6 @@ static inline int arch_kimage_file_post_load_cleanup(struct kimage *image)
}
#endif

-#ifndef arch_kexec_kernel_image_load
-static inline void *arch_kexec_kernel_image_load(struct kimage *image)
-{
-	return kexec_image_load_default(image);
-}
-#endif
-
#ifdef CONFIG_KEXEC_SIG
#ifdef CONFIG_SIGNED_PE_FILE_VERIFICATION
int kexec_kernel_verify_pe_sig(const char *kernel, unsigned long kernel_len);

@@ -27,4 +27,11 @@ typedef union {
	long long ll;
} DWunion;

+long long notrace __ashldi3(long long u, word_type b);
+long long notrace __ashrdi3(long long u, word_type b);
+word_type notrace __cmpdi2(long long a, long long b);
+long long notrace __lshrdi3(long long u, word_type b);
+long long notrace __muldi3(long long u, long long v);
+word_type notrace __ucmpdi2(unsigned long long a, unsigned long long b);
+
#endif /* __ASM_LIBGCC_H */

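Declaring these routines in the shared header gives each out-of-line definition in lib/ a prior prototype, which is exactly what warnings like gcc's -Wmissing-prototypes look for in W=1 builds. A hypothetical minimal file showing the effect (compile with gcc -c -Wmissing-prototypes):

	/* With this prototype present, the definition below is warning-free;
	 * delete it and -Wmissing-prototypes flags the definition instead. */
	int square(int x);

	int square(int x)
	{
		return x * x;
	}
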
@@ -156,13 +156,13 @@ RB_DECLARE_CALLBACKS(RBSTATIC, RBNAME, \

static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
{
-	rb->__rb_parent_color = rb_color(rb) | (unsigned long)p;
+	rb->__rb_parent_color = rb_color(rb) + (unsigned long)p;
}

static inline void rb_set_parent_color(struct rb_node *rb,
				       struct rb_node *p, int color)
{
-	rb->__rb_parent_color = (unsigned long)p | color;
+	rb->__rb_parent_color = (unsigned long)p + color;
}

static inline void

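The '|' to '+' swap is safe because struct rb_node is aligned, so the low bits of a parent pointer are zero and the addition can never carry into the pointer bits; '+' gives some compilers more freedom (e.g. folding into addressing arithmetic). A standalone check of the equivalence, assuming the at-least-4-byte alignment the kernel guarantees for rb_node:

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		/* rb_node parent pointers are at least 4-byte aligned ... */
		uintptr_t parent = 0x1000;
		unsigned long color = 1;	/* RB_BLACK */

		/* ... so the color bit never overlaps the pointer and + == | */
		assert((parent + color) == (parent | color));
		return 0;
	}
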
@@ -0,0 +1,69 @@
/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM notifier

#if !defined(_TRACE_NOTIFIERS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_NOTIFIERS_H

#include <linux/tracepoint.h>

DECLARE_EVENT_CLASS(notifier_info,

	TP_PROTO(void *cb),

	TP_ARGS(cb),

	TP_STRUCT__entry(
		__field(void *, cb)
	),

	TP_fast_assign(
		__entry->cb = cb;
	),

	TP_printk("%ps", __entry->cb)
);

/*
 * notifier_register - called upon notifier callback registration
 *
 * @cb: callback pointer
 *
 */
DEFINE_EVENT(notifier_info, notifier_register,

	TP_PROTO(void *cb),

	TP_ARGS(cb)
);

/*
 * notifier_unregister - called upon notifier callback unregistration
 *
 * @cb: callback pointer
 *
 */
DEFINE_EVENT(notifier_info, notifier_unregister,

	TP_PROTO(void *cb),

	TP_ARGS(cb)
);

/*
 * notifier_run - called upon notifier callback execution
 *
 * @cb: callback pointer
 *
 */
DEFINE_EVENT(notifier_info, notifier_run,

	TP_PROTO(void *cb),

	TP_ARGS(cb)
);

#endif /* _TRACE_NOTIFIERS_H */

/* This part must be outside protection */
#include <trace/define_trace.h>

@@ -28,7 +28,7 @@
#define _BITUL(x)	(_UL(1) << (x))
#define _BITULL(x)	(_ULL(1) << (x))

-#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
+#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (__typeof__(x))(a) - 1)
#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))

#define __KERNEL_DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

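The distinction matters because plain typeof is a GNU keyword that disappears under strict ISO modes (e.g. -std=c99), while __typeof__ survives them, and a uapi header must compile in userspace too. A quick illustration with a local macro mirroring __ALIGN_KERNEL's semantics (ALIGN_UP is not a uapi name):

	/* Builds with gcc -std=c99 thanks to __typeof__; plain typeof would not. */
	#define ALIGN_UP(x, a) \
		(((x) + ((__typeof__(x))(a) - 1)) & ~((__typeof__(x))(a) - 1))

	int main(void)
	{
		unsigned long addr = 0x1234;

		return ALIGN_UP(addr, 0x1000) == 0x2000 ? 0 : 1;
	}
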
@@ -34,7 +34,7 @@
 */

-#define TASKSTATS_VERSION	13
+#define TASKSTATS_VERSION	14
#define TS_COMM_LEN		32	/* should be >= TASK_COMM_LEN
					 * in linux/sched.h */
@@ -198,6 +198,10 @@ struct taskstats {
	/* v13: Delay waiting for write-protect copy */
	__u64	wpcopy_count;
	__u64	wpcopy_delay_total;
+
+	/* v14: Delay waiting for IRQ/SOFTIRQ */
+	__u64	irq_count;
+	__u64	irq_delay_total;
};

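Because struct taskstats only grows at the tail, userspace should gate on the version field before touching the v14 members. A sketch of the usual consumer pattern (the struct would arrive via the taskstats netlink interface):

	#include <linux/taskstats.h>
	#include <stdio.h>

	static void print_irq_delay(const struct taskstats *t)
	{
		/* Fields added in v14 are only valid on new enough kernels. */
		if (t->version < 14) {
			printf("IRQ delay accounting not reported by this kernel\n");
			return;
		}
		printf("irq: %llu events, %llu ns total\n",
		       (unsigned long long)t->irq_count,
		       (unsigned long long)t->irq_delay_total);
	}
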
@@ -179,12 +179,15 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
	d->compact_delay_total = (tmp < d->compact_delay_total) ? 0 : tmp;
	tmp = d->wpcopy_delay_total + tsk->delays->wpcopy_delay;
	d->wpcopy_delay_total = (tmp < d->wpcopy_delay_total) ? 0 : tmp;
+	tmp = d->irq_delay_total + tsk->delays->irq_delay;
+	d->irq_delay_total = (tmp < d->irq_delay_total) ? 0 : tmp;
	d->blkio_count += tsk->delays->blkio_count;
	d->swapin_count += tsk->delays->swapin_count;
	d->freepages_count += tsk->delays->freepages_count;
	d->thrashing_count += tsk->delays->thrashing_count;
	d->compact_count += tsk->delays->compact_count;
	d->wpcopy_count += tsk->delays->wpcopy_count;
+	d->irq_count += tsk->delays->irq_count;

	raw_spin_unlock_irqrestore(&tsk->delays->lock, flags);

	return 0;
@@ -274,3 +277,14 @@ void __delayacct_wpcopy_end(void)
			  &current->delays->wpcopy_delay,
			  &current->delays->wpcopy_count);
}
+
+void __delayacct_irq(struct task_struct *task, u32 delta)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&task->delays->lock, flags);
+	task->delays->irq_delay += delta;
+	task->delays->irq_count++;
+	raw_spin_unlock_irqrestore(&task->delays->lock, flags);
+}

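The "(tmp < d->irq_delay_total) ? 0 : tmp" idiom mirrors the other counters: if adding a delta makes an unsigned total smaller than it was, the u64 wrapped, and the total restarts from zero rather than reporting garbage. The same check in isolation:

	#include <assert.h>
	#include <stdint.h>

	/* Accumulate, resetting to 0 on u64 wraparound. */
	static uint64_t acc_clamped(uint64_t total, uint64_t delta)
	{
		uint64_t tmp = total + delta;

		return (tmp < total) ? 0 : tmp;
	}

	int main(void)
	{
		assert(acc_clamped(100, 5) == 105);
		assert(acc_clamped(UINT64_MAX, 2) == 0);	/* wrapped -> reset */
		return 0;
	}
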
@@ -28,7 +28,7 @@
/*
 * The number of tasks checked:
 */
-int __read_mostly sysctl_hung_task_check_count = PID_MAX_LIMIT;
+static int __read_mostly sysctl_hung_task_check_count = PID_MAX_LIMIT;

/*
 * Limit number of tasks checked in a batch.
@@ -47,9 +47,9 @@ unsigned long __read_mostly sysctl_hung_task_timeout_secs = CONFIG_DEFAULT_HUNG_
/*
 * Zero (default value) means use sysctl_hung_task_timeout_secs:
 */
-unsigned long __read_mostly sysctl_hung_task_check_interval_secs;
+static unsigned long __read_mostly sysctl_hung_task_check_interval_secs;

-int __read_mostly sysctl_hung_task_warnings = 10;
+static int __read_mostly sysctl_hung_task_warnings = 10;

static int __read_mostly did_panic;
static bool hung_task_show_lock;
@@ -72,8 +72,8 @@ static unsigned int __read_mostly sysctl_hung_task_all_cpu_backtrace;
 * Should we panic (and reboot, if panic_timeout= is set) when a
 * hung task is detected:
 */
-unsigned int __read_mostly sysctl_hung_task_panic =
+static unsigned int __read_mostly sysctl_hung_task_panic =
	IS_ENABLED(CONFIG_BOOTPARAM_HUNG_TASK_PANIC);

static int
hung_task_panic(struct notifier_block *this, unsigned long event, void *ptr)

@@ -65,7 +65,7 @@ int kexec_image_probe_default(struct kimage *image, void *buf,
	return ret;
}

-void *kexec_image_load_default(struct kimage *image)
+static void *kexec_image_load_default(struct kimage *image)
{
	if (!image->fops || !image->fops->load)
		return ERR_PTR(-ENOEXEC);
@@ -249,8 +249,8 @@ kimage_file_prepare_segments(struct kimage *image, int kernel_fd, int initrd_fd,
	/* IMA needs to pass the measurement list to the next kernel. */
	ima_add_kexec_buffer(image);

-	/* Call arch image load handlers */
-	ldata = arch_kexec_kernel_image_load(image);
+	/* Call image load handler */
+	ldata = kexec_image_load_default(image);

	if (IS_ERR(ldata)) {
		ret = PTR_ERR(ldata);

@@ -7,6 +7,9 @@
#include <linux/vmalloc.h>
#include <linux/reboot.h>

+#define CREATE_TRACE_POINTS
+#include <trace/events/notifier.h>
+
/*
 * Notifier list for kernel code which wants to be called
 * at shutdown. This is used to stop any idling DMA operations
@@ -37,6 +40,7 @@ static int notifier_chain_register(struct notifier_block **nl,
	}
	n->next = *nl;
	rcu_assign_pointer(*nl, n);
+	trace_notifier_register((void *)n->notifier_call);
	return 0;
}
@@ -46,6 +50,7 @@ static int notifier_chain_unregister(struct notifier_block **nl,
	while ((*nl) != NULL) {
		if ((*nl) == n) {
			rcu_assign_pointer(*nl, n->next);
+			trace_notifier_unregister((void *)n->notifier_call);
			return 0;
		}
		nl = &((*nl)->next);
@@ -84,6 +89,7 @@ static int notifier_call_chain(struct notifier_block **nl,
			continue;
		}
#endif
+		trace_notifier_run((void *)nb->notifier_call);
		ret = nb->notifier_call(nb, val, v);
		if (nr_calls)

@@ -704,6 +704,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
	rq->prev_irq_time += irq_delta;
	delta -= irq_delta;
	psi_account_irqtime(rq->curr, irq_delta);
+	delayacct_irq(rq->curr, irq_delta);
#endif
#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
	if (static_key_false((&paravirt_steal_rq_enabled))) {

@@ -163,7 +163,7 @@ out:

/**
 * build_id_parse_buf - Get build ID from a buffer
- * @buf:      Elf note section(s) to parse
+ * @buf:      ELF note section(s) to parse
 * @buf_size: Size of @buf in bytes
 * @build_id: Build ID parsed from @buf, at least BUILD_ID_SIZE_MAX long
 *

@@ -58,7 +58,7 @@

static inline void rb_set_black(struct rb_node *rb)
{
-	rb->__rb_parent_color |= RB_BLACK;
+	rb->__rb_parent_color += RB_BLACK;
}

static inline struct rb_node *rb_red_parent(struct rb_node *red)

@@ -587,7 +587,7 @@ static int __init test_string_helpers_init(void)
	for (i = 0; i < UNESCAPE_ALL_MASK + 1; i++)
		test_string_unescape("unescape", i, false);
	test_string_unescape("unescape inplace",
-			     get_random_u32_below(UNESCAPE_ANY + 1), true);
+			     get_random_u32_below(UNESCAPE_ALL_MASK + 1), true);

	/* Without dictionary */
	for (i = 0; i < ESCAPE_ALL_MASK + 1; i++)

@@ -49,6 +49,7 @@ EXPORT_SYMBOL(kfree_const);
 *
 * Return: newly allocated copy of @s or %NULL in case of error
 */
+noinline
char *kstrdup(const char *s, gfp_t gfp)
{
	size_t len;

@@ -620,6 +620,22 @@ our $signature_tags = qr{(?xi:
	Cc:
)};

+our @link_tags = qw(Link Closes);
+
+#Create a search and print patterns for all these strings to be used directly below
+our $link_tags_search = "";
+our $link_tags_print = "";
+foreach my $entry (@link_tags) {
+	if ($link_tags_search ne "") {
+		$link_tags_search .= '|';
+		$link_tags_print .= ' or ';
+	}
+	$entry .= ':';
+	$link_tags_search .= $entry;
+	$link_tags_print .= "'$entry'";
+}
+$link_tags_search = "(?:${link_tags_search})";
+
our $tracing_logging_tags = qr{(?xi:
	[=-]*> |
	<[=-]* |
@@ -3158,14 +3174,14 @@ sub process {
			}
		}

-# check if Reported-by: is followed by a Link:
+# check if Reported-by: is followed by a Closes: tag
		if ($sign_off =~ /^reported(?:|-and-tested)-by:$/i) {
			if (!defined $lines[$linenr]) {
				WARN("BAD_REPORTED_BY_LINK",
-				     "Reported-by: should be immediately followed by Link: to the report\n" . $herecurr . $rawlines[$linenr] . "\n");
-			} elsif ($rawlines[$linenr] !~ m{^link:\s*https?://}i) {
+				     "Reported-by: should be immediately followed by Closes: with a URL to the report\n" . $herecurr . "\n");
+			} elsif ($rawlines[$linenr] !~ /^closes:\s*/i) {
				WARN("BAD_REPORTED_BY_LINK",
-				     "Reported-by: should be immediately followed by Link: with a URL to the report\n" . $herecurr . $rawlines[$linenr] . "\n");
+				     "Reported-by: should be immediately followed by Closes: with a URL to the report\n" . $herecurr . $rawlines[$linenr] . "\n");
			}
		}
	}
@@ -3250,8 +3266,8 @@ sub process {
			# file delta changes
			$line =~ /^\s*(?:[\w\.\-\+]*\/)++[\w\.\-\+]+:/ ||
			# filename then :
-			$line =~ /^\s*(?:Fixes:|Link:|$signature_tags)/i ||
-			# A Fixes: or Link: line or signature tag line
+			$line =~ /^\s*(?:Fixes:|$link_tags_search|$signature_tags)/i ||
+			# A Fixes:, link or signature tag line
			$commit_log_possible_stack_dump)) {
			WARN("COMMIT_LOG_LONG_LINE",
			     "Possible unwrapped commit description (prefer a maximum 75 chars per line)\n" . $herecurr);
@@ -3266,13 +3282,24 @@ sub process {

# Check for odd tags before a URI/URL
		if ($in_commit_log &&
-		    $line =~ /^\s*(\w+):\s*http/ && $1 ne 'Link') {
+		    $line =~ /^\s*(\w+:)\s*http/ && $1 !~ /^$link_tags_search$/) {
			if ($1 =~ /^v(?:ersion)?\d+/i) {
				WARN("COMMIT_LOG_VERSIONING",
				     "Patch version information should be after the --- line\n" . $herecurr);
			} else {
				WARN("COMMIT_LOG_USE_LINK",
-				     "Unknown link reference '$1:', use 'Link:' instead\n" . $herecurr);
+				     "Unknown link reference '$1', use $link_tags_print instead\n" . $herecurr);
+			}
+		}
+
+# Check for misuse of the link tags
+		if ($in_commit_log &&
+		    $line =~ /^\s*(\w+:)\s*(\S+)/) {
+			my $tag = $1;
+			my $value = $2;
+			if ($tag =~ /^$link_tags_search$/ && $value !~ m{^https?://}) {
+				WARN("COMMIT_LOG_WRONG_LINK",
+				     "'$tag' should be followed by a public http(s) link\n" . $herecurr);
			}
		}
@@ -3736,7 +3763,7 @@ sub process {
					"'$spdx_license' is not supported in LICENSES/...\n" . $herecurr);
			}
			if ($realfile =~ m@^Documentation/devicetree/bindings/@ &&
-			    not $spdx_license =~ /GPL-2\.0.*BSD-2-Clause/) {
+			    $spdx_license !~ /GPL-2\.0(?:-only)? OR BSD-2-Clause/) {
				my $msg_level = \&WARN;
				$msg_level = \&CHK if ($file);
				if (&{$msg_level}("SPDX_LICENSE_TAG",
@@ -3746,6 +3773,11 @@ sub process {
					$fixed[$fixlinenr] =~ s/SPDX-License-Identifier: .*/SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)/;
				}
			}
+			if ($realfile =~ m@^include/dt-bindings/@ &&
+			    $spdx_license !~ /GPL-2\.0(?:-only)? OR \S+/) {
+				WARN("SPDX_LICENSE_TAG",
+				     "DT binding headers should be licensed (GPL-2.0-only OR .*)\n" . $herecurr);
+			}
		}
	}
}
@@ -5809,6 +5841,8 @@ sub process {
			    $var !~ /^(?:[A-Z]+_){1,5}[A-Z]{1,3}[a-z]/ &&
#Ignore Page<foo> variants
			    $var !~ /^(?:Clear|Set|TestClear|TestSet|)Page[A-Z]/ &&
+#Ignore ETHTOOL_LINK_MODE_<foo> variants
+			    $var !~ /^ETHTOOL_LINK_MODE_/ &&
#Ignore SI style variants like nS, mV and dB
#(ie: max_uV, regulator_min_uA_show, RANGE_mA_VALUE)
			    $var !~ /^(?:[a-z0-9_]*|[A-Z0-9_]*)?_?[a-z][A-Z](?:_[a-z0-9_]+|_[A-Z0-9_]+)?$/ &&

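In practice the new checks accept commit trailers shaped like the following (names, addresses, and URLs here are placeholders, not real references):

	Reported-by: Some Tester <tester@example.org>
	Closes: https://lore.kernel.org/all/some-report-id/
	Link: https://lore.kernel.org/all/patch-discussion-id/

Closes: must immediately follow Reported-by:, and both link tags must carry an http(s) URL, otherwise checkpatch emits BAD_REPORTED_BY_LINK or COMMIT_LOG_WRONG_LINK respectively.
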
@@ -41,6 +41,8 @@ are cached and potentially out of date"""
            self.show_subtree(child, level + 1)

    def invoke(self, arg, from_tty):
+        if utils.gdb_eval_or_none("clk_root_list") is None:
+            raise gdb.GdbError("No clocks registered")
        gdb.write("                                 enable  prepare  protect               \n")
        gdb.write("   clock                          count    count    count        rate   \n")
        gdb.write("------------------------------------------------------------------------\n")

@@ -15,8 +15,10 @@
#include <linux/clk-provider.h>
#include <linux/fs.h>
#include <linux/hrtimer.h>
+#include <linux/irq.h>
#include <linux/mount.h>
#include <linux/of_fdt.h>
+#include <linux/radix-tree.h>
#include <linux/threads.h>

/* We need to stringify expanded macros so that they can be parsed */
@@ -39,6 +41,8 @@
import gdb

+LX_CONFIG(CONFIG_DEBUG_INFO_REDUCED)
+
/* linux/clk-provider.h */
if IS_BUILTIN(CONFIG_COMMON_CLK):
    LX_GDBPARSED(CLK_GET_RATE_NOCACHE)
@@ -54,6 +58,10 @@ LX_VALUE(SB_NODIRATIME)
/* linux/htimer.h */
LX_GDBPARSED(hrtimer_resolution)

+/* linux/irq.h */
+LX_GDBPARSED(IRQD_LEVEL)
+LX_GDBPARSED(IRQ_HIDDEN)
+
/* linux/module.h */
LX_GDBPARSED(MOD_TEXT)
@@ -71,6 +79,13 @@ LX_VALUE(NR_CPUS)
/* linux/of_fdt.h> */
LX_VALUE(OF_DT_HEADER)

+/* linux/radix-tree.h */
+LX_GDBPARSED(RADIX_TREE_ENTRY_MASK)
+LX_GDBPARSED(RADIX_TREE_INTERNAL_NODE)
+LX_GDBPARSED(RADIX_TREE_MAP_SIZE)
+LX_GDBPARSED(RADIX_TREE_MAP_SHIFT)
+LX_GDBPARSED(RADIX_TREE_MAP_MASK)
+
/* Kernel Configs */
LX_CONFIG(CONFIG_GENERIC_CLOCKEVENTS)
LX_CONFIG(CONFIG_GENERIC_CLOCKEVENTS_BROADCAST)
@@ -78,3 +93,12 @@ LX_CONFIG(CONFIG_HIGH_RES_TIMERS)
LX_CONFIG(CONFIG_NR_CPUS)
LX_CONFIG(CONFIG_OF)
LX_CONFIG(CONFIG_TICK_ONESHOT)
+LX_CONFIG(CONFIG_GENERIC_IRQ_SHOW_LEVEL)
+LX_CONFIG(CONFIG_X86_LOCAL_APIC)
+LX_CONFIG(CONFIG_SMP)
+LX_CONFIG(CONFIG_X86_THERMAL_VECTOR)
+LX_CONFIG(CONFIG_X86_MCE_THRESHOLD)
+LX_CONFIG(CONFIG_X86_MCE_AMD)
+LX_CONFIG(CONFIG_X86_MCE)
+LX_CONFIG(CONFIG_X86_IO_APIC)
+LX_CONFIG(CONFIG_HAVE_KVM)

@@ -163,16 +163,22 @@ def get_current_task(cpu):
    task_ptr_type = task_type.get_type().pointer()

    if utils.is_target_arch("x86"):
-        var_ptr = gdb.parse_and_eval("&pcpu_hot.current_task")
-        return per_cpu(var_ptr, cpu).dereference()
+        if gdb.lookup_global_symbol("cpu_tasks"):
+            # This is a UML kernel, which stores the current task
+            # differently than other x86 sub architectures
+            var_ptr = gdb.parse_and_eval("(struct task_struct *)cpu_tasks[0].task")
+            return var_ptr.dereference()
+        else:
+            var_ptr = gdb.parse_and_eval("&pcpu_hot.current_task")
+            return per_cpu(var_ptr, cpu).dereference()
    elif utils.is_target_arch("aarch64"):
        current_task_addr = gdb.parse_and_eval("$SP_EL0")
-        if((current_task_addr >> 63) != 0):
+        if (current_task_addr >> 63) != 0:
            current_task = current_task_addr.cast(task_ptr_type)
            return current_task.dereference()
        else:
            raise gdb.GdbError("Sorry, obtaining the current task is not allowed "
                               "while running in userspace(EL0)")
    else:
        raise gdb.GdbError("Sorry, obtaining the current task is not yet "
                           "supported with this arch")

@@ -5,7 +5,7 @@
import gdb
import sys

-from linux.utils import CachedType
+from linux.utils import CachedType, gdb_eval_or_none
from linux.lists import list_for_each_entry

generic_pm_domain_type = CachedType('struct generic_pm_domain')
@@ -70,6 +70,8 @@ Output is similar to /sys/kernel/debug/pm_genpd/pm_genpd_summary'''
            gdb.write('    %-50s  %s\n' % (kobj_path, rtpm_status_str(dev)))

    def invoke(self, arg, from_tty):
+        if gdb_eval_or_none("&gpd_list") is None:
+            raise gdb.GdbError("No power domain(s) registered")
        gdb.write('domain                          status          children\n');
        gdb.write('    /device                                             runtime status\n');
        gdb.write('----------------------------------------------------------------------\n');

@@ -0,0 +1,232 @@
# SPDX-License-Identifier: GPL-2.0
#
# Copyright 2023 Broadcom

import gdb

from linux import constants
from linux import cpus
from linux import utils
from linux import radixtree

irq_desc_type = utils.CachedType("struct irq_desc")

def irq_settings_is_hidden(desc):
    return desc['status_use_accessors'] & constants.LX_IRQ_HIDDEN

def irq_desc_is_chained(desc):
    return desc['action'] and desc['action'] == gdb.parse_and_eval("&chained_action")

def irqd_is_level(desc):
    return desc['irq_data']['common']['state_use_accessors'] & constants.LX_IRQD_LEVEL

def show_irq_desc(prec, irq):
    text = ""

    desc = radixtree.lookup(gdb.parse_and_eval("&irq_desc_tree"), irq)
    if desc is None:
        return text

    desc = desc.cast(irq_desc_type.get_type())
    if desc is None:
        return text

    if irq_settings_is_hidden(desc):
        return text

    any_count = 0
    if desc['kstat_irqs']:
        for cpu in cpus.each_online_cpu():
            any_count += cpus.per_cpu(desc['kstat_irqs'], cpu)

    if (desc['action'] == 0 or irq_desc_is_chained(desc)) and any_count == 0:
        return text

    text += "%*d: " % (prec, irq)
    for cpu in cpus.each_online_cpu():
        if desc['kstat_irqs']:
            count = cpus.per_cpu(desc['kstat_irqs'], cpu)
        else:
            count = 0
        text += "%10u" % (count)

    name = "None"
    if desc['irq_data']['chip']:
        chip = desc['irq_data']['chip']
        if chip['name']:
            name = chip['name'].string()
        else:
            name = "-"
    text += "  %8s" % (name)

    if desc['irq_data']['domain']:
        text += " %*lu" % (prec, desc['irq_data']['hwirq'])
    else:
        text += " %*s" % (prec, "")

    if constants.LX_CONFIG_GENERIC_IRQ_SHOW_LEVEL:
        text += " %-8s" % ("Level" if irqd_is_level(desc) else "Edge")

    if desc['name']:
        text += "-%-8s" % (desc['name'].string())

    # Some toolchains may not be able to provide information about irqaction
    try:
        gdb.lookup_type("struct irqaction")
        action = desc['action']
        if action is not None:
            text += "  %s" % (action['name'].string())
            while True:
                action = action['next']
                if action is None:
                    break
                if action['name']:
                    text += ", %s" % (action['name'].string())
    except:
        pass

    text += "\n"
    return text

def show_irq_err_count(prec):
    cnt = utils.gdb_eval_or_none("irq_err_count")
    text = ""

    if cnt is not None:
        text += "%*s: %10u\n" % (prec, "ERR", cnt['counter'])
    return text

def x86_show_irqstat(prec, pfx, field, desc):
    irq_stat = gdb.parse_and_eval("&irq_stat")
    text = "%*s: " % (prec, pfx)
    for cpu in cpus.each_online_cpu():
        stat = cpus.per_cpu(irq_stat, cpu)
        text += "%10u " % (stat[field])
    text += "  %s\n" % (desc)
    return text

def x86_show_mce(prec, var, pfx, desc):
    pvar = gdb.parse_and_eval(var)
    text = "%*s: " % (prec, pfx)
    for cpu in cpus.each_online_cpu():
        text += "%10u " % (cpus.per_cpu(pvar, cpu))
    text += "  %s\n" % (desc)
    return text

def x86_show_interupts(prec):
    text = x86_show_irqstat(prec, "NMI", '__nmi_count', 'Non-maskable interrupts')

    if constants.LX_CONFIG_X86_LOCAL_APIC:
        text += x86_show_irqstat(prec, "LOC", 'apic_timer_irqs', "Local timer interrupts")
        text += x86_show_irqstat(prec, "SPU", 'irq_spurious_count', "Spurious interrupts")
        text += x86_show_irqstat(prec, "PMI", 'apic_perf_irqs', "Performance monitoring interrupts")
        text += x86_show_irqstat(prec, "IWI", 'apic_irq_work_irqs', "IRQ work interrupts")
        text += x86_show_irqstat(prec, "RTR", 'icr_read_retry_count', "APIC ICR read retries")
        if utils.gdb_eval_or_none("x86_platform_ipi_callback") is not None:
            text += x86_show_irqstat(prec, "PLT", 'x86_platform_ipis', "Platform interrupts")

    if constants.LX_CONFIG_SMP:
        text += x86_show_irqstat(prec, "RES", 'irq_resched_count', "Rescheduling interrupts")
        text += x86_show_irqstat(prec, "CAL", 'irq_call_count', "Function call interrupts")
        text += x86_show_irqstat(prec, "TLB", 'irq_tlb_count', "TLB shootdowns")

    if constants.LX_CONFIG_X86_THERMAL_VECTOR:
        text += x86_show_irqstat(prec, "TRM", 'irq_thermal_count', "Thermal events interrupts")

    if constants.LX_CONFIG_X86_MCE_THRESHOLD:
        text += x86_show_irqstat(prec, "THR", 'irq_threshold_count', "Threshold APIC interrupts")

    if constants.LX_CONFIG_X86_MCE_AMD:
        text += x86_show_irqstat(prec, "DFR", 'irq_deferred_error_count', "Deferred Error APIC interrupts")

    if constants.LX_CONFIG_X86_MCE:
        text += x86_show_mce(prec, "&mce_exception_count", "MCE", "Machine check exceptions")
        text += x86_show_mce(prec, "&mce_poll_count", "MCP", "Machine check polls")

    text += show_irq_err_count(prec)

    if constants.LX_CONFIG_X86_IO_APIC:
        cnt = utils.gdb_eval_or_none("irq_mis_count")
        if cnt is not None:
            text += "%*s: %10u\n" % (prec, "MIS", cnt['counter'])

    if constants.LX_CONFIG_HAVE_KVM:
        text += x86_show_irqstat(prec, "PIN", 'kvm_posted_intr_ipis', 'Posted-interrupt notification event')
        text += x86_show_irqstat(prec, "NPI", 'kvm_posted_intr_nested_ipis', 'Nested posted-interrupt event')
        text += x86_show_irqstat(prec, "PIW", 'kvm_posted_intr_wakeup_ipis', 'Posted-interrupt wakeup event')

    return text

def arm_common_show_interrupts(prec):
    text = ""

    nr_ipi = utils.gdb_eval_or_none("nr_ipi")
    ipi_desc = utils.gdb_eval_or_none("ipi_desc")
    ipi_types = utils.gdb_eval_or_none("ipi_types")
    if nr_ipi is None or ipi_desc is None or ipi_types is None:
        return text

    if prec >= 4:
        sep = " "
    else:
        sep = ""

    for ipi in range(nr_ipi):
        text += "%*s%u:%s" % (prec - 1, "IPI", ipi, sep)
        desc = ipi_desc[ipi].cast(irq_desc_type.get_type().pointer())
        if desc == 0:
            continue
        for cpu in cpus.each_online_cpu():
            text += "%10u" % (cpus.per_cpu(desc['kstat_irqs'], cpu))
        text += "      %s" % (ipi_types[ipi].string())
        text += "\n"

    return text

def aarch64_show_interrupts(prec):
    text = arm_common_show_interrupts(prec)
    text += "%*s: %10lu\n" % (prec, "ERR", gdb.parse_and_eval("irq_err_count"))
    return text

def arch_show_interrupts(prec):
    text = ""

    if utils.is_target_arch("x86"):
        text += x86_show_interupts(prec)
    elif utils.is_target_arch("aarch64"):
        text += aarch64_show_interrupts(prec)
    elif utils.is_target_arch("arm"):
        text += arm_common_show_interrupts(prec)
    elif utils.is_target_arch("mips"):
        text += show_irq_err_count(prec)
    else:
        raise gdb.GdbError("Unsupported architecture")

    return text

class LxInterruptList(gdb.Command):
    """Print /proc/interrupts"""

    def __init__(self):
        super(LxInterruptList, self).__init__("lx-interruptlist", gdb.COMMAND_DATA)

    def invoke(self, arg, from_tty):
        nr_irqs = gdb.parse_and_eval("nr_irqs")
        prec = 3
        j = 1000
        while prec < 10 and j <= nr_irqs:
            prec += 1
            j *= 10

        gdb.write("%*s" % (prec + 8, ""))
        for cpu in cpus.each_online_cpu():
            gdb.write("CPU%-8d" % cpu)
        gdb.write("\n")

        if utils.gdb_eval_or_none("&irq_desc_tree") is None:
            return

        for irq in range(nr_irqs):
            gdb.write(show_irq_desc(prec, irq))
        gdb.write(arch_show_interrupts(prec))

LxInterruptList()

@@ -1,3 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
#
# gdb helper commands and functions for Linux kernel debugging
#
@@ -16,6 +17,7 @@ from linux import constants
from linux import utils
from linux import tasks
from linux import lists
+from linux import vfs

from struct import *
@@ -170,16 +172,16 @@ values of that process namespace"""
        gdb.write("{:^18} {:^15} {:>9} {} {} options\n".format(
                  "mount", "super_block", "devname", "pathname", "fstype"))

-        for vfs in lists.list_for_each_entry(namespace['list'],
-                                             mount_ptr_type, "mnt_list"):
-            devname = vfs['mnt_devname'].string()
+        for mnt in lists.list_for_each_entry(namespace['list'],
+                                             mount_ptr_type, "mnt_list"):
+            devname = mnt['mnt_devname'].string()
            devname = devname if devname else "none"

            pathname = ""
-            parent = vfs
+            parent = mnt
            while True:
                mntpoint = parent['mnt_mountpoint']
-                pathname = utils.dentry_name(mntpoint) + pathname
+                pathname = vfs.dentry_name(mntpoint) + pathname
                if (parent == parent['mnt_parent']):
                    break
                parent = parent['mnt_parent']
@@ -187,14 +189,14 @@ values of that process namespace"""
            if (pathname == ""):
                pathname = "/"

-            superblock = vfs['mnt']['mnt_sb']
+            superblock = mnt['mnt']['mnt_sb']
            fstype = superblock['s_type']['name'].string()
            s_flags = int(superblock['s_flags'])
-            m_flags = int(vfs['mnt']['mnt_flags'])
+            m_flags = int(mnt['mnt']['mnt_flags'])
            rd = "ro" if (s_flags & constants.LX_SB_RDONLY) else "rw"

            gdb.write("{} {} {} {} {} {}{}{} 0 0\n".format(
-                      vfs.format_string(), superblock.format_string(), devname,
+                      mnt.format_string(), superblock.format_string(), devname,
                      pathname, fstype, rd, info_opts(FS_INFO, s_flags),
                      info_opts(MNT_INFO, m_flags)))

@@ -0,0 +1,90 @@
# SPDX-License-Identifier: GPL-2.0
#
# Radix Tree Parser
#
# Copyright (c) 2016 Linaro Ltd
# Copyright (c) 2023 Broadcom
#
# Authors:
#  Kieran Bingham <kieran.bingham@linaro.org>
#  Florian Fainelli <f.fainelli@gmail.com>

import gdb

from linux import utils
from linux import constants

radix_tree_root_type = utils.CachedType("struct xarray")
radix_tree_node_type = utils.CachedType("struct xa_node")

def is_internal_node(node):
    long_type = utils.get_long_type()
    return ((node.cast(long_type) & constants.LX_RADIX_TREE_ENTRY_MASK) == constants.LX_RADIX_TREE_INTERNAL_NODE)

def entry_to_node(node):
    long_type = utils.get_long_type()
    node_type = node.type
    indirect_ptr = node.cast(long_type) & ~constants.LX_RADIX_TREE_INTERNAL_NODE
    return indirect_ptr.cast(radix_tree_node_type.get_type().pointer())

def node_maxindex(node):
    return (constants.LX_RADIX_TREE_MAP_SIZE << node['shift']) - 1

def lookup(root, index):
    if root.type == radix_tree_root_type.get_type().pointer():
        node = root.dereference()
    elif root.type != radix_tree_root_type.get_type():
        raise gdb.GdbError("must be {} not {}"
                           .format(radix_tree_root_type.get_type(), root.type))

    node = root['xa_head']
    if node == 0:
        return None

    if not (is_internal_node(node)):
        if (index > 0):
            return None
        return node

    node = entry_to_node(node)
    maxindex = node_maxindex(node)

    if (index > maxindex):
        return None

    shift = node['shift'] + constants.LX_RADIX_TREE_MAP_SHIFT

    while True:
        offset = (index >> node['shift']) & constants.LX_RADIX_TREE_MAP_MASK
        slot = node['slots'][offset]

        if slot == 0:
            return None

        node = slot.cast(node.type.pointer()).dereference()
        if node == 0:
            return None

        shift -= constants.LX_RADIX_TREE_MAP_SHIFT
        if (shift <= 0):
            break

    return node

class LxRadixTree(gdb.Function):
    """ Lookup and return a node from a RadixTree.

$lx_radix_tree_lookup(root_node [, index]): Return the node at the given index.
If index is omitted, the root node is dereferenced and returned."""

    def __init__(self):
        super(LxRadixTree, self).__init__("lx_radix_tree_lookup")

    def invoke(self, root, index=0):
        result = lookup(root, index)
        if result is None:
            raise gdb.GdbError("No entry in tree at index {}".format(index))

        return result

LxRadixTree()

@@ -43,8 +43,7 @@ def print_timer(rb_node, idx):

def print_active_timers(base):
-    curr = base['active']['next']['node']
-    curr = curr.address.cast(rbtree.rb_node_type.get_type().pointer())
+    curr = base['active']['rb_root']['rb_leftmost']
    idx = 0
    while curr:
        yield print_timer(curr, idx)
@@ -73,7 +72,7 @@ def print_cpu(hrtimer_bases, cpu, max_clock_bases):
    ts = cpus.per_cpu(tick_sched_ptr, cpu)

    text = "cpu: {}\n".format(cpu)
-    for i in xrange(max_clock_bases):
+    for i in range(max_clock_bases):
        text += "  clock {}:\n".format(i)
        text += print_base(cpu_base['clock_base'][i])
@@ -158,6 +157,8 @@ def pr_cpumask(mask):
    num_bytes = (nr_cpu_ids + 7) / 8
    buf = utils.read_memoryview(inf, bits, num_bytes).tobytes()
    buf = binascii.b2a_hex(buf)
+    if type(buf) is not str:
+        buf = buf.decode()

    chunks = []
    i = num_bytes
@@ -173,7 +174,7 @@ def pr_cpumask(mask):
    if 0 < extra <= 4:
        chunks[0] = chunks[0][0]  # Cut off the first 0

-    return "".join(chunks)
+    return "".join(str(chunks))

class LxTimerList(gdb.Command):
@@ -187,7 +188,8 @@ class LxTimerList(gdb.Command):
        max_clock_bases = gdb.parse_and_eval("HRTIMER_MAX_CLOCK_BASES")

        text = "Timer List Version: gdb scripts\n"
-        text += "HRTIMER_MAX_CLOCK_BASES: {}\n".format(max_clock_bases)
+        text += "HRTIMER_MAX_CLOCK_BASES: {}\n".format(
+            max_clock_bases.type.fields()[max_clock_bases].enumval)
        text += "now at {} nsecs\n".format(ktime_get())

        for cpu in cpus.each_online_cpu():

@@ -88,7 +88,10 @@ def get_target_endianness():

def read_memoryview(inf, start, length):
-    return memoryview(inf.read_memory(start, length))
+    m = inf.read_memory(start, length)
+    if type(m) is memoryview:
+        return m
+    return memoryview(m)

def read_u16(buffer, offset):
@@ -193,11 +196,3 @@ def gdb_eval_or_none(expresssion):
        return gdb.parse_and_eval(expresssion)
    except gdb.error:
        return None
-
-def dentry_name(d):
-    parent = d['d_parent']
-    if parent == d or parent == 0:
-        return ""
-    p = dentry_name(d['d_parent']) + "/"
-    return p + d['d_iname'].string()

scripts/gdb/linux/vfs.py (new file):

@@ -0,0 +1,59 @@
#
# gdb helper commands and functions for Linux kernel debugging
#
#  VFS tools
#
# Copyright (c) 2023 Glenn Washburn
# Copyright (c) 2016 Linaro Ltd
#
# Authors:
#  Glenn Washburn <development@efficientek.com>
#  Kieran Bingham <kieran.bingham@linaro.org>
#
# This work is licensed under the terms of the GNU GPL version 2.
#

import gdb

from linux import utils

def dentry_name(d):
    parent = d['d_parent']
    if parent == d or parent == 0:
        return ""
    p = dentry_name(d['d_parent']) + "/"
    return p + d['d_iname'].string()

class DentryName(gdb.Function):
    """Return string of the full path of a dentry.

$lx_dentry_name(PTR): Given PTR to a dentry struct, return a string
of the full path of the dentry."""

    def __init__(self):
        super(DentryName, self).__init__("lx_dentry_name")

    def invoke(self, dentry_ptr):
        return dentry_name(dentry_ptr)

DentryName()

dentry_type = utils.CachedType("struct dentry")

class InodeDentry(gdb.Function):
    """Return dentry pointer for inode.

$lx_i_dentry(PTR): Given PTR to an inode struct, return a pointer to
the associated dentry struct, if there is one."""

    def __init__(self):
        super(InodeDentry, self).__init__("lx_i_dentry")

    def invoke(self, inode_ptr):
        d_u = inode_ptr["i_dentry"]["first"]
        if d_u == 0:
            return ""
        return utils.container_of(d_u, dentry_type.get_type().pointer(), "d_u")

InodeDentry()

@@ -22,6 +22,10 @@ except:
    gdb.write("NOTE: gdb 7.2 or later required for Linux helper scripts to "
              "work.\n")
else:
+    import linux.constants
+    if linux.constants.LX_CONFIG_DEBUG_INFO_REDUCED:
+        raise gdb.GdbError("Reduced debug information will prevent GDB "
+                           "from having complete types.\n")
    import linux.utils
    import linux.symbols
    import linux.modules
@@ -32,9 +36,11 @@ else:
    import linux.lists
    import linux.rbtree
    import linux.proc
-    import linux.constants
    import linux.timerlist
    import linux.clk
    import linux.genpd
    import linux.device
+    import linux.vfs
    import linux.mm
+    import linux.radixtree
+    import linux.interrupts

@@ -291,7 +291,7 @@ fi
if is_enabled CONFIG_KALLSYMS; then
	if ! cmp -s System.map ${kallsyms_vmlinux}.syms; then
		echo >&2 Inconsistent kallsyms data
-		echo >&2 Try "make KALLSYMS_EXTRA_PASS=1" as a workaround
+		echo >&2 'Try "make KALLSYMS_EXTRA_PASS=1" as a workaround'
		exit 1
	fi
fi

@@ -829,7 +829,7 @@ static int rt5677_parse_and_load_dsp(struct rt5677_priv *rt5677, const u8 *buf,
	if (strncmp(elf_hdr->e_ident, ELFMAG, sizeof(ELFMAG) - 1))
		dev_err(component->dev, "Wrong ELF header prefix\n");
	if (elf_hdr->e_ehsize != sizeof(Elf32_Ehdr))
-		dev_err(component->dev, "Wrong Elf header size\n");
+		dev_err(component->dev, "Wrong ELF header size\n");
	if (elf_hdr->e_machine != EM_XTENSA)
		dev_err(component->dev, "Wrong DSP code file\n");

@@ -198,17 +198,19 @@ static void print_delayacct(struct taskstats *t)
	printf("\n\nCPU   %15s%15s%15s%15s%15s\n"
	       "      %15llu%15llu%15llu%15llu%15.3fms\n"
	       "IO    %15s%15s%15s\n"
-	       "      %15llu%15llu%15llums\n"
+	       "      %15llu%15llu%15.3fms\n"
	       "SWAP  %15s%15s%15s\n"
-	       "      %15llu%15llu%15llums\n"
+	       "      %15llu%15llu%15.3fms\n"
	       "RECLAIM  %12s%15s%15s\n"
-	       "      %15llu%15llu%15llums\n"
+	       "      %15llu%15llu%15.3fms\n"
	       "THRASHING%12s%15s%15s\n"
-	       "      %15llu%15llu%15llums\n"
+	       "      %15llu%15llu%15.3fms\n"
	       "COMPACT  %12s%15s%15s\n"
-	       "      %15llu%15llu%15llums\n"
+	       "      %15llu%15llu%15.3fms\n"
	       "WPCOPY   %12s%15s%15s\n"
-	       "      %15llu%15llu%15llums\n",
+	       "      %15llu%15llu%15.3fms\n"
+	       "IRQ   %15s%15s%15s\n"
+	       "      %15llu%15llu%15.3fms\n",
	       "count", "real total", "virtual total",
	       "delay total", "delay average",
	       (unsigned long long)t->cpu_count,
@@ -219,27 +221,31 @@ static void print_delayacct(struct taskstats *t)
	       "count", "delay total", "delay average",
	       (unsigned long long)t->blkio_count,
	       (unsigned long long)t->blkio_delay_total,
-	       average_ms(t->blkio_delay_total, t->blkio_count),
+	       average_ms((double)t->blkio_delay_total, t->blkio_count),
	       "count", "delay total", "delay average",
	       (unsigned long long)t->swapin_count,
	       (unsigned long long)t->swapin_delay_total,
-	       average_ms(t->swapin_delay_total, t->swapin_count),
+	       average_ms((double)t->swapin_delay_total, t->swapin_count),
	       "count", "delay total", "delay average",
	       (unsigned long long)t->freepages_count,
	       (unsigned long long)t->freepages_delay_total,
-	       average_ms(t->freepages_delay_total, t->freepages_count),
+	       average_ms((double)t->freepages_delay_total, t->freepages_count),
	       "count", "delay total", "delay average",
	       (unsigned long long)t->thrashing_count,
	       (unsigned long long)t->thrashing_delay_total,
-	       average_ms(t->thrashing_delay_total, t->thrashing_count),
+	       average_ms((double)t->thrashing_delay_total, t->thrashing_count),
	       "count", "delay total", "delay average",
	       (unsigned long long)t->compact_count,
	       (unsigned long long)t->compact_delay_total,
-	       average_ms(t->compact_delay_total, t->compact_count),
+	       average_ms((double)t->compact_delay_total, t->compact_count),
	       "count", "delay total", "delay average",
	       (unsigned long long)t->wpcopy_count,
	       (unsigned long long)t->wpcopy_delay_total,
-	       average_ms(t->wpcopy_delay_total, t->wpcopy_count));
+	       average_ms((double)t->wpcopy_delay_total, t->wpcopy_count),
+	       "count", "delay total", "delay average",
+	       (unsigned long long)t->irq_count,
+	       (unsigned long long)t->irq_delay_total,
+	       average_ms((double)t->irq_delay_total, t->irq_count));
}

static void task_context_switch_counts(struct taskstats *t)

@@ -1,7 +1,7 @@
// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
/*
- * resolve_btfids scans Elf object for .BTF_ids section and resolves
+ * resolve_btfids scans ELF object for .BTF_ids section and resolves
 * its symbols with BTF ID values.
 *
 * Each symbol points to 4 bytes data and is expected to have

@@ -1361,7 +1361,7 @@ static int bpf_object__elf_init(struct bpf_object *obj)
		goto errout;
	}

-	/* Elf is corrupted/truncated, avoid calling elf_strptr. */
+	/* ELF is corrupted/truncated, avoid calling elf_strptr. */
	if (!elf_rawdata(elf_getscn(elf, obj->efile.shstrndx), NULL)) {
		pr_warn("elf: failed to get section names strings from %s: %s\n",
			obj->path, elf_errmsg(-1));

@@ -771,7 +771,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
	target->rel_ip = usdt_rel_ip;
	target->sema_off = usdt_sema_off;

-	/* notes.args references strings from Elf itself, so they can
+	/* notes.args references strings from ELF itself, so they can
	 * be referenced safely until elf_end() call
	 */
	target->spec_str = note.args;

@@ -213,7 +213,7 @@ Elf_Scn *elf_section_by_name(Elf *elf, GElf_Ehdr *ep,
	Elf_Scn *sec = NULL;
	size_t cnt = 1;

-	/* Elf is corrupted/truncated, avoid calling elf_strptr. */
+	/* ELF is corrupted/truncated, avoid calling elf_strptr. */
	if (!elf_rawdata(elf_getscn(elf, ep->e_shstrndx), NULL))
		return NULL;