License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results of the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0                                                11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                          930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
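For illustration, the two comment forms produced look roughly like this (as
the first line of each file; shown here as a sketch of the usual kernel
convention, since the exact script output is not reproduced above):
    // SPDX-License-Identifier: GPL-2.0        (in .c source files)
    /* SPDX-License-Identifier: GPL-2.0 */     (in header files)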
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 14:07:57 +00:00
/* SPDX-License-Identifier: GPL-2.0 */
kasan: add kernel address sanitizer infrastructure
Kernel Address sanitizer (KASan) is a dynamic memory error detector. It
provides a fast and comprehensive solution for finding use-after-free and
out-of-bounds bugs.
KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC > v4.9.2 is required. v4.9.2 almost works, but has issues with
putting symbol aliases into the wrong section, which breaks kasan
instrumentation of globals.
This patch only adds infrastructure for kernel address sanitizer. It's
not available for use yet. The idea and some code was borrowed from [1].
Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte
of memory is safe to access or not, and use compiler's instrumentation to
check the shadow memory on each memory access.
Address sanitizer uses 1/8 of the memory addressable in kernel for shadow
memory and uses direct mapping with a scale and offset to translate a
memory address to its corresponding shadow address.
Here is the function to translate an address to its corresponding shadow address:
unsigned long kasan_mem_to_shadow(unsigned long addr)
{
return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}
where KASAN_SHADOW_SCALE_SHIFT = 3.
So for every 8 bytes there is one corresponding byte of shadow memory.
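For example, assuming the x86_64 KASAN_SHADOW_OFFSET of 0xdffffc0000000000,
the address 0xffff880000000000 maps to the shadow byte at
(0xffff880000000000 >> 3) + 0xdffffc0000000000 = 0xffffed0000000000.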
The following encoding is used for each shadow byte: 0 means that all 8 bytes
of the corresponding memory region are valid for access; k (1 <= k <= 7)
means that the first k bytes are valid for access, and the other (8 - k) bytes
are not; any negative value indicates that the entire 8-byte region is
inaccessible. Different negative values are used to distinguish between
different kinds of inaccessible memory (redzones, freed memory) (see
mm/kasan/kasan.h).
To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr),
__asan_store*(addr)) before each memory access of size 1, 2, 4, 8 or 16.
These functions check whether the memory region is valid to access by
checking the corresponding shadow memory. If the access is not valid, an
error is printed.
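As an illustrative sketch only (the helper name is made up and this is not
the exact code the compiler or runtime uses), a check of a 1..8 byte access
that stays within one 8-byte granule could look like:
static bool sketch_memory_is_poisoned(unsigned long addr, size_t size)
{
	s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

	if (shadow == 0)
		return false;	/* all 8 bytes of the granule are accessible */

	/* poisoned if the last byte touched lies outside the valid prefix */
	return (s8)((addr & 7) + size - 1) >= shadow;
}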
Historical background of the address sanitizer from Dmitry Vyukov:
"We've developed the set of tools, AddressSanitizer (Asan),
ThreadSanitizer and MemorySanitizer, for user space. We actively use
them for testing inside of Google (continuous testing, fuzzing,
running prod services). To date the tools have found more than 10'000
scary bugs in Chromium, Google internal codebase and various
open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
lots of others): [2] [3] [4].
The tools are part of both gcc and clang compilers.
We have not yet done massive testing under the Kernel AddressSanitizer
(it's kind of chicken and egg problem, you need it to be upstream to
start applying it extensively). To date it has found about 50 bugs.
Bugs that we've found in upstream kernel are listed in [5].
We've also found ~20 bugs in our internal version of the kernel. Also
people from Samsung and Oracle have found some.
[...]
As others noted, the main feature of AddressSanitizer is its
performance due to inline compiler instrumentation and simple linear
shadow memory. User-space Asan has ~2x slowdown on computational
programs and ~2x memory consumption increase. Taking into account that
kernel usually consumes only small fraction of CPU and memory when
running real user-space programs, I would expect that kernel Asan will
have ~10-30% slowdown and similar memory consumption increase (when we
finish all tuning).
I agree that Asan can well replace kmemcheck. We have plans to start
working on Kernel MemorySanitizer that finds uses of uninitialized
memory. Asan+Msan will provide feature-parity with kmemcheck. As
others noted, Asan will unlikely replace debug slab and pagealloc that
can be enabled at runtime. Asan uses compiler instrumentation, so even
if it is disabled, it still incurs visible overheads.
Asan technology is easily portable to other architectures. Compiler
instrumentation is fully portable. Runtime has some arch-dependent
parts like shadow mapping and atomic operation interception. They are
relatively easy to port."
Comparison with other debugging features:
========================================
KMEMCHECK:
- KASan can do almost everything that kmemcheck can. KASan uses
compile-time instrumentation, which makes it significantly faster than
kmemcheck. The only advantage of kmemcheck over KASan is detection of
uninitialized memory reads.
Some brief performance testing showed that kasan could be
x500-x600 times faster than kmemcheck:
$ netperf -l 30
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
                Recv     Send     Send
                Socket   Socket   Message   Elapsed
                Size     Size     Size      Time      Throughput
                bytes    bytes    bytes     secs.     10^6bits/sec
no debug:       87380    16384    16384     30.00     41624.72
kasan inline:   87380    16384    16384     30.00     12870.54
kasan outline:  87380    16384    16384     30.00     10586.39
kmemcheck:      87380    16384    16384     30.03         20.23
- Also kmemcheck couldn't work on several CPUs. It always sets
number of CPUs to 1. KASan doesn't have such limitation.
DEBUG_PAGEALLOC:
- KASan is slower than DEBUG_PAGEALLOC, but KASan works on sub-page
granularity level, so it is able to find more bugs.
SLUB_DEBUG (poisoning, redzones):
- SLUB_DEBUG has lower overhead than KASan.
- SLUB_DEBUG in most cases is not able to detect bad reads,
while KASan is able to detect both reads and writes.
- In some cases (e.g. redzone overwritten) SLUB_DEBUG detects
bugs only on allocation/freeing of the object. KASan catches
bugs right before they happen, so we always know the exact
place of the first bad read/write.
[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
Based on work by Andrey Konovalov.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-02-13 22:39:17 +00:00
#ifndef __MM_KASAN_KASAN_H
#define __MM_KASAN_KASAN_H

#include <linux/kasan.h>
arm64: kasan: mte: use a constant kernel GCR_EL1 value
When KASAN_HW_TAGS is selected, KASAN is enabled at boot time, and the
hardware supports MTE, we'll initialize `kernel_gcr_excl` with a value
dependent on KASAN_TAG_MAX. While the resulting value is a constant
which depends on KASAN_TAG_MAX, we have to perform some runtime work to
generate the value, and have to read the value from memory during the
exception entry path. It would be better if we could generate this as a
constant at compile-time, and use it as such directly.
Early in boot within __cpu_setup(), we initialize GCR_EL1 to a safe
value, and later override this with the value required by KASAN. If
CONFIG_KASAN_HW_TAGS is not selected, or if KASAN is disabled at boot
time, the kernel will not use IRG instructions, and so the initial value
of GCR_EL1 does not matter to the kernel. Thus, we can instead have
__cpu_setup() initialize GCR_EL1 to a value consistent with
KASAN_TAG_MAX, and avoid the need to re-initialize it during hotplug and
resume from suspend.
This patch makes arm64 use a compile-time constant KERNEL_GCR_EL1
value, which is compatible with KASAN_HW_TAGS when this is selected.
This removes the need to re-initialize GCR_EL1 dynamically, and acts as
an optimization to the entry assembly, which no longer needs to load
this value from memory. The redundant initialization hooks are removed.
In order to do this, KASAN_TAG_MAX needs to be visible outside of the
core KASAN code. To do this, I've moved the KASAN_TAG_* values into
<linux/kasan-tags.h>.
There should be no functional change as a result of this patch.
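As a rough sketch of the idea (the SKETCH_* macro names are illustrative
assumptions, not the names used by the patch; it also assumes a
KASAN_TAG_MIN/KASAN_TAG_MAX pair is visible via <linux/kasan-tags.h>):
/* Exclude every MTE tag outside the range KASAN may assign. */
#define SKETCH_MTE_TAG_INCL	GENMASK(KASAN_TAG_MAX & 0xf, KASAN_TAG_MIN & 0xf)
#define SKETCH_GCR_EL1_EXCL	(SYS_GCR_EL1_EXCL_MASK & ~SKETCH_MTE_TAG_INCL)
#define SKETCH_KERNEL_GCR_EL1	(SYS_GCR_EL1_RRND | SKETCH_GCR_EL1_EXCL)
__cpu_setup() can then program GCR_EL1 from this compile-time constant, with
no per-boot computation and no memory load on the exception entry path.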
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20210714143843.56537-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-14 14:38:42 +00:00
#include <linux/kasan-tags.h>
#include <linux/kfence.h>
#include <linux/stackdepot.h>
#ifdef CONFIG_KASAN_HW_TAGS

#include <linux/static_key.h>

#include "../slab.h"

DECLARE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
DECLARE_STATIC_KEY_TRUE(kasan_flag_stacktrace);

enum kasan_mode {
	KASAN_MODE_SYNC,
	KASAN_MODE_ASYNC,
	KASAN_MODE_ASYMM,
};

extern enum kasan_mode kasan_mode __ro_after_init;

static inline bool kasan_vmalloc_enabled(void)
{
	return static_branch_likely(&kasan_flag_vmalloc);
}

static inline bool kasan_stack_collection_enabled(void)
{
	return static_branch_unlikely(&kasan_flag_stacktrace);
}

static inline bool kasan_async_fault_possible(void)
{
	return kasan_mode == KASAN_MODE_ASYNC || kasan_mode == KASAN_MODE_ASYMM;
}

static inline bool kasan_sync_fault_possible(void)
{
	return kasan_mode == KASAN_MODE_SYNC || kasan_mode == KASAN_MODE_ASYMM;
}

#else

static inline bool kasan_stack_collection_enabled(void)
{
	return true;
}

static inline bool kasan_async_fault_possible(void)
{
	return false;
}

static inline bool kasan_sync_fault_possible(void)
{
	return true;
}

#endif

#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#else
#include <asm/mte-kasan.h>
#define KASAN_GRANULE_SIZE MTE_GRANULE_SIZE
#endif

#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)
#define KASAN_MEMORY_PER_SHADOW_PAGE (KASAN_GRANULE_SIZE << PAGE_SHIFT)

#ifdef CONFIG_KASAN_GENERIC
#define KASAN_PAGE_FREE 0xFF /* freed page */
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocation */
#define KASAN_SLAB_REDZONE 0xFC /* redzone for slab object */
#define KASAN_SLAB_FREE 0xFB /* freed slab object */
#define KASAN_VMALLOC_INVALID 0xF8 /* inaccessible space in vmap area */
#else
#define KASAN_PAGE_FREE KASAN_TAG_INVALID
#define KASAN_PAGE_REDZONE KASAN_TAG_INVALID
#define KASAN_SLAB_REDZONE KASAN_TAG_INVALID
#define KASAN_SLAB_FREE KASAN_TAG_INVALID
#define KASAN_VMALLOC_INVALID KASAN_TAG_INVALID /* only used for SW_TAGS */
#endif

#ifdef CONFIG_KASAN_GENERIC

#define KASAN_SLAB_FREETRACK 0xFA /* freed slab object with free track */
#define KASAN_GLOBAL_REDZONE 0xF9 /* redzone for global variable */

/* Stack redzone shadow values. Compiler ABI, do not change. */
#define KASAN_STACK_LEFT 0xF1
#define KASAN_STACK_MID 0xF2
#define KASAN_STACK_RIGHT 0xF3
#define KASAN_STACK_PARTIAL 0xF4

/* alloca redzone shadow values. */
#define KASAN_ALLOCA_LEFT 0xCA
#define KASAN_ALLOCA_RIGHT 0xCB

/* alloca redzone size. Compiler ABI, do not change. */
#define KASAN_ALLOCA_REDZONE_SIZE 32

/* Stack frame marker. Compiler ABI, do not change. */
#define KASAN_CURRENT_STACK_FRAME_MAGIC 0x41B58AB3

/* Dummy value to avoid breaking randconfig/all*config builds. */
#ifndef KASAN_ABI_VERSION
#define KASAN_ABI_VERSION 1
#endif

#endif /* CONFIG_KASAN_GENERIC */

/* Metadata layout customization. */
#define META_BYTES_PER_BLOCK 1
#define META_BLOCKS_PER_ROW 16
#define META_BYTES_PER_ROW (META_BLOCKS_PER_ROW * META_BYTES_PER_BLOCK)
#define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE)
#define META_ROWS_AROUND_ADDR 2

enum kasan_report_type {
	KASAN_REPORT_ACCESS,
	KASAN_REPORT_INVALID_FREE,
	KASAN_REPORT_DOUBLE_FREE,
};

struct kasan_report_info {
	enum kasan_report_type type;
	void *access_addr;
	void *first_bad_addr;
	size_t access_size;
	bool is_write;
	unsigned long ip;
};

/* Do not change the struct layout: compiler ABI. */
struct kasan_source_location {
	const char *filename;
	int line_no;
	int column_no;
};

/* Do not change the struct layout: compiler ABI. */
struct kasan_global {
	const void *beg; /* Address of the beginning of the global variable. */
	size_t size; /* Size of the global variable. */
	size_t size_with_redzone; /* Size of the variable + size of the redzone. 32 bytes aligned. */
	const void *name;
	const void *module_name; /* Name of the module where the global variable is declared. */
	unsigned long has_dynamic_init; /* This is needed for C++. */
#if KASAN_ABI_VERSION >= 4
	struct kasan_source_location *location;
#endif
#if KASAN_ABI_VERSION >= 5
	char *odr_indicator;
#endif
};

/* Structures for keeping alloc and free tracks. */

#define KASAN_STACK_DEPTH 64

struct kasan_track {
	u32 pid;
	depot_stack_handle_t stack;
};

#if defined(CONFIG_KASAN_TAGS_IDENTIFY) && defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_NR_FREE_STACKS 5
#else
#define KASAN_NR_FREE_STACKS 1
#endif

struct kasan_alloc_meta {
	struct kasan_track alloc_track;
	/* Generic mode stores free track in kasan_free_meta. */
#ifdef CONFIG_KASAN_GENERIC
	depot_stack_handle_t aux_stack[2];
#else
	struct kasan_track free_track[KASAN_NR_FREE_STACKS];
#endif
#ifdef CONFIG_KASAN_TAGS_IDENTIFY
	u8 free_pointer_tag[KASAN_NR_FREE_STACKS];
	u8 free_track_idx;
#endif
};
mm: kasan: initial memory quarantine implementation
Quarantine isolates freed objects in a separate queue. The objects are
returned to the allocator later, which helps to detect use-after-free
errors.
When the object is freed, its state changes from KASAN_STATE_ALLOC to
KASAN_STATE_QUARANTINE. The object is poisoned and put into quarantine
instead of being returned to the allocator, therefore every subsequent
access to that object triggers a KASAN error, and the error handler is
able to say where the object has been allocated and deallocated.
When it's time for the object to leave quarantine, its state becomes
KASAN_STATE_FREE and it's returned to the allocator. From now on the
allocator may reuse it for another allocation. Before that happens,
it's still possible to detect a use-after-free on that object (it
retains the allocation/deallocation stacks).
When the allocator reuses this object, the shadow is unpoisoned and old
allocation/deallocation stacks are wiped. Therefore a use of this
object, even an incorrect one, won't trigger ASan warning.
Without the quarantine, it's not guaranteed that the objects aren't
reused immediately, that's why the probability of catching a
use-after-free is lower than with quarantine in place.
Freed objects are first added to per-cpu quarantine queues. When a
cache is destroyed or memory shrinking is requested, the objects are
moved into the global quarantine queue. Whenever a kmalloc call allows
memory reclaiming, the oldest objects are popped out of the global queue
until the total size of objects in quarantine is less than 3/4 of the
maximum quarantine size (which is a fraction of installed physical
memory).
As long as an object remains in the quarantine, KASAN is able to report
accesses to it, so the chance of reporting a use-after-free is
increased. Once the object leaves quarantine, the allocator may reuse
it, in which case the object is unpoisoned and KASAN can't detect
incorrect accesses to it.
Right now quarantine support is only enabled in the SLAB allocator.
Unification of KASAN features in SLAB and SLUB will be done later.
This patch is based on the "mm: kasan: quarantine" patch originally
prepared by Dmitry Chernenkov. A number of improvements have been
suggested by Andrey Ryabinin.
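A much simplified sketch of that flow (all helper names, variables and sizes
below are illustrative, not the actual mm/kasan code):
/* Defer frees through a per-CPU queue that spills into a global queue. */
static void sketch_quarantine_put(struct qlist_node *qlink, size_t size)
{
	struct qlist *q = this_cpu_ptr(&cpu_quarantine);

	qlist_put(q, qlink, size);		/* do not free the object yet */
	if (q->bytes > QUARANTINE_PERCPU_SIZE)	/* batch into the global queue */
		qlist_move_all(q, &global_quarantine);
}

static void sketch_quarantine_reduce(void)
{
	/* Pop the oldest objects until we drop below 3/4 of the limit. */
	while (global_quarantine.bytes > quarantine_size * 3 / 4)
		qlink_free(qlist_pop(&global_quarantine));
}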
[glider@google.com: v9]
Link: http://lkml.kernel.org/r/1462987130-144092-1-git-send-email-glider@google.com
Signed-off-by: Alexander Potapenko <glider@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-20 23:59:11 +00:00
struct qlist_node {
	struct qlist_node *next;
};

/*
 * Free meta is stored either in the object itself or in the redzone after the
 * object. In the former case, free meta offset is 0. In the latter case, the
 * offset is between 0 and INT_MAX. INT_MAX marks that free meta is not present.
 */
#define KASAN_NO_FREE_META INT_MAX

/*
 * Free meta is only used by Generic mode while the object is in quarantine.
 * After that, slab allocator stores the freelist pointer in the object.
 */
struct kasan_free_meta {
#ifdef CONFIG_KASAN_GENERIC
	struct qlist_node quarantine_link;
	struct kasan_track free_track;
#endif
};

#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
/* Used in KUnit-compatible KASAN tests. */
struct kunit_kasan_status {
	bool report_found;
	bool sync_fault;
};
#endif

struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
						const void *object);
#ifdef CONFIG_KASAN_GENERIC
struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
						const void *object);
#endif

#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-02-13 22:39:17 +00:00
|
|
|

static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
{
	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
		<< KASAN_SHADOW_SCALE_SHIFT);
}

static inline bool addr_has_metadata(const void *addr)
{
	return (kasan_reset_tag(addr) >=
		kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
}

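/*
 * Illustrative sketch (assumption): kasan_shadow_to_mem() above is the
 * inverse of kasan_mem_to_shadow() from <linux/kasan.h>, where one shadow
 * byte describes KASAN_GRANULE_SIZE bytes of memory:
 *
 *	const void *shadow = kasan_mem_to_shadow(ptr);
 *	const void *mem = kasan_shadow_to_mem(shadow);
 *	// mem equals ptr rounded down to KASAN_GRANULE_SIZE
 */
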
/**
 * kasan_check_range - Check memory region, and report if invalid access.
 * @addr: the accessed address
 * @size: the accessed size
 * @write: true if access is a write access
 * @ret_ip: return address
 * @return: true if access was valid, false if invalid
 */
bool kasan_check_range(unsigned long addr, size_t size, bool write,
				unsigned long ret_ip);

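/*
 * Illustrative sketch (assumption): callers usually pass _RET_IP_ so a
 * report points at the instrumented call site, e.g. in a read hook:
 *
 *	if (!kasan_check_range((unsigned long)src, len, false, _RET_IP_))
 *		return false;	// invalid access already reported
 */
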
#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

static inline bool addr_has_metadata(const void *addr)
{
	return (is_vmalloc_addr(addr) || virt_addr_valid(addr));
}

#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
void kasan_print_tags(u8 addr_tag, const void *addr);
#else
static inline void kasan_print_tags(u8 addr_tag, const void *addr) { }
#endif

void *kasan_find_first_bad_addr(void *addr, size_t size);
const char *kasan_get_bug_type(struct kasan_report_info *info);
void kasan_metadata_fetch_row(char *buffer, void *row);

#if defined(CONFIG_KASAN_STACK)
void kasan_print_address_stack_frame(const void *addr);
#else
static inline void kasan_print_address_stack_frame(const void *addr) { }
#endif

bool kasan_report(unsigned long addr, size_t size,
		bool is_write, unsigned long ip);
void kasan_report_invalid_free(void *object, unsigned long ip, enum kasan_report_type type);

struct page *kasan_addr_to_page(const void *addr);
struct slab *kasan_addr_to_slab(const void *addr);

depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
void kasan_set_free_info(struct kmem_cache *cache, void *object, u8 tag);
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
				void *object, u8 tag);

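/*
 * Illustrative sketch (assumption about the .c side): the alloc/free paths
 * record where an object was last touched by saving a compressed stack
 * trace into a kasan_track, roughly:
 *
 *	track->pid = current->pid;
 *	track->stack = kasan_save_stack(flags, true);
 */
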
#if defined(CONFIG_KASAN_GENERIC) && \
	(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
bool kasan_quarantine_put(struct kmem_cache *cache, void *object);
void kasan_quarantine_reduce(void);
void kasan_quarantine_remove_cache(struct kmem_cache *cache);
#else
static inline bool kasan_quarantine_put(struct kmem_cache *cache, void *object) { return false; }
static inline void kasan_quarantine_reduce(void) { }
static inline void kasan_quarantine_remove_cache(struct kmem_cache *cache) { }
#endif
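
/*
 * Illustrative sketch (assumption): on the free path, generic KASAN first
 * tries to stash the object in the quarantine instead of releasing it, so
 * later accesses still hit poisoned memory:
 *
 *	if (kasan_quarantine_put(cache, object))
 *		return;		// object freed for real only later
 *	// otherwise fall through and return the object to the allocator
 */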

#ifndef arch_kasan_set_tag
static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
{
	return addr;
}
#endif
#ifndef arch_kasan_get_tag
#define arch_kasan_get_tag(addr)	0
#endif

#define set_tag(addr, tag)	((void *)arch_kasan_set_tag((addr), (tag)))
#define get_tag(addr)		arch_kasan_get_tag(addr)

#ifdef CONFIG_KASAN_HW_TAGS

#ifndef arch_enable_tagging_sync
#define arch_enable_tagging_sync()
#endif
#ifndef arch_enable_tagging_async
#define arch_enable_tagging_async()
#endif
#ifndef arch_enable_tagging_asymm
#define arch_enable_tagging_asymm()
#endif
#ifndef arch_force_async_tag_fault
#define arch_force_async_tag_fault()
#endif
#ifndef arch_get_random_tag
#define arch_get_random_tag()	(0xFF)
#endif
#ifndef arch_get_mem_tag
#define arch_get_mem_tag(addr)	(0xFF)
#endif
#ifndef arch_set_mem_tag_range
#define arch_set_mem_tag_range(addr, size, tag, init) ((void *)(addr))
#endif

#define hw_enable_tagging_sync()		arch_enable_tagging_sync()
#define hw_enable_tagging_async()		arch_enable_tagging_async()
#define hw_enable_tagging_asymm()		arch_enable_tagging_asymm()
#define hw_force_async_tag_fault()		arch_force_async_tag_fault()
#define hw_get_random_tag()			arch_get_random_tag()
#define hw_get_mem_tag(addr)			arch_get_mem_tag(addr)
#define hw_set_mem_tag_range(addr, size, tag, init) \
			arch_set_mem_tag_range((addr), (size), (tag), (init))

void kasan_enable_tagging(void);

#else /* CONFIG_KASAN_HW_TAGS */

#define hw_enable_tagging_sync()
#define hw_enable_tagging_async()
#define hw_enable_tagging_asymm()

static inline void kasan_enable_tagging(void) { }

#endif /* CONFIG_KASAN_HW_TAGS */

#if defined(CONFIG_KASAN_HW_TAGS) && IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_force_async_fault(void);

#else /* CONFIG_KASAN_HW_TAGS && CONFIG_KASAN_KUNIT_TEST */

static inline void kasan_force_async_fault(void) { }

#endif /* CONFIG_KASAN_HW_TAGS && CONFIG_KASAN_KUNIT_TEST */

#ifdef CONFIG_KASAN_SW_TAGS
u8 kasan_random_tag(void);
#elif defined(CONFIG_KASAN_HW_TAGS)
static inline u8 kasan_random_tag(void) { return hw_get_random_tag(); }
#else
static inline u8 kasan_random_tag(void) { return 0; }
#endif
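
/*
 * Illustrative sketch (assumption): in the tag-based modes the pointer tag
 * lives in the top byte of the address (via arch_kasan_set_tag() on arm64),
 * so an object can be retagged on allocation roughly as:
 *
 *	u8 tag = kasan_random_tag();
 *	void *tagged = set_tag(object, tag);
 *	// get_tag(tagged) == tag; kasan_reset_tag(tagged) strips it again
 */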

#ifdef CONFIG_KASAN_HW_TAGS

static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
{
	addr = kasan_reset_tag(addr);

	/* Skip KFENCE memory if called explicitly outside of sl*b. */
	if (is_kfence_address(addr))
		return;

	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
		return;
	if (WARN_ON(size & KASAN_GRANULE_MASK))
		return;

	hw_set_mem_tag_range((void *)addr, size, value, init);
}

static inline void kasan_unpoison(const void *addr, size_t size, bool init)
{
	u8 tag = get_tag(addr);

	addr = kasan_reset_tag(addr);

	/* Skip KFENCE memory if called explicitly outside of sl*b. */
	if (is_kfence_address(addr))
		return;

	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
		return;
	/*
	 * Explicitly initialize the memory with the precise object size to
	 * avoid overwriting the slab redzone. This disables initialization in
	 * the arch code and may thus lead to performance penalty. This penalty
	 * does not affect production builds, as slab redzones are not enabled
	 * there.
	 */
	if (__slub_debug_enabled() &&
	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
		init = false;
		memzero_explicit((void *)addr, size);
	}
	size = round_up(size, KASAN_GRANULE_SIZE);

	hw_set_mem_tag_range((void *)addr, size, tag, init);
}

static inline bool kasan_byte_accessible(const void *addr)
{
	u8 ptr_tag = get_tag(addr);
	u8 mem_tag = hw_get_mem_tag((void *)addr);

	return ptr_tag == KASAN_TAG_KERNEL || ptr_tag == mem_tag;
}

#else /* CONFIG_KASAN_HW_TAGS */

/**
 * kasan_poison - mark the memory range as inaccessible
 * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
 * @size - range size, must be aligned to KASAN_GRANULE_SIZE
 * @value - value that's written to metadata for the range
 * @init - whether to initialize the memory range (only for hardware tag-based)
 *
 * The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
 */
void kasan_poison(const void *addr, size_t size, u8 value, bool init);

/**
 * kasan_unpoison - mark the memory range as accessible
 * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
 * @size - range size, can be unaligned
 * @init - whether to initialize the memory range (only for hardware tag-based)
 *
 * For the tag-based modes, the @size gets aligned to KASAN_GRANULE_SIZE before
 * marking the range.
 * For the generic mode, the last granule of the memory range gets partially
 * unpoisoned based on the @size.
 */
void kasan_unpoison(const void *addr, size_t size, bool init);

bool kasan_byte_accessible(const void *addr);
#endif /* CONFIG_KASAN_HW_TAGS */
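
/*
 * Illustrative sketch (assumption): a typical caller poisons an object's
 * range with one of the poison values defined earlier in this header when
 * the object is freed, and unpoisons it again on (re)allocation; the size
 * passed to kasan_poison() must already be KASAN_GRANULE_SIZE aligned:
 *
 *	kasan_poison(object, round_up(size, KASAN_GRANULE_SIZE), value, false);
 *	...
 *	kasan_unpoison(object, size, init);
 */
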
#ifdef CONFIG_KASAN_GENERIC

/**
 * kasan_poison_last_granule - mark the last granule of the memory range as
 * inaccessible
 * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
 * @size - range size
 *
 * This function is only available for the generic mode, as it's the only mode
 * that has partially poisoned memory granules.
 */
void kasan_poison_last_granule(const void *address, size_t size);

#else /* CONFIG_KASAN_GENERIC */

static inline void kasan_poison_last_granule(const void *address, size_t size) { }

#endif /* CONFIG_KASAN_GENERIC */

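/*
 * Worked example (assumption): with 8-byte granules, an 11-byte object spans
 * one full granule plus 3 bytes of a second one. A call such as
 *
 *	kasan_poison_last_granule(object, 11);
 *
 * encodes the value 3 in the shadow byte of that second granule, so offsets
 * 8..10 remain accessible while offsets 11..15 are reported as invalid.
 */
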
#ifndef kasan_arch_is_ready
static inline bool kasan_arch_is_ready(void)	{ return true; }
#elif !defined(CONFIG_KASAN_GENERIC) || !defined(CONFIG_KASAN_OUTLINE)
#error kasan_arch_is_ready only works in KASAN generic outline mode!
#endif

#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST) || IS_ENABLED(CONFIG_KASAN_MODULE_TEST)

bool kasan_save_enable_multi_shot(void);
void kasan_restore_multi_shot(bool enabled);

#endif

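/*
 * Illustrative sketch (assumption): tests that deliberately trigger several
 * reports bracket the noisy section with the multi-shot helpers:
 *
 *	bool multishot = kasan_save_enable_multi_shot();
 *	...			// provoke more than one bad access
 *	kasan_restore_multi_shot(multishot);
 */
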
/*
 * Exported functions for interfaces called from assembly or from generated
 * code. Declared here to avoid warnings about missing declarations.
 */

asmlinkage void kasan_unpoison_task_stack_below(const void *watermark);
void __asan_register_globals(struct kasan_global *globals, size_t size);
void __asan_unregister_globals(struct kasan_global *globals, size_t size);
void __asan_handle_no_return(void);
void __asan_alloca_poison(unsigned long addr, size_t size);
void __asan_allocas_unpoison(const void *stack_top, const void *stack_bottom);

void __asan_load1(unsigned long addr);
void __asan_store1(unsigned long addr);
void __asan_load2(unsigned long addr);
void __asan_store2(unsigned long addr);
void __asan_load4(unsigned long addr);
void __asan_store4(unsigned long addr);
void __asan_load8(unsigned long addr);
void __asan_store8(unsigned long addr);
void __asan_load16(unsigned long addr);
void __asan_store16(unsigned long addr);
void __asan_loadN(unsigned long addr, size_t size);
void __asan_storeN(unsigned long addr, size_t size);

void __asan_load1_noabort(unsigned long addr);
void __asan_store1_noabort(unsigned long addr);
void __asan_load2_noabort(unsigned long addr);
void __asan_store2_noabort(unsigned long addr);
void __asan_load4_noabort(unsigned long addr);
void __asan_store4_noabort(unsigned long addr);
void __asan_load8_noabort(unsigned long addr);
void __asan_store8_noabort(unsigned long addr);
void __asan_load16_noabort(unsigned long addr);
void __asan_store16_noabort(unsigned long addr);
void __asan_loadN_noabort(unsigned long addr, size_t size);
void __asan_storeN_noabort(unsigned long addr, size_t size);

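/*
 * Illustrative sketch: the compiler instruments plain memory accesses with
 * calls to this family, so a 4-byte load such as
 *
 *	val = *p;
 *
 * is roughly compiled as
 *
 *	__asan_load4((unsigned long)p);
 *	val = *p;
 *
 * The _noabort variants and the __asan_report_*() helpers below exist
 * because the kernel reports and continues rather than aborting.
 */
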
void __asan_report_load1_noabort(unsigned long addr);
void __asan_report_store1_noabort(unsigned long addr);
void __asan_report_load2_noabort(unsigned long addr);
void __asan_report_store2_noabort(unsigned long addr);
void __asan_report_load4_noabort(unsigned long addr);
void __asan_report_store4_noabort(unsigned long addr);
void __asan_report_load8_noabort(unsigned long addr);
void __asan_report_store8_noabort(unsigned long addr);
void __asan_report_load16_noabort(unsigned long addr);
void __asan_report_store16_noabort(unsigned long addr);
void __asan_report_load_n_noabort(unsigned long addr, size_t size);
void __asan_report_store_n_noabort(unsigned long addr, size_t size);

void __asan_set_shadow_00(const void *addr, size_t size);
void __asan_set_shadow_f1(const void *addr, size_t size);
void __asan_set_shadow_f2(const void *addr, size_t size);
void __asan_set_shadow_f3(const void *addr, size_t size);
void __asan_set_shadow_f5(const void *addr, size_t size);
void __asan_set_shadow_f8(const void *addr, size_t size);

void __hwasan_load1_noabort(unsigned long addr);
void __hwasan_store1_noabort(unsigned long addr);
void __hwasan_load2_noabort(unsigned long addr);
void __hwasan_store2_noabort(unsigned long addr);
void __hwasan_load4_noabort(unsigned long addr);
void __hwasan_store4_noabort(unsigned long addr);
void __hwasan_load8_noabort(unsigned long addr);
void __hwasan_store8_noabort(unsigned long addr);
void __hwasan_load16_noabort(unsigned long addr);
void __hwasan_store16_noabort(unsigned long addr);
void __hwasan_loadN_noabort(unsigned long addr, size_t size);
void __hwasan_storeN_noabort(unsigned long addr, size_t size);

void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size);

#endif /* __MM_KASAN_KASAN_H */