The Kernel Address Sanitizer (KASAN)
====================================

Overview
--------

KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to
find out-of-bounds and use-after-free bugs. KASAN has two modes: generic KASAN
(similar to userspace ASan) and software tag-based KASAN (similar to userspace
HWASan).

KASAN uses compile-time instrumentation to insert validity checks before every
memory access, and therefore requires a compiler version that supports that.

Generic KASAN is supported in both GCC and Clang. With GCC it requires version
8.3.0 or later. Any supported Clang version is compatible, but detection of
out-of-bounds accesses for global variables is only supported since Clang 11.

Tag-based KASAN is only supported in Clang.

Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and
riscv architectures, and tag-based KASAN is supported only for arm64.

Usage
-----

To enable KASAN configure kernel with::

    CONFIG_KASAN = y

and choose between CONFIG_KASAN_GENERIC (to enable generic KASAN) and
CONFIG_KASAN_SW_TAGS (to enable software tag-based KASAN).

You also need to choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE.
Outline and inline are compiler instrumentation types. The former produces a
smaller binary while the latter is 1.1-2 times faster.
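
For example, a ``.config`` fragment enabling generic KASAN with inline
instrumentation might look like::

    CONFIG_KASAN=y
    CONFIG_KASAN_GENERIC=y
    CONFIG_KASAN_INLINE=y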

Both KASAN modes work with both SLUB and SLAB memory allocators.
For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.

To augment reports with last allocation and freeing stack of the physical page,
it is recommended to also enable CONFIG_PAGE_OWNER and boot with page_owner=on.

To disable instrumentation for specific files or directories, add a line
similar to the following to the respective kernel Makefile:

- For a single file (e.g. main.o)::

    KASAN_SANITIZE_main.o := n

- For all files in one directory::

    KASAN_SANITIZE := n

Error reports
~~~~~~~~~~~~~

A typical out-of-bounds access generic KASAN report looks like this::

    ==================================================================
    BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0xa8/0xbc [test_kasan]
    Write of size 1 at addr ffff8801f44ec37b by task insmod/2760

    CPU: 1 PID: 2760 Comm: insmod Not tainted 4.19.0-rc3+ #698
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
    Call Trace:
     dump_stack+0x94/0xd8
     print_address_description+0x73/0x280
     kasan_report+0x144/0x187
     __asan_report_store1_noabort+0x17/0x20
     kmalloc_oob_right+0xa8/0xbc [test_kasan]
     kmalloc_tests_init+0x16/0x700 [test_kasan]
     do_one_initcall+0xa5/0x3ae
     do_init_module+0x1b6/0x547
     load_module+0x75df/0x8070
     __do_sys_init_module+0x1c6/0x200
     __x64_sys_init_module+0x6e/0xb0
     do_syscall_64+0x9f/0x2c0
     entry_SYSCALL_64_after_hwframe+0x44/0xa9
    RIP: 0033:0x7f96443109da
    RSP: 002b:00007ffcf0b51b08 EFLAGS: 00000202 ORIG_RAX: 00000000000000af
    RAX: ffffffffffffffda RBX: 000055dc3ee521a0 RCX: 00007f96443109da
    RDX: 00007f96445cff88 RSI: 0000000000057a50 RDI: 00007f9644992000
    RBP: 000055dc3ee510b0 R08: 0000000000000003 R09: 0000000000000000
    R10: 00007f964430cd0a R11: 0000000000000202 R12: 00007f96445cff88
    R13: 000055dc3ee51090 R14: 0000000000000000 R15: 0000000000000000

    Allocated by task 2760:
     save_stack+0x43/0xd0
     kasan_kmalloc+0xa7/0xd0
     kmem_cache_alloc_trace+0xe1/0x1b0
     kmalloc_oob_right+0x56/0xbc [test_kasan]
     kmalloc_tests_init+0x16/0x700 [test_kasan]
     do_one_initcall+0xa5/0x3ae
     do_init_module+0x1b6/0x547
     load_module+0x75df/0x8070
     __do_sys_init_module+0x1c6/0x200
     __x64_sys_init_module+0x6e/0xb0
     do_syscall_64+0x9f/0x2c0
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

    Freed by task 815:
     save_stack+0x43/0xd0
     __kasan_slab_free+0x135/0x190
     kasan_slab_free+0xe/0x10
     kfree+0x93/0x1a0
     umh_complete+0x6a/0xa0
     call_usermodehelper_exec_async+0x4c3/0x640
     ret_from_fork+0x35/0x40

    The buggy address belongs to the object at ffff8801f44ec300
     which belongs to the cache kmalloc-128 of size 128
    The buggy address is located 123 bytes inside of
     128-byte region [ffff8801f44ec300, ffff8801f44ec380)
    The buggy address belongs to the page:
    page:ffffea0007d13b00 count:1 mapcount:0 mapping:ffff8801f7001640 index:0x0
    flags: 0x200000000000100(slab)
    raw: 0200000000000100 ffffea0007d11dc0 0000001a0000001a ffff8801f7001640
    raw: 0000000000000000 0000000080150015 00000001ffffffff 0000000000000000
    page dumped because: kasan: bad access detected

    Memory state around the buggy address:
     ffff8801f44ec200: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
     ffff8801f44ec280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
    >ffff8801f44ec300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03
                                                                    ^
     ffff8801f44ec380: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
     ffff8801f44ec400: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
    ==================================================================

The header of the report provides a short summary of what kind of bug happened
and what kind of access caused it. It's followed by a stack trace of the bad
access, a stack trace of where the accessed memory was allocated (in case the
bad access happened on a slab object), and a stack trace of where the object
was freed (in case of a use-after-free bug report). Next comes a description
of the accessed slab object and information about the accessed memory page.

In the last section the report shows the memory state around the accessed
address. Reading this part requires some understanding of how KASAN works.

The state of each aligned 8-byte word of memory is encoded in one shadow byte.
Those 8 bytes can be accessible, partially accessible, freed or be a redzone.
We use the following encoding for each shadow byte: 0 means that all 8 bytes
of the corresponding memory region are accessible; a number N (1 <= N <= 7)
means that the first N bytes are accessible and the other (8 - N) bytes are
not; any negative value indicates that the entire 8-byte word is inaccessible.
We use different negative values to distinguish between different kinds of
inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
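
As a rough sketch (the real checks live in mm/kasan/generic.c and are more
involved), deciding whether a 1-byte access is valid from its shadow byte
looks like::

    /* Sketch only: validity of a 1-byte access at addr, given its shadow byte. */
    static bool byte_is_accessible(unsigned long addr, s8 shadow)
    {
            if (shadow == 0)        /* all 8 bytes of the word are accessible */
                    return true;
            if (shadow < 0)         /* redzone, freed memory, etc. */
                    return false;
            /* 1 <= shadow <= 7: only the first `shadow' bytes are accessible. */
            return (addr & 7) < (unsigned long)shadow;
    }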

In the report above the arrows point to the shadow byte 03, which means that
the accessed address is partially accessible: only the first 3 bytes of that
8-byte word are valid, so the 1-byte write at ffff8801f44ec37b (the word's
4th byte) is out of bounds.

For tag-based KASAN this last report section shows the memory tags around the
accessed address (see the Implementation details section).

Implementation details
----------------------

Generic KASAN
~~~~~~~~~~~~~

From a high level, our approach to memory error detection is similar to that
of kmemcheck: use shadow memory to record whether each byte of memory is safe
to access, and use compile-time instrumentation to insert checks of shadow
memory on each memory access.

Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB
to cover 128TB on x86_64) and uses direct mapping with a scale and offset to
translate a memory address to its corresponding shadow address.

Here is the function which translates an address to its corresponding shadow
address::

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
            return ((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                    + KASAN_SHADOW_OFFSET;
    }

where ``KASAN_SHADOW_SCALE_SHIFT = 3``.
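
As an illustration, taking the address from the report above and the x86_64
offset value (``KASAN_SHADOW_OFFSET = 0xdffffc0000000000``), the translation
works out to::

    addr   = 0xffff8801f44ec37b
    shadow = (addr >> 3) + KASAN_SHADOW_OFFSET
           = 0x1ffff1003e89d86f + 0xdffffc0000000000
           = 0xffffed003e89d86f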

Compile-time instrumentation is used to insert memory access checks. The
compiler inserts function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16. These functions check
whether the memory access is valid by checking the corresponding shadow
memory.
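
Conceptually (the names ``obj`` and ``field`` are just for illustration), an
outline-instrumented 8-byte store looks like::

    /* Original code: */
    obj->field = 1;

    /* What the compiler emits in outline mode, conceptually: */
    __asan_store8((unsigned long)&obj->field);  /* check shadow memory first */
    obj->field = 1;                             /* then perform the store */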

Starting with GCC 5.0, it is also possible to perform inline instrumentation.
Instead of making function calls, the compiler directly inserts the code to
check the shadow memory. This option significantly enlarges the kernel, but
gives an x1.1-x2 performance boost over an outline-instrumented kernel.

Generic KASAN prints up to 2 call_rcu() call stacks in reports, the last one
and the second to last.

Software tag-based KASAN
~~~~~~~~~~~~~~~~~~~~~~~~

Tag-based KASAN uses the Top Byte Ignore (TBI) feature of modern arm64 CPUs to
store a pointer tag in the top byte of kernel pointers. Like generic KASAN it
uses shadow memory to store memory tags associated with each 16-byte memory
cell (therefore it dedicates 1/16th of the kernel memory for shadow memory).

On each memory allocation tag-based KASAN generates a random tag, tags the
allocated memory with this tag, and embeds this tag into the returned pointer.
Software tag-based KASAN uses compile-time instrumentation to insert checks
before each memory access. These checks make sure that the tag of the memory
that is being accessed is equal to the tag of the pointer that is used to
access this memory. In case of a tag mismatch, tag-based KASAN prints a bug
report.
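
A minimal sketch of such a check, assuming 8-bit tags and the 16-byte memory
cells described above (the real implementation in mm/kasan/ is more involved,
and ``report_tag_mismatch()`` below is a hypothetical helper), might look
like::

    u8 ptr_tag = (u8)((u64)ptr >> 56);            /* tag from the top byte */
    u8 mem_tag = *(u8 *)kasan_mem_to_shadow(ptr); /* tag of the 16-byte cell */

    if (ptr_tag != mem_tag)
            report_tag_mismatch(ptr);             /* print a bug report */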

Software tag-based KASAN also has two instrumentation modes (outline, which
emits callbacks to check memory accesses; and inline, which performs the
shadow memory checks inline). With outline instrumentation mode, a bug report
is simply printed from the function that performs the access check. With
inline instrumentation a brk instruction is emitted by the compiler, and a
dedicated brk handler is used to print bug reports.

A potential expansion of this mode is a hardware tag-based mode, which would
use hardware memory tagging support instead of compiler instrumentation and
manual shadow memory manipulation.

What memory accesses are sanitised by KASAN?
--------------------------------------------

The kernel maps memory in a number of different parts of the address
space. This poses something of a problem for KASAN, which requires
that all addresses accessed by instrumented code have a valid shadow
region.

The range of kernel virtual addresses is large: there is not enough
real memory to support a real shadow region for every address that
could be accessed by the kernel.

By default
~~~~~~~~~~

By default, architectures only map real memory over the shadow region
for the linear mapping (and potentially other small areas). For all
other areas - such as vmalloc and vmemmap space - a single read-only
page is mapped over the shadow area. This read-only shadow page
declares all memory accesses as permitted.

This presents a problem for modules: they do not live in the linear
mapping, but in a dedicated module space. By hooking in to the module
allocator, KASAN can temporarily map real shadow memory to cover
them. This allows detection of invalid accesses to module globals, for
example.

This also creates an incompatibility with ``VMAP_STACK``: if the stack
lives in vmalloc space, it will be shadowed by the read-only page, and
the kernel will fault when trying to set up the shadow data for stack
variables.

CONFIG_KASAN_VMALLOC
~~~~~~~~~~~~~~~~~~~~

With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
cost of greater memory usage. Currently this is only supported on x86.

This works by hooking into vmalloc and vmap, and dynamically
allocating real shadow memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``.
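
(For generic KASAN, with the shadow scale of 8 described above and 4 KiB
pages, a single 4 KiB shadow page covers 32 KiB of vmalloc address space, so
that alignment would be 32 KiB.)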

Instead, we share backing space across multiple mappings. We allocate
a backing page when a mapping in vmalloc space uses a particular page
of the shadow region. This page can be shared by other vmalloc
mappings later on.

We hook in to the vmap infrastructure to lazily clean up unused shadow
memory.

To avoid the difficulties around swapping mappings around, we expect
that the part of the shadow region that covers the vmalloc space will
not be covered by the early shadow page, but will be left
unmapped. This will require changes in arch-specific code.

This allows ``VMAP_STACK`` support on x86, and can simplify support of
architectures that do not have a fixed module region.

CONFIG_KASAN_KUNIT_TEST & CONFIG_TEST_KASAN_MODULE
--------------------------------------------------

``CONFIG_KASAN_KUNIT_TEST`` utilizes the KUnit Test Framework for testing.
This means each test focuses on a small unit of functionality and
there are a few ways these tests can be run.

Each test will print the KASAN report if an error is detected and then
print the number of the test and the status of the test:

pass::

    ok 28 - kmalloc_double_kzfree

or, if kmalloc failed::

    # kmalloc_large_oob_right: ASSERTION FAILED at lib/test_kasan.c:163
    Expected ptr is not null, but is
    not ok 4 - kmalloc_large_oob_right

or, if a KASAN report was expected, but not found::

    # kmalloc_double_kzfree: EXPECTATION FAILED at lib/test_kasan.c:629
    Expected kasan_data->report_expected == kasan_data->report_found, but
    kasan_data->report_expected == 1
    kasan_data->report_found == 0
    not ok 28 - kmalloc_double_kzfree

All test statuses are tracked as they run and an overall status will
be printed at the end::

    ok 1 - kasan

or::

    not ok 1 - kasan

(1) Loadable Module
~~~~~~~~~~~~~~~~~~~

With ``CONFIG_KUNIT`` enabled, ``CONFIG_KASAN_KUNIT_TEST`` can be built as
a loadable module and run on any architecture that supports KASAN
using something like insmod or modprobe. The module is called ``test_kasan``.
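
For example::

    modprobe test_kasan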

(2) Built-In
~~~~~~~~~~~~

With ``CONFIG_KUNIT`` built-in, ``CONFIG_KASAN_KUNIT_TEST`` can be built-in
on any architecture that supports KASAN. These and any other KUnit
tests enabled will run and print the results at boot as a late-init
call.

(3) Using kunit_tool
~~~~~~~~~~~~~~~~~~~~

With ``CONFIG_KUNIT`` and ``CONFIG_KASAN_KUNIT_TEST`` built-in, we can also
use kunit_tool to see the results of these along with other KUnit
tests in a more readable way. This will not print the KASAN reports
of tests that passed. See the `KUnit documentation <https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html>`_ for more up-to-date
information on kunit_tool.

.. _KUnit: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html

``CONFIG_TEST_KASAN_MODULE`` is a set of KASAN tests that could not be
converted to KUnit. These tests can be run only as a module with
``CONFIG_TEST_KASAN_MODULE`` built as a loadable module and
``CONFIG_KASAN`` built-in. The type of error expected and the
function being run are printed before the expression expected to give
an error. Then the error is printed, if found, and that test
should be interpreted to pass only if the error was the one expected
by the test.