linux/arch/arm64/mm
Mark Rutland 07b742a4d9 arm64: mm: log potential KASAN shadow alias
When the kernel is built with KASAN_GENERIC or KASAN_SW_TAGS, shadow
memory is allocated and mapped for all legitimate kernel addresses, and
before each regular memory access the instrumentation first reads from
the corresponding shadow address.
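
As a rough illustration rather than the kernel's actual code, the
transform and the pre-access shadow read can be modelled in user space
as below; the shift and offset assume the 48-bit VA / KASAN_GENERIC
configuration used in the example further down, and the sample address
is purely hypothetical:

  /*
   * User-space sketch of the KASAN_GENERIC address-to-shadow transform;
   * the shift and offset mirror the 48-bit VA configuration implied by
   * this commit, and other configurations use different constants.
   */
  #include <stdio.h>

  #define KASAN_SHADOW_SCALE_SHIFT 3 /* one shadow byte covers 8 bytes */
  #define KASAN_SHADOW_OFFSET      0xdfff800000000000UL

  static unsigned long mem_to_shadow(unsigned long addr)
  {
      return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
  }

  int main(void)
  {
      /* Hypothetical kernel VA, used only to illustrate the transform. */
      unsigned long addr = 0xffff000010000000UL;

      /* Instrumentation reads the shadow byte at this address before
       * letting the real access proceed. */
      printf("shadow of %016lx is at %016lx\n", addr, mem_to_shadow(addr));
      return 0;
  }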

Due to the way memory addresses are converted to shadow addresses, bogus
pointers (e.g. NULL) can generate shadow addresses outside the bounds of
the allocated shadow memory. For example, with KASAN_GENERIC and 48-bit
VAs, NULL would have a shadow address of dfff800000000000, which falls
in the gap between the TTBR0 (user) and TTBR1 (kernel) address ranges.
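
Concretely, assuming the same 48-bit KASAN_GENERIC layout, the shadow
addresses for NULL and for a field at offset 0xb8 from a NULL pointer
(the access in the example below) work out as:

  shadow(0x0)  = (0x0  >> 3) + dfff800000000000 = dfff800000000000
  shadow(0xb8) = (0xb8 >> 3) + dfff800000000000 = dfff800000000017

With 48-bit VAs, TTBR0 covers 0000000000000000-0000ffffffffffff and
TTBR1 covers ffff000000000000-ffffffffffffffff, so both shadow values
land in the unmapped gap between the two ranges.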

To make such cases easier to debug, this patch makes die_kernel_fault()
dump the real memory address range for any potential KASAN shadow access
using kasan_non_canonical_hook(), which results in fault information as
below when KASAN is enabled:

| Unable to handle kernel paging request at virtual address dfff800000000017
| KASAN: null-ptr-deref in range [0x00000000000000b8-0x00000000000000bf]
| Mem abort info:
|   ESR = 0x96000004
|   EC = 0x25: DABT (current EL), IL = 32 bits
|   SET = 0, FnV = 0
|   EA = 0, S1PTW = 0
|   FSC = 0x04: level 0 translation fault
| Data abort info:
|   ISV = 0, ISS = 0x00000004
|   CM = 0, WnR = 0
| [dfff800000000017] address between user and kernel address ranges
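
For reference, the "KASAN: null-ptr-deref in range [...]" line is
derived by inverting the shadow transform; in the kernel this is done
by kasan_non_canonical_hook(), which also picks the bug string. The
standalone sketch below (same 48-bit KASAN_GENERIC assumptions, bug
string hard-coded) reproduces the range from the log above:

  /*
   * Standalone sketch of the shadow-to-address calculation behind the
   * "KASAN: null-ptr-deref in range [...]" line above; in the kernel it
   * is performed by kasan_non_canonical_hook(). Constants assume
   * KASAN_GENERIC with 48-bit VAs, and the bug string is hard-coded.
   */
  #include <stdio.h>

  #define KASAN_SHADOW_SCALE_SHIFT 3
  #define KASAN_SHADOW_OFFSET      0xdfff800000000000UL
  #define KASAN_GRANULE_SIZE       (1UL << KASAN_SHADOW_SCALE_SHIFT)

  int main(void)
  {
      /* Faulting address taken from the example log above. */
      unsigned long fault_addr = 0xdfff800000000017UL;
      unsigned long start = (fault_addr - KASAN_SHADOW_OFFSET)
                              << KASAN_SHADOW_SCALE_SHIFT;

      printf("KASAN: null-ptr-deref in range [0x%016lx-0x%016lx]\n",
             start, start + KASAN_GRANULE_SIZE - 1);
      return 0;
  }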

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Will Deacon <will@kernel.org>
Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211207183226.834557-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-12-13 18:47:09 +00:00
cache.S arm64: Rename arm64-internal cache maintenance functions 2021-05-25 19:27:49 +01:00
context.c arm64: mm: Use better bitmap_zalloc() 2021-06-01 18:52:05 +01:00
copypage.c arm64: mte: reset the page tag in page->flags 2020-12-22 12:55:07 -08:00
dma-mapping.c iommu/dma: Pass address limit rather than size to iommu_setup_dma_ops() 2021-06-25 15:02:43 +02:00
extable.c arm64: extable: add load_unaligned_zeropad() handler 2021-10-21 10:45:22 +01:00
fault.c arm64: mm: log potential KASAN shadow alias 2021-12-13 18:47:09 +00:00
flush.c arm64: Rename arm64-internal cache maintenance functions 2021-05-25 19:27:49 +01:00
hugetlbpage.c Merge branch 'for-next/fixes' into for-next/core 2021-10-29 12:27:53 +01:00
init.c Merge branch 'for-next/pfn-valid' into for-next/core 2021-10-29 12:25:19 +01:00
ioremap.c arm64: decouple check whether pfn is in linear map from pfn_valid() 2021-06-30 20:47:29 -07:00
kasan_init.c kasan: add kasan mode messages when kasan init 2021-11-11 09:34:35 -08:00
Makefile arm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors 2021-10-01 13:30:59 +01:00
mmap.c arm64: Include linux/io.h in mm/mmap.c 2021-01-27 12:52:16 +00:00
mmu.c arm64 fixes for -rc1 2021-11-10 11:29:30 -08:00
mteswap.c arm64: mte: reset the page tag in page->flags 2020-12-22 12:55:07 -08:00
pageattr.c set_memory: allow querying whether set_direct_map_*() is actually enabled 2021-07-08 11:48:20 -07:00
pgd.c mm: consolidate pgtable_cache_init() and pgd_cache_init() 2019-09-24 15:54:09 -07:00
physaddr.c arm64: Do not pass tagged addresses to __is_lm_address() 2021-02-02 17:44:47 +00:00
proc.S arm64: kasan: mte: use a constant kernel GCR_EL1 value 2021-08-02 18:14:21 +01:00
ptdump_debugfs.c arm64: Add __init section marker to some functions 2021-04-08 17:45:10 +01:00
ptdump.c arm64: mm: Remove unused support for Device-GRE memory type 2021-06-01 18:53:53 +01:00
trans_pgd-asm.S arm64: kexec: configure EL2 vectors for kexec 2021-10-01 13:31:00 +01:00
trans_pgd.c arm64: trans_pgd: remove trans_pgd_map_page() 2021-10-01 13:31:01 +01:00