commit e5b8d92189

For compatibility with the other modes, hardware tag-based KASAN stores the tag associated with a page in page->flags. Because of this, the kernel faults on access when it allocates a page with an initial tag and the user then changes the tags.

Reset the tag the kernel associates with a page in all the meaningful places to prevent kernel faults on access.

Note: an alternative to this approach could be to modify page_to_virt(). That, however, could end up being racy: if one CPU checks the PG_mte_tagged bit and decides that the page is not tagged, while another CPU maps the same page with PROT_MTE so that it becomes tagged, the subsequent kernel access would fail.

Link: https://lkml.kernel.org/r/9073d4e973747a6f78d5bdd7ebe17f290d087096.1606161801.git.andreyknvl@google.com
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Branislav Rankov <Branislav.Rankov@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Marco Elver <elver@google.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
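The fix in the file below hinges on page_kasan_tag_reset(). As a rough illustration of what "resetting the tag in page->flags" means, here is a minimal, self-contained user-space sketch; fake_page, TAG_SHIFT, and TAG_MASK are invented for illustration and do not reflect the kernel's actual field layout:

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins; the real width and position of the tag field
 * inside page->flags are arch-specific and differ from these. */
#define TAG_SHIFT 56
#define TAG_MASK  0xffULL

struct fake_page {
        uint64_t flags;
};

static void page_tag_set(struct fake_page *p, uint8_t tag)
{
        p->flags &= ~(TAG_MASK << TAG_SHIFT);
        p->flags |= ((uint64_t)tag & TAG_MASK) << TAG_SHIFT;
}

static uint8_t page_tag(const struct fake_page *p)
{
        return (uint8_t)((p->flags >> TAG_SHIFT) & TAG_MASK);
}

/* Analogue of page_kasan_tag_reset(): 0xff matches the top byte of an
 * untagged kernel pointer, so kernel accesses stop mismatching. */
static void page_tag_reset(struct fake_page *p)
{
        page_tag_set(p, 0xff);
}

int main(void)
{
        struct fake_page page = { .flags = 0 };

        page_tag_set(&page, 0x2a);      /* user remaps the page, new tag */
        page_tag_reset(&page);          /* kernel resets its recorded tag */
        printf("tag after reset: 0x%02x\n", page_tag(&page));
        return 0;
}

In the real kernel the tag occupies a byte of page->flags and the reset value is the tag carried by native kernel pointers; the reset-to-match-all idea is the same.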
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Based on arch/arm/mm/copypage.c
 *
 * Copyright (C) 2002 Deep Blue Solutions Ltd, All Rights Reserved.
 * Copyright (C) 2012 ARM Ltd.
 */

#include <linux/bitops.h>
#include <linux/mm.h>

#include <asm/page.h>
#include <asm/cacheflush.h>
#include <asm/cpufeature.h>
#include <asm/mte.h>
void copy_highpage(struct page *to, struct page *from)
{
        void *kto = page_address(to);
        void *kfrom = page_address(from);

        copy_page(kto, kfrom);

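        /*
         * If the source page is MTE-tagged, mark the destination page as
         * tagged too and carry the tags over.
         */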
        if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
                set_bit(PG_mte_tagged, &to->flags);
                page_kasan_tag_reset(to);
                /*
                 * We need smp_wmb() in between setting the flags and clearing the
                 * tags because if another thread reads page->flags and builds a
                 * tagged address out of it, there is an actual dependency to the
                 * memory access, but on the current thread we do not guarantee that
                 * the new page->flags are visible before the tags were updated.
                 */
                smp_wmb();
                mte_copy_page_tags(kto, kfrom);
        }
}
EXPORT_SYMBOL(copy_highpage);

void copy_user_highpage(struct page *to, struct page *from,
                        unsigned long vaddr, struct vm_area_struct *vma)
{
        copy_highpage(to, from);
        flush_dcache_page(to);
}
EXPORT_SYMBOL_GPL(copy_user_highpage);
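The smp_wmb() in copy_highpage() pairs with an address dependency on the reader side: another thread reads page->flags, builds a tagged address out of it, and dereferences through that address. Below is a hedged user-space sketch of the same publish pattern, not kernel code: the variable names are invented, and the C11 release fence standing in for smp_wmb() (and the consume load standing in for the address dependency) are stronger, portable approximations of what the kernel relies on.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long flags;     /* stands in for page->flags */
static _Atomic unsigned char tags;      /* stands in for the page's tag storage */

static void *writer(void *arg)
{
        (void)arg;
        /* Publish the reset tag in flags first (set_bit() +
         * page_kasan_tag_reset() in copy_highpage())... */
        atomic_store_explicit(&flags, 0xffUL, memory_order_relaxed);
        /* ...then order that store before the tag update; this fence
         * plays the role of smp_wmb(). */
        atomic_thread_fence(memory_order_release);
        /* mte_copy_page_tags() analogue: rewrite the tag storage. */
        atomic_store_explicit(&tags, 0x2a, memory_order_relaxed);
        return NULL;
}

static void *reader(void *arg)
{
        (void)arg;
        /* Build a "tagged address" out of flags; the later access to the
         * tag storage depends on this load, mirroring the address
         * dependency the kernel comment relies on. */
        unsigned long f = atomic_load_explicit(&flags, memory_order_consume);
        unsigned char t = atomic_load_explicit(&tags, memory_order_relaxed);
        printf("flags=%#lx tags=%#x\n", f, (unsigned int)t);
        return NULL;
}

int main(void)
{
        pthread_t w, r;

        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
}

Without the fence, the writer's two stores could become visible in either order, which is exactly the window the kernel comment describes: a reader could observe the new tags while still deriving its pointer tag from stale flags.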