linux/arch/powerpc/mm/book3s32
Christophe Leroy 1b03e71ff6 powerpc/32s: Handle PROTFAULT in hash_page() also for CONFIG_PPC_KUAP
On hash 32 bits, handling minor protection faults, like unsetting the
dirty flag, is heavy if done from the normal page fault processing,
because it implies a software hash table lookup to flush the entry,
and then a DSI is taken anyway to add the entry back.

When KUAP was implemented, as explained in commit a68c31fc01
("powerpc/32s: Implement Kernel Userspace Access Protection"),
protection faults were diverted away from hash_page(), because
hash_page() was not able to identify a KUAP fault.

Implement KUAP verification in hash_page() by clearing write
permission when the access is a kernel access and Ks is 1. This
works regardless of the address, because kernel segments always
have Ks set to 0, while user segments have Ks set to 0 only while
a kernel write to userspace is granted.

Protection faults can then be handled by hash_page() even with
KUAP enabled.
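
The check itself is small. Below is a minimal C sketch of the idea;
the real change is PowerPC assembly in hash_low.S, and the constants
and helper names here (SR_KS, PAGE_RW, kuap_filter_pte) are
illustrative, not the kernel's:

    #include <stdio.h>

    #define SR_KS    0x40000000UL  /* Ks (supervisor key) bit of a segment register */
    #define PAGE_RW  0x00000400UL  /* illustrative "write permitted" PTE bit */

    /*
     * If a faulting access comes from the kernel and the segment's Ks
     * bit is 1, drop write permission so the access fails the
     * permission check and is reported as a protection (KUAP) fault
     * instead of being hashed in.
     */
    static unsigned long kuap_filter_pte(unsigned long pte,
                                         unsigned long sr, int kernel)
    {
            if (kernel && (sr & SR_KS))
                    pte &= ~PAGE_RW;
            return pte;
    }

    int main(void)
    {
            unsigned long pte = PAGE_RW;

            /* Kernel write while userspace is locked (Ks = 1): denied. */
            printf("locked:   rw=%d\n",
                   !!(kuap_filter_pte(pte, SR_KS, 1) & PAGE_RW));

            /* Same write after a KUAP unlock cleared Ks: still allowed. */
            printf("unlocked: rw=%d\n",
                   !!(kuap_filter_pte(pte, 0, 1) & PAGE_RW));
            return 0;
    }

Because kernel segments always have Ks = 0, the filter is a no-op for
kernel addresses, which is why no address comparison is needed.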

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8a4ffe4798e9ea32aaaccdf85e411bb1beed3500.1605542955.git.christophe.leroy@csgroup.eu
2020-12-09 16:59:46 +11:00
hash_low.S     powerpc/32s: Handle PROTFAULT in hash_page() also for CONFIG_PPC_KUAP  2020-12-09 16:59:46 +11:00
Makefile       powerpc/32s: Move _tlbie() and _tlbia() in a new file                  2020-12-09 16:46:55 +11:00
mmu_context.c  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152     2019-05-30 11:26:32 -07:00
mmu.c          powerpc/32s: Move early_mmu_init() into mmu.c                          2020-12-09 16:46:56 +11:00
nohash_low.S   powerpc/32s: Move _tlbie() and _tlbia() in a new file                  2020-12-09 16:46:55 +11:00
tlb.c          powerpc/32s: Move early_mmu_init() into mmu.c                          2020-12-09 16:46:56 +11:00