arm64 updates for 5.14

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:
 "There's a reasonable amount here and the juicy details are all below.

  It's worth noting that the MTE/KASAN changes strayed outside of our
  usual directories due to core mm changes and some associated changes
  to some other architectures; Andrew asked for us to carry these [1]
  rather than take them via the -mm tree.

  Summary:

   - Optimise SVE switching for CPUs with 128-bit implementations.

   - Fix output format from SVE selftest.

   - Add support for versions 1.2 and 1.3 of the SMC calling
     convention.

   - Allow Pointer Authentication to be configured independently for
     kernel and userspace.

   - PMU driver cleanups for managing IRQ affinity and exposing event
     attributes via sysfs.

   - KASAN optimisations for both hardware tagging (MTE) and out-of-line
     software tagging implementations.

   - Relax frame record alignment requirements to facilitate 8-byte
     alignment with KASAN and Clang.

   - Cleanup of page-table definitions and removal of unused memory
     types.

   - Reduction of ARCH_DMA_MINALIGN back to 64 bytes.

   - Refactoring of our instruction decoding routines and addition of
     some missing encodings.

   - Entry code moved into C and hardened against harmful compiler
     instrumentation.

   - Update booting requirements for the FEAT_HCX feature, added to v8.7
     of the architecture.

   - Fix resume from idle when pNMI is being used.

   - Additional CPU sanity checks for MTE and preparatory changes for
     systems where not all of the CPUs support 32-bit EL0.

   - Update our kernel string routines to the latest Cortex Strings
     implementation.

   - Big cleanup of our cache maintenance routines, which were
     confusingly named and inconsistent in their implementations.

   - Tweak linker flags so that GDB can understand vmlinux when using
     RELR relocations.

   - Boot path cleanups to enable early initialisation of per-cpu
     operations needed by KCSAN.

   - Non-critical fixes and miscellaneous cleanup"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (150 commits)
  arm64: tlb: fix the TTL value of tlb_get_level
  arm64: Restrict undef hook for cpufeature registers
  arm64/mm: Rename ARM64_SWAPPER_USES_SECTION_MAPS
  arm64: insn: avoid circular include dependency
  arm64: smp: Bump debugging information print down to KERN_DEBUG
  drivers/perf: fix the missed ida_simple_remove() in ddr_perf_probe()
  perf/arm-cmn: Fix invalid pointer when access dtc object sharing the same IRQ number
  arm64: suspend: Use cpuidle context helpers in cpu_suspend()
  PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter()
  arm64: Convert cpu_do_idle() to using cpuidle context helpers
  arm64: Add cpuidle context save/restore helpers
  arm64: head: fix code comments in set_cpu_boot_mode_flag
  arm64: mm: drop unused __pa(__idmap_text_start)
  arm64: mm: fix the count comments in compute_indices
  arm64/mm: Fix ttbr0 values stored in struct thread_info for software-pan
  arm64: mm: Pass original fault address to handle_mm_fault()
  arm64/mm: Drop SECTION_[SHIFT|SIZE|MASK]
  arm64/mm: Use CONT_PMD_SHIFT for ARM64_MEMSTART_SHIFT
  arm64/mm: Drop SWAPPER_INIT_MAP_SIZE
  arm64: Conditionally configure PTR_AUTH key of the kernel.
  ...
commit 9840cfcb97 (Linus Torvalds, 2021-06-28 14:04:24 -07:00)
158 changed files with 3390 additions and 2583 deletions


@@ -277,6 +277,12 @@ Before jumping into the kernel, the following conditions must be met:
     - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.
 
+  For CPUs with support for HCRX_EL2 (FEAT_HCX) present:
+
+  - If EL3 is present and the kernel is entered at EL2:
+
+    - SCR_EL3.HXEn (bit 38) must be initialised to 0b1.
+
   For CPUs with Advanced SIMD and floating point support:
 
   - If EL3 is present:


@@ -1039,7 +1039,7 @@ LDFLAGS_vmlinux += $(call ld-option, -X,)
 endif
 
 ifeq ($(CONFIG_RELR),y)
-LDFLAGS_vmlinux += --pack-dyn-relocs=relr
+LDFLAGS_vmlinux += --pack-dyn-relocs=relr --use-android-relr-tags
 endif
 
 # We never want expected sections to be placed heuristically by the


@@ -17,9 +17,9 @@
 extern void clear_page(void *page);
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vmaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vmaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 extern void copy_page(void * _to, void * _from);
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)


@@ -50,4 +50,9 @@ extern int arm_cpuidle_suspend(int index);
 
 extern int arm_cpuidle_init(int cpu);
 
+struct arm_cpuidle_irq_context { };
+
+#define arm_cpuidle_save_irq_context(c)		(void)c
+#define arm_cpuidle_restore_irq_context(c)	(void)c
+
 #endif


@@ -773,10 +773,10 @@ static inline void armv7pmu_write_counter(struct perf_event *event, u64 value)
 		pr_err("CPU%u writing wrong counter %d\n",
 			smp_processor_id(), idx);
 	} else if (idx == ARMV7_IDX_CYCLE_COUNTER) {
-		asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" (value));
+		asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" ((u32)value));
 	} else {
 		armv7_pmnc_select_counter(idx);
-		asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" (value));
+		asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" ((u32)value));
 	}
 }


@@ -1481,12 +1481,6 @@ menu "ARMv8.3 architectural features"
 config ARM64_PTR_AUTH
 	bool "Enable support for pointer authentication"
 	default y
-	depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC
-	# Modern compilers insert a .note.gnu.property section note for PAC
-	# which is only understood by binutils starting with version 2.33.1.
-	depends on LD_IS_LLD || LD_VERSION >= 23301 || (CC_IS_GCC && GCC_VERSION < 90100)
-	depends on !CC_IS_CLANG || AS_HAS_CFI_NEGATE_RA_STATE
-	depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
 	help
 	  Pointer authentication (part of the ARMv8.3 Extensions) provides
 	  instructions for signing and authenticating pointers against secret
@@ -1498,13 +1492,6 @@ config ARM64_PTR_AUTH
 	  for each process at exec() time, with these keys being
 	  context-switched along with the process.
 
-	  If the compiler supports the -mbranch-protection or
-	  -msign-return-address flag (e.g. GCC 7 or later), then this option
-	  will also cause the kernel itself to be compiled with return address
-	  protection. In this case, and if the target hardware is known to
-	  support pointer authentication, then CONFIG_STACKPROTECTOR can be
-	  disabled with minimal loss of protection.
-
 	  The feature is detected at runtime. If the feature is not present in
 	  hardware it will not be advertised to userspace/KVM guest nor will it
 	  be enabled.
@@ -1515,6 +1502,24 @@ config ARM64_PTR_AUTH
 	  but with the feature disabled. On such a system, this option should
 	  not be selected.
 
+config ARM64_PTR_AUTH_KERNEL
+	bool "Use pointer authentication for kernel"
+	default y
+	depends on ARM64_PTR_AUTH
+	depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC
+	# Modern compilers insert a .note.gnu.property section note for PAC
+	# which is only understood by binutils starting with version 2.33.1.
+	depends on LD_IS_LLD || LD_VERSION >= 23301 || (CC_IS_GCC && GCC_VERSION < 90100)
+	depends on !CC_IS_CLANG || AS_HAS_CFI_NEGATE_RA_STATE
+	depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
+	help
+	  If the compiler supports the -mbranch-protection or
+	  -msign-return-address flag (e.g. GCC 7 or later), then this option
+	  will cause the kernel itself to be compiled with return address
+	  protection. In this case, and if the target hardware is known to
+	  support pointer authentication, then CONFIG_STACKPROTECTOR can be
+	  disabled with minimal loss of protection.
+
 	  This feature works with FUNCTION_GRAPH_TRACER option only if
 	  DYNAMIC_FTRACE_WITH_REGS is enabled.
 
@@ -1606,7 +1611,7 @@ config ARM64_BTI_KERNEL
 	bool "Use Branch Target Identification for kernel"
 	default y
 	depends on ARM64_BTI
-	depends on ARM64_PTR_AUTH
+	depends on ARM64_PTR_AUTH_KERNEL
 	depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
 	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
 	depends on !CC_IS_GCC || GCC_VERSION >= 100100


@@ -70,7 +70,7 @@ endif
 # off, this will be overridden if we are using branch protection.
 branch-prot-flags-y += $(call cc-option,-mbranch-protection=none)
 
-ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
+ifeq ($(CONFIG_ARM64_PTR_AUTH_KERNEL),y)
 branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
 # We enable additional protection for leaf functions as there is some
 # narrow potential for ROP protection benefits and no substantial


@@ -3,12 +3,10 @@
 #define __ASM_ALTERNATIVE_MACROS_H
 
 #include <asm/cpucaps.h>
+#include <asm/insn-def.h>
 
 #define ARM64_CB_PATCH ARM64_NCAPS
 
-/* A64 instructions are always 32 bits. */
-#define AARCH64_INSN_SIZE	4
-
 #ifndef __ASSEMBLY__
 
 #include <linux/stringify.h>
@@ -197,11 +195,6 @@ alternative_endif
 #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
 	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
 
-	.macro user_alt, label, oldinstr, newinstr, cond
-9999:	alternative_insn "\oldinstr", "\newinstr", \cond
-	_asm_extable 9999b, \label
-	.endm
-
 #endif  /*  __ASSEMBLY__  */
 
 /*


@@ -124,7 +124,8 @@ static inline u32 gic_read_rpr(void)
 #define gic_read_lpir(c)		readq_relaxed(c)
 #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
 
-#define gic_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+#define gic_flush_dcache_to_poc(a,l)	\
+	dcache_clean_inval_poc((unsigned long)(a), (unsigned long)(a)+(l))
 
 #define gits_read_baser(c)		readq_relaxed(c)
 #define gits_write_baser(v, c)		writeq_relaxed(v, c)


@@ -23,4 +23,10 @@ long long __ashlti3(long long a, int b);
 long long __ashrti3(long long a, int b);
 long long __lshrti3(long long a, int b);
 
+/*
+ * This function uses a custom calling convention and cannot be called from C so
+ * this prototype is not entirely accurate.
+ */
+void __hwasan_tag_mismatch(unsigned long addr, unsigned long access_info);
+
 #endif /* __ASM_PROTOTYPES_H */


@@ -7,19 +7,7 @@
 #include <asm/cpufeature.h>
 #include <asm/sysreg.h>
 
-#ifdef CONFIG_ARM64_PTR_AUTH
-/*
- * thread.keys_user.ap* as offset exceeds the #imm offset range
- * so use the base value of ldp as thread.keys_user and offset as
- * thread.keys_user.ap*.
- */
-	.macro __ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
-	mov	\tmp1, #THREAD_KEYS_USER
-	add	\tmp1, \tsk, \tmp1
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
-	msr_s	SYS_APIAKEYLO_EL1, \tmp2
-	msr_s	SYS_APIAKEYHI_EL1, \tmp3
-	.endm
+#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
 
 	.macro __ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
 	mov	\tmp1, #THREAD_KEYS_KERNEL
@@ -42,6 +30,33 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
 alternative_else_nop_endif
 	.endm
 
+#else /* CONFIG_ARM64_PTR_AUTH_KERNEL */
+
+	.macro __ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
+	.endm
+
+	.macro ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
+	.endm
+
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	.endm
+
+#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+/*
+ * thread.keys_user.ap* as offset exceeds the #imm offset range
+ * so use the base value of ldp as thread.keys_user and offset as
+ * thread.keys_user.ap*.
+ */
+	.macro __ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_USER
+	add	\tmp1, \tsk, \tmp1
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	.endm
+
 	.macro __ptrauth_keys_init_cpu tsk, tmp1, tmp2, tmp3
 	mrs	\tmp1, id_aa64isar1_el1
 	ubfx	\tmp1, \tmp1, #ID_AA64ISAR1_APA_SHIFT, #8
@@ -64,17 +79,11 @@ alternative_else_nop_endif
 .Lno_addr_auth\@:
 	.endm
 
-#else /* CONFIG_ARM64_PTR_AUTH */
+#else /* !CONFIG_ARM64_PTR_AUTH */
 
 	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
 	.endm
 
-	.macro ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
-	.endm
-
-	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
-	.endm
-
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_ASM_POINTER_AUTH_H */


@@ -130,15 +130,27 @@ alternative_endif
 	.endm
 
 /*
- * Emit an entry into the exception table
+ * Create an exception table entry for `insn`, which will branch to `fixup`
+ * when an unhandled fault is taken.
  */
-	.macro		_asm_extable, from, to
+	.macro		_asm_extable, insn, fixup
 	.pushsection	__ex_table, "a"
 	.align		3
-	.long	(\from - .), (\to - .)
+	.long	(\insn - .), (\fixup - .)
 	.popsection
 	.endm
 
+/*
+ * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
+ * do nothing.
+ */
+	.macro		_cond_extable, insn, fixup
+	.ifnc		\fixup,
+	_asm_extable	\insn, \fixup
+	.endif
+	.endm
+
 #define USER(l, x...)				\
 9999:	x;					\
 	_asm_extable	9999b, l
@@ -232,15 +244,23 @@ lr	.req	x30		// link register
  * @dst: destination register
  */
 #if defined(__KVM_NVHE_HYPERVISOR__) || defined(__KVM_VHE_HYPERVISOR__)
-	.macro	this_cpu_offset, dst
+	.macro	get_this_cpu_offset, dst
 	mrs	\dst, tpidr_el2
 	.endm
 #else
-	.macro	this_cpu_offset, dst
+	.macro	get_this_cpu_offset, dst
 alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
 	mrs	\dst, tpidr_el1
 alternative_else
 	mrs	\dst, tpidr_el2
+alternative_endif
+	.endm
+
+	.macro	set_this_cpu_offset, src
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
+	msr	tpidr_el1, \src
+alternative_else
+	msr	tpidr_el2, \src
 alternative_endif
 	.endm
 #endif
@@ -253,7 +273,7 @@ alternative_endif
 	.macro adr_this_cpu, dst, sym, tmp
 	adrp	\tmp, \sym
 	add	\dst, \tmp, #:lo12:\sym
-	this_cpu_offset \tmp
+	get_this_cpu_offset \tmp
 	add	\dst, \dst, \tmp
 	.endm
 
@@ -264,7 +284,7 @@ alternative_endif
  */
 	.macro ldr_this_cpu dst, sym, tmp
 	adr_l	\dst, \sym
-	this_cpu_offset \tmp
+	get_this_cpu_offset \tmp
 	ldr	\dst, [\dst, \tmp]
 	.endm
 
@@ -375,51 +395,53 @@ alternative_cb_end
 	bfi	\tcr, \tmp0, \pos, #3
 	.endm
 
-/*
- * Macro to perform a data cache maintenance for the interval
- * [kaddr, kaddr + size)
- *
- *	op:		operation passed to dc instruction
- *	domain:		domain used in dsb instruciton
- *	kaddr:		starting virtual address of the region
- *	size:		size of the region
- *	Corrupts:	kaddr, size, tmp1, tmp2
- */
-	.macro __dcache_op_workaround_clean_cache, op, kaddr
+	.macro __dcache_op_workaround_clean_cache, op, addr
 alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
-	dc	\op, \kaddr
+	dc	\op, \addr
 alternative_else
-	dc	civac, \kaddr
+	dc	civac, \addr
 alternative_endif
 	.endm
 
-	.macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
+/*
+ * Macro to perform a data cache maintenance for the interval
+ * [start, end)
+ *
+ *	op:		operation passed to dc instruction
+ *	domain:		domain used in dsb instruciton
+ *	start:		starting virtual address of the region
+ *	end:		end virtual address of the region
+ *	fixup:		optional label to branch to on user fault
+ *	Corrupts:	start, end, tmp1, tmp2
+ */
+	.macro dcache_by_line_op op, domain, start, end, tmp1, tmp2, fixup
 	dcache_line_size \tmp1, \tmp2
-	add	\size, \kaddr, \size
 	sub	\tmp2, \tmp1, #1
-	bic	\kaddr, \kaddr, \tmp2
-9998:
+	bic	\start, \start, \tmp2
+.Ldcache_op\@:
 	.ifc	\op, cvau
-	__dcache_op_workaround_clean_cache \op, \kaddr
+	__dcache_op_workaround_clean_cache \op, \start
 	.else
 	.ifc	\op, cvac
-	__dcache_op_workaround_clean_cache \op, \kaddr
+	__dcache_op_workaround_clean_cache \op, \start
 	.else
 	.ifc	\op, cvap
-	sys	3, c7, c12, 1, \kaddr	// dc cvap
+	sys	3, c7, c12, 1, \start	// dc cvap
 	.else
 	.ifc	\op, cvadp
-	sys	3, c7, c13, 1, \kaddr	// dc cvadp
+	sys	3, c7, c13, 1, \start	// dc cvadp
 	.else
-	dc	\op, \kaddr
+	dc	\op, \start
 	.endif
 	.endif
 	.endif
 	.endif
-	add	\kaddr, \kaddr, \tmp1
-	cmp	\kaddr, \size
-	b.lo	9998b
+	add	\start, \start, \tmp1
+	cmp	\start, \end
+	b.lo	.Ldcache_op\@
 	dsb	\domain
+
+	_cond_extable .Ldcache_op\@, \fixup
 	.endm
 
 /*
@@ -427,20 +449,22 @@ alternative_endif
  * [start, end)
  *
  *	start, end:	virtual addresses describing the region
- *	label:		A label to branch to on user fault.
+ *	fixup:		optional label to branch to on user fault
  *	Corrupts:	tmp1, tmp2
  */
-	.macro invalidate_icache_by_line start, end, tmp1, tmp2, label
+	.macro invalidate_icache_by_line start, end, tmp1, tmp2, fixup
 	icache_line_size \tmp1, \tmp2
 	sub	\tmp2, \tmp1, #1
 	bic	\tmp2, \start, \tmp2
-9997:
-USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
+.Licache_op\@:
+	ic	ivau, \tmp2			// invalidate I line PoU
 	add	\tmp2, \tmp2, \tmp1
 	cmp	\tmp2, \end
-	b.lo	9997b
+	b.lo	.Licache_op\@
 	dsb	ish
 	isb
+
+	_cond_extable .Licache_op\@, \fixup
 	.endm
 
@@ -745,7 +769,7 @@ USER(\label, ic	ivau, \tmp2)	// invalidate I line PoU
 	cbz	\tmp, \lbl
 #endif
 	adr_l	\tmp, irq_stat + IRQ_CPUSTAT_SOFTIRQ_PENDING
-	this_cpu_offset \tmp2
+	get_this_cpu_offset \tmp2
 	ldr	w\tmp, [\tmp, \tmp2]
 	cbnz	w\tmp, \lbl	// yield on pending softirq in task context
 .Lnoyield_\@:


@@ -47,7 +47,7 @@
  * cache before the transfer is done, causing old data to be seen by
  * the CPU.
  */
-#define ARCH_DMA_MINALIGN	(128)
+#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES
 
 #ifdef CONFIG_KASAN_SW_TAGS
 #define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)


@@ -30,45 +30,58 @@
  * the implementation assumes non-aliasing VIPT D-cache and (aliasing)
  * VIPT I-cache.
  *
- *	flush_icache_range(start, end)
+ *	All functions below apply to the interval [start, end)
+ *		- start  - virtual start address (inclusive)
+ *		- end    - virtual end address (exclusive)
  *
- *		Ensure coherency between the I-cache and the D-cache in the
- *		region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *	caches_clean_inval_pou(start, end)
  *
- *	invalidate_icache_range(start, end)
+ *		Ensure coherency between the I-cache and the D-cache region to
+ *		the Point of Unification.
  *
- *		Invalidate the I-cache in the region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *	caches_clean_inval_user_pou(start, end)
  *
- *	__flush_cache_user_range(start, end)
+ *		Ensure coherency between the I-cache and the D-cache region to
+ *		the Point of Unification.
+ *		Use only if the region might access user memory.
  *
- *		Ensure coherency between the I-cache and the D-cache in the
- *		region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *	icache_inval_pou(start, end)
  *
- *	__flush_dcache_area(kaddr, size)
+ *		Invalidate I-cache region to the Point of Unification.
  *
- *		Ensure that the data held in page is written back.
- *		- kaddr  - page address
- *		- size   - region size
+ *	dcache_clean_inval_poc(start, end)
+ *
+ *		Clean and invalidate D-cache region to the Point of Coherency.
+ *
+ *	dcache_inval_poc(start, end)
+ *
+ *		Invalidate D-cache region to the Point of Coherency.
+ *
+ *	dcache_clean_poc(start, end)
+ *
+ *		Clean D-cache region to the Point of Coherency.
+ *
+ *	dcache_clean_pop(start, end)
+ *
+ *		Clean D-cache region to the Point of Persistence.
+ *
+ *	dcache_clean_pou(start, end)
+ *
+ *		Clean D-cache region to the Point of Unification.
  */
-extern void __flush_icache_range(unsigned long start, unsigned long end);
-extern int  invalidate_icache_range(unsigned long start, unsigned long end);
-extern void __flush_dcache_area(void *addr, size_t len);
-extern void __inval_dcache_area(void *addr, size_t len);
-extern void __clean_dcache_area_poc(void *addr, size_t len);
-extern void __clean_dcache_area_pop(void *addr, size_t len);
-extern void __clean_dcache_area_pou(void *addr, size_t len);
-extern long __flush_cache_user_range(unsigned long start, unsigned long end);
-extern void sync_icache_aliases(void *kaddr, unsigned long len);
+extern void caches_clean_inval_pou(unsigned long start, unsigned long end);
+extern void icache_inval_pou(unsigned long start, unsigned long end);
+extern void dcache_clean_inval_poc(unsigned long start, unsigned long end);
+extern void dcache_inval_poc(unsigned long start, unsigned long end);
+extern void dcache_clean_poc(unsigned long start, unsigned long end);
+extern void dcache_clean_pop(unsigned long start, unsigned long end);
+extern void dcache_clean_pou(unsigned long start, unsigned long end);
+extern long caches_clean_inval_user_pou(unsigned long start, unsigned long end);
+extern void sync_icache_aliases(unsigned long start, unsigned long end);
 
 static inline void flush_icache_range(unsigned long start, unsigned long end)
 {
-	__flush_icache_range(start, end);
+	caches_clean_inval_pou(start, end);
 
 	/*
 	 * IPI all online CPUs so that they undergo a context synchronization
@@ -122,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
 
-static __always_inline void __flush_icache_all(void)
+static __always_inline void icache_inval_all_pou(void)
 {
 	if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
 		return;
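
Alongside the renaming, the D-cache entry points switch from (void *addr, size_t len) to an explicit [start, end) range of unsigned longs, as the efi, GIC and KVM hunks elsewhere in this merge show. A minimal sketch of such a caller conversion; the function name is hypothetical:

#include <linux/types.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical caller: previously  __flush_dcache_area(buf, len);
 * the replacement takes the region as [start, end) addresses.
 */
static void example_publish_buffer(void *buf, size_t len)
{
	dcache_clean_inval_poc((unsigned long)buf, (unsigned long)buf + len);
}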


@@ -12,26 +12,7 @@
 /*
  * Records attributes of an individual CPU.
  */
-struct cpuinfo_arm64 {
-	struct cpu	cpu;
-	struct kobject	kobj;
-	u32		reg_ctr;
-	u32		reg_cntfrq;
-	u32		reg_dczid;
-	u32		reg_midr;
-	u32		reg_revidr;
-
-	u64		reg_id_aa64dfr0;
-	u64		reg_id_aa64dfr1;
-	u64		reg_id_aa64isar0;
-	u64		reg_id_aa64isar1;
-	u64		reg_id_aa64mmfr0;
-	u64		reg_id_aa64mmfr1;
-	u64		reg_id_aa64mmfr2;
-	u64		reg_id_aa64pfr0;
-	u64		reg_id_aa64pfr1;
-	u64		reg_id_aa64zfr0;
-
+struct cpuinfo_32bit {
 	u32		reg_id_dfr0;
 	u32		reg_id_dfr1;
 	u32		reg_id_isar0;
@@ -54,6 +35,30 @@ struct cpuinfo_arm64 {
 	u32		reg_mvfr0;
 	u32		reg_mvfr1;
 	u32		reg_mvfr2;
+};
+
+struct cpuinfo_arm64 {
+	struct cpu	cpu;
+	struct kobject	kobj;
+	u64		reg_ctr;
+	u64		reg_cntfrq;
+	u64		reg_dczid;
+	u64		reg_midr;
+	u64		reg_revidr;
+	u64		reg_gmid;
+
+	u64		reg_id_aa64dfr0;
+	u64		reg_id_aa64dfr1;
+	u64		reg_id_aa64isar0;
+	u64		reg_id_aa64isar1;
+	u64		reg_id_aa64mmfr0;
+	u64		reg_id_aa64mmfr1;
+	u64		reg_id_aa64mmfr2;
+	u64		reg_id_aa64pfr0;
+	u64		reg_id_aa64pfr1;
+	u64		reg_id_aa64zfr0;
+
+	struct cpuinfo_32bit	aarch32;
 
 	/* pseudo-ZCR for recording maximum ZCR_EL1 LEN value: */
 	u64		reg_zcr;


@@ -619,6 +619,13 @@ static inline bool id_aa64pfr0_sve(u64 pfr0)
 	return val > 0;
 }
 
+static inline bool id_aa64pfr1_mte(u64 pfr1)
+{
+	u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT);
+
+	return val >= ID_AA64PFR1_MTE;
+}
+
 void __init setup_cpu_features(void);
 void check_local_cpu_capabilities(void);
 
@@ -630,9 +637,15 @@ static inline bool cpu_supports_mixed_endian_el0(void)
 	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
 }
 
+const struct cpumask *system_32bit_el0_cpumask(void);
+
+DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
+
 static inline bool system_supports_32bit_el0(void)
 {
-	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
+	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+	return static_branch_unlikely(&arm64_mismatched_32bit_el0) ||
+	       id_aa64pfr0_32bit_el0(pfr0);
 }
 
 static inline bool system_supports_4kb_granule(void)
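
These are the preparatory hooks for systems where only some CPUs implement 32-bit EL0: system_supports_32bit_el0() now also returns true on such mismatched systems, and system_32bit_el0_cpumask() identifies the capable CPUs. A minimal sketch of how a caller might combine the two; the helper name is hypothetical:

#include <linux/cpumask.h>
#include <asm/cpufeature.h>

/* Hypothetical check: can a 32-bit (compat) task run on this CPU? */
static bool example_cpu_can_run_compat(int cpu)
{
	if (!system_supports_32bit_el0())
		return false;

	/* On mismatched systems only a subset of CPUs offers 32-bit EL0. */
	return cpumask_test_cpu(cpu, system_32bit_el0_cpumask());
}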


@@ -18,4 +18,39 @@ static inline int arm_cpuidle_suspend(int index)
 	return -EOPNOTSUPP;
 }
 #endif
+
+#ifdef CONFIG_ARM64_PSEUDO_NMI
+#include <asm/arch_gicv3.h>
+
+struct arm_cpuidle_irq_context {
+	unsigned long pmr;
+	unsigned long daif_bits;
+};
+
+#define arm_cpuidle_save_irq_context(__c)				\
+	do {								\
+		struct arm_cpuidle_irq_context *c = __c;		\
+		if (system_uses_irq_prio_masking()) {			\
+			c->daif_bits = read_sysreg(daif);		\
+			write_sysreg(c->daif_bits | PSR_I_BIT | PSR_F_BIT, \
+				     daif);				\
+			c->pmr = gic_read_pmr();			\
+			gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); \
+		}							\
+	} while (0)
+
+#define arm_cpuidle_restore_irq_context(__c)				\
+	do {								\
+		struct arm_cpuidle_irq_context *c = __c;		\
+		if (system_uses_irq_prio_masking()) {			\
+			gic_write_pmr(c->pmr);				\
+			write_sysreg(c->daif_bits, daif);		\
+		}							\
+	} while (0)
+#else
+struct arm_cpuidle_irq_context { };
+
+#define arm_cpuidle_save_irq_context(c)		(void)c
+#define arm_cpuidle_restore_irq_context(c)	(void)c
+#endif
 #endif
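
These helpers fix resume from idle when pseudo-NMI is in use: the save macro masks DAIF and opens up PMR before the low-power entry, and the restore macro undoes it afterwards. The cpu_suspend() and PSCI patches in the shortlog use them this way; a minimal sketch with a hypothetical firmware-entry callback:

#include <asm/cpuidle.h>

/* Hypothetical idle wrapper showing the intended bracketing. */
static int example_enter_idle(int (*enter_lowest_state)(void))
{
	struct arm_cpuidle_irq_context context;
	int ret;

	arm_cpuidle_save_irq_context(&context);
	ret = enter_lowest_state();		/* e.g. a PSCI suspend call */
	arm_cpuidle_restore_irq_context(&context);

	return ret;
}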


@@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
 
 static inline void efi_capsule_flush_cache_range(void *addr, int size)
 {
-	__flush_dcache_area(addr, size);
+	dcache_clean_inval_poc((unsigned long)addr, (unsigned long)addr + size);
 }
 
 #endif /* _ASM_EFI_H */


@@ -31,20 +31,35 @@ static inline u32 disr_to_esr(u64 disr)
 	return esr;
 }
 
-asmlinkage void el1_sync_handler(struct pt_regs *regs);
-asmlinkage void el0_sync_handler(struct pt_regs *regs);
-asmlinkage void el0_sync_compat_handler(struct pt_regs *regs);
+asmlinkage void handle_bad_stack(struct pt_regs *regs);
 
-asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs);
-asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs);
+asmlinkage void el1t_64_sync_handler(struct pt_regs *regs);
+asmlinkage void el1t_64_irq_handler(struct pt_regs *regs);
+asmlinkage void el1t_64_fiq_handler(struct pt_regs *regs);
+asmlinkage void el1t_64_error_handler(struct pt_regs *regs);
+
+asmlinkage void el1h_64_sync_handler(struct pt_regs *regs);
+asmlinkage void el1h_64_irq_handler(struct pt_regs *regs);
+asmlinkage void el1h_64_fiq_handler(struct pt_regs *regs);
+asmlinkage void el1h_64_error_handler(struct pt_regs *regs);
+
+asmlinkage void el0t_64_sync_handler(struct pt_regs *regs);
+asmlinkage void el0t_64_irq_handler(struct pt_regs *regs);
+asmlinkage void el0t_64_fiq_handler(struct pt_regs *regs);
+asmlinkage void el0t_64_error_handler(struct pt_regs *regs);
+
+asmlinkage void el0t_32_sync_handler(struct pt_regs *regs);
+asmlinkage void el0t_32_irq_handler(struct pt_regs *regs);
+asmlinkage void el0t_32_fiq_handler(struct pt_regs *regs);
+asmlinkage void el0t_32_error_handler(struct pt_regs *regs);
+
+asmlinkage void call_on_irq_stack(struct pt_regs *regs,
+				  void (*func)(struct pt_regs *));
 asmlinkage void enter_from_user_mode(void);
 asmlinkage void exit_to_user_mode(void);
-void arm64_enter_nmi(struct pt_regs *regs);
-void arm64_exit_nmi(struct pt_regs *regs);
+
 void do_mem_abort(unsigned long far, unsigned int esr, struct pt_regs *regs);
 void do_undefinstr(struct pt_regs *regs);
 void do_bti(struct pt_regs *regs);
-asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
 void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
 			struct pt_regs *regs);
 void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
@@ -57,4 +72,7 @@ void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
 void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
+void do_serror(struct pt_regs *regs, unsigned int esr);
+void panic_bad_stack(struct pt_regs *regs, unsigned int esr, unsigned long far);
+
 #endif	/* __ASM_EXCEPTION_H */


@@ -69,7 +69,7 @@ static inline void *sve_pffr(struct thread_struct *thread)
 extern void sve_save_state(void *state, u32 *pfpsr);
 extern void sve_load_state(void const *state, u32 const *pfpsr,
 			   unsigned long vq_minus_1);
-extern void sve_flush_live(void);
+extern void sve_flush_live(unsigned long vq_minus_1);
 extern void sve_load_from_fpsimd_state(struct user_fpsimd_state const *state,
 				       unsigned long vq_minus_1);
 extern unsigned int sve_get_vl(void);


@@ -213,8 +213,10 @@
 		mov	v\nz\().16b, v\nz\().16b
 .endm
 
-.macro sve_flush
+.macro sve_flush_z
  _for n, 0, 31, _sve_flush_z	\n
+.endm
+.macro sve_flush_p_ffr
  _for n, 0, 15, _sve_pfalse	\n
 		_sve_wrffr	0
 .endm


@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __ASM_INSN_DEF_H
+#define __ASM_INSN_DEF_H
+
+/* A64 instructions are always 32 bits. */
+#define	AARCH64_INSN_SIZE		4
+
+#endif /* __ASM_INSN_DEF_H */


@@ -10,7 +10,7 @@
 #include <linux/build_bug.h>
 #include <linux/types.h>
 
-#include <asm/alternative.h>
+#include <asm/insn-def.h>
 
 #ifndef __ASSEMBLY__
 
 /*
@@ -30,6 +30,7 @@
  */
 enum aarch64_insn_encoding_class {
 	AARCH64_INSN_CLS_UNKNOWN,	/* UNALLOCATED */
+	AARCH64_INSN_CLS_SVE,		/* SVE instructions */
 	AARCH64_INSN_CLS_DP_IMM,	/* Data processing - immediate */
 	AARCH64_INSN_CLS_DP_REG,	/* Data processing - register */
 	AARCH64_INSN_CLS_DP_FPSIMD,	/* Data processing - SIMD and FP */
@@ -294,6 +295,12 @@ __AARCH64_INSN_FUNCS(adr,	0x9F000000, 0x10000000)
 __AARCH64_INSN_FUNCS(adrp,	0x9F000000, 0x90000000)
 __AARCH64_INSN_FUNCS(prfm,	0x3FC00000, 0x39800000)
 __AARCH64_INSN_FUNCS(prfm_lit,	0xFF000000, 0xD8000000)
+__AARCH64_INSN_FUNCS(store_imm,	0x3FC00000, 0x39000000)
+__AARCH64_INSN_FUNCS(load_imm,	0x3FC00000, 0x39400000)
+__AARCH64_INSN_FUNCS(store_pre,	0x3FE00C00, 0x38000C00)
+__AARCH64_INSN_FUNCS(load_pre,	0x3FE00C00, 0x38400C00)
+__AARCH64_INSN_FUNCS(store_post,	0x3FE00C00, 0x38000400)
+__AARCH64_INSN_FUNCS(load_post,	0x3FE00C00, 0x38400400)
 __AARCH64_INSN_FUNCS(str_reg,	0x3FE0EC00, 0x38206800)
 __AARCH64_INSN_FUNCS(ldadd,	0x3F20FC00, 0x38200000)
 __AARCH64_INSN_FUNCS(ldr_reg,	0x3FE0EC00, 0x38606800)
@@ -302,6 +309,8 @@ __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
 __AARCH64_INSN_FUNCS(exclusive,	0x3F800000, 0x08000000)
 __AARCH64_INSN_FUNCS(load_ex,	0x3F400000, 0x08400000)
 __AARCH64_INSN_FUNCS(store_ex,	0x3F400000, 0x08000000)
+__AARCH64_INSN_FUNCS(stp,	0x7FC00000, 0x29000000)
+__AARCH64_INSN_FUNCS(ldp,	0x7FC00000, 0x29400000)
 __AARCH64_INSN_FUNCS(stp_post,	0x7FC00000, 0x28800000)
 __AARCH64_INSN_FUNCS(ldp_post,	0x7FC00000, 0x28C00000)
 __AARCH64_INSN_FUNCS(stp_pre,	0x7FC00000, 0x29800000)
@@ -334,6 +343,7 @@ __AARCH64_INSN_FUNCS(rev64,	0x7FFFFC00, 0x5AC00C00)
 __AARCH64_INSN_FUNCS(and,	0x7F200000, 0x0A000000)
 __AARCH64_INSN_FUNCS(bic,	0x7F200000, 0x0A200000)
 __AARCH64_INSN_FUNCS(orr,	0x7F200000, 0x2A000000)
+__AARCH64_INSN_FUNCS(mov_reg,	0x7FE0FFE0, 0x2A0003E0)
 __AARCH64_INSN_FUNCS(orn,	0x7F200000, 0x2A200000)
 __AARCH64_INSN_FUNCS(eor,	0x7F200000, 0x4A000000)
 __AARCH64_INSN_FUNCS(eon,	0x7F200000, 0x4A200000)
@@ -368,6 +378,14 @@ __AARCH64_INSN_FUNCS(eret_auth,	0xFFFFFBFF, 0xD69F0BFF)
 __AARCH64_INSN_FUNCS(mrs,	0xFFF00000, 0xD5300000)
 __AARCH64_INSN_FUNCS(msr_imm,	0xFFF8F01F, 0xD500401F)
 __AARCH64_INSN_FUNCS(msr_reg,	0xFFF00000, 0xD5100000)
+__AARCH64_INSN_FUNCS(dmb,	0xFFFFF0FF, 0xD50330BF)
+__AARCH64_INSN_FUNCS(dsb_base,	0xFFFFF0FF, 0xD503309F)
+__AARCH64_INSN_FUNCS(dsb_nxs,	0xFFFFF3FF, 0xD503323F)
+__AARCH64_INSN_FUNCS(isb,	0xFFFFF0FF, 0xD50330DF)
+__AARCH64_INSN_FUNCS(sb,	0xFFFFFFFF, 0xD50330FF)
+__AARCH64_INSN_FUNCS(clrex,	0xFFFFF0FF, 0xD503305F)
+__AARCH64_INSN_FUNCS(ssbb,	0xFFFFFFFF, 0xD503309F)
+__AARCH64_INSN_FUNCS(pssbb,	0xFFFFFFFF, 0xD503349F)
 
 #undef	__AARCH64_INSN_FUNCS
 
@@ -379,8 +397,47 @@ static inline bool aarch64_insn_is_adr_adrp(u32 insn)
 	return aarch64_insn_is_adr(insn) || aarch64_insn_is_adrp(insn);
 }
 
-int aarch64_insn_read(void *addr, u32 *insnp);
-int aarch64_insn_write(void *addr, u32 insn);
+static inline bool aarch64_insn_is_dsb(u32 insn)
+{
+	return aarch64_insn_is_dsb_base(insn) || aarch64_insn_is_dsb_nxs(insn);
+}
+
+static inline bool aarch64_insn_is_barrier(u32 insn)
+{
+	return aarch64_insn_is_dmb(insn) || aarch64_insn_is_dsb(insn) ||
+	       aarch64_insn_is_isb(insn) || aarch64_insn_is_sb(insn) ||
+	       aarch64_insn_is_clrex(insn) || aarch64_insn_is_ssbb(insn) ||
+	       aarch64_insn_is_pssbb(insn);
+}
+
+static inline bool aarch64_insn_is_store_single(u32 insn)
+{
+	return aarch64_insn_is_store_imm(insn) ||
+	       aarch64_insn_is_store_pre(insn) ||
+	       aarch64_insn_is_store_post(insn);
+}
+
+static inline bool aarch64_insn_is_store_pair(u32 insn)
+{
+	return aarch64_insn_is_stp(insn) ||
+	       aarch64_insn_is_stp_pre(insn) ||
+	       aarch64_insn_is_stp_post(insn);
+}
+
+static inline bool aarch64_insn_is_load_single(u32 insn)
+{
+	return aarch64_insn_is_load_imm(insn) ||
+	       aarch64_insn_is_load_pre(insn) ||
+	       aarch64_insn_is_load_post(insn);
+}
+
+static inline bool aarch64_insn_is_load_pair(u32 insn)
+{
+	return aarch64_insn_is_ldp(insn) ||
+	       aarch64_insn_is_ldp_pre(insn) ||
+	       aarch64_insn_is_ldp_post(insn);
+}
+
 enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
 bool aarch64_insn_uses_literal(u32 insn);
 bool aarch64_insn_is_branch(u32 insn);
@@ -487,9 +544,6 @@ u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base,
 s32 aarch64_get_branch_offset(u32 insn);
 u32 aarch64_set_branch_offset(u32 insn, s32 offset);
 
-int aarch64_insn_patch_text_nosync(void *addr, u32 insn);
-int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);
-
 s32 aarch64_insn_adrp_get_offset(u32 insn);
 u32 aarch64_insn_adrp_set_offset(u32 insn, s32 offset);
 
@@ -506,6 +560,7 @@ u32 aarch32_insn_mcr_extract_crm(u32 insn);
 
 typedef bool (pstate_check_t)(unsigned long);
 extern pstate_check_t * const aarch32_opcode_cond_checks[16];
+
 #endif /* __ASSEMBLY__ */
 #endif	/* __ASM_INSN_H */
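
The added encodings and predicates above let decoding code classify an opcode without open-coding masks. A minimal sketch built only from the helpers declared in this header; the wrapper name is hypothetical:

#include <asm/insn.h>

/* Hypothetical predicate: is this a single or paired load/store? */
static bool example_insn_is_memory_access(u32 insn)
{
	return aarch64_insn_is_load_single(insn)  ||
	       aarch64_insn_is_load_pair(insn)    ||
	       aarch64_insn_is_store_single(insn) ||
	       aarch64_insn_is_store_pair(insn);
}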


@@ -18,9 +18,9 @@
  * 64K (section size = 512M).
  */
 #ifdef CONFIG_ARM64_4K_PAGES
-#define ARM64_SWAPPER_USES_SECTION_MAPS 1
+#define ARM64_KERNEL_USES_PMD_MAPS 1
 #else
-#define ARM64_SWAPPER_USES_SECTION_MAPS 0
+#define ARM64_KERNEL_USES_PMD_MAPS 0
 #endif
 
 /*
@@ -33,7 +33,7 @@
  * VA range, so pages required to map highest possible PA are reserved in all
  * cases.
  */
-#if ARM64_SWAPPER_USES_SECTION_MAPS
+#if ARM64_KERNEL_USES_PMD_MAPS
 #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
 #define IDMAP_PGTABLE_LEVELS	(ARM64_HW_PGTABLE_LEVELS(PHYS_MASK_SHIFT) - 1)
 #else
@@ -90,9 +90,9 @@
 #define IDMAP_DIR_SIZE		(IDMAP_PGTABLE_LEVELS * PAGE_SIZE)
 
 /* Initial memory map size */
-#if ARM64_SWAPPER_USES_SECTION_MAPS
-#define SWAPPER_BLOCK_SHIFT	SECTION_SHIFT
-#define SWAPPER_BLOCK_SIZE	SECTION_SIZE
+#if ARM64_KERNEL_USES_PMD_MAPS
+#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
+#define SWAPPER_BLOCK_SIZE	PMD_SIZE
 #define SWAPPER_TABLE_SHIFT	PUD_SHIFT
 #else
 #define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
@@ -100,16 +100,13 @@
 #define SWAPPER_TABLE_SHIFT	PMD_SHIFT
 #endif
 
-/* The size of the initial kernel direct mapping */
-#define SWAPPER_INIT_MAP_SIZE	(_AC(1, UL) << SWAPPER_TABLE_SHIFT)
-
 /*
  * Initial memory map attributes.
  */
 #define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
-#if ARM64_SWAPPER_USES_SECTION_MAPS
+#if ARM64_KERNEL_USES_PMD_MAPS
 #define SWAPPER_MM_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
 #else
 #define SWAPPER_MM_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
@@ -125,7 +122,7 @@
 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ARM64_MEMSTART_SHIFT		PUD_SHIFT
 #elif defined(CONFIG_ARM64_16K_PAGES)
-#define ARM64_MEMSTART_SHIFT		(PMD_SHIFT + 5)
+#define ARM64_MEMSTART_SHIFT		CONT_PMD_SHIFT
 #else
 #define ARM64_MEMSTART_SHIFT		PMD_SHIFT
 #endif


@@ -8,6 +8,7 @@
 #define __ARM_KVM_ASM_H__
 
 #include <asm/hyp_image.h>
+#include <asm/insn.h>
 #include <asm/virt.h>
 
 #define ARM_EXIT_WITH_SERROR_BIT  31


@@ -180,7 +180,8 @@ static inline void *__kvm_vector_slot2addr(void *base,
 
 struct kvm;
 
-#define kvm_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+#define kvm_flush_dcache_to_poc(a,l)	\
+	dcache_clean_inval_poc((unsigned long)(a), (unsigned long)(a)+(l))
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
@@ -208,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
 {
 	if (icache_is_aliasing()) {
 		/* any kind of VIPT cache */
-		__flush_icache_all();
+		icache_inval_all_pou();
 	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
 		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
 		void *va = page_address(pfn_to_page(pfn));
 
-		invalidate_icache_range((unsigned long)va,
-					(unsigned long)va + size);
+		icache_inval_pou((unsigned long)va,
+				 (unsigned long)va + size);
 	}
 }


@@ -56,8 +56,16 @@
 	SYM_FUNC_START_ALIAS(__pi_##x);	\
 	SYM_FUNC_START_WEAK(x)
 
+#define SYM_FUNC_START_WEAK_ALIAS_PI(x)	\
+	SYM_FUNC_START_ALIAS(__pi_##x);	\
+	SYM_START(x, SYM_L_WEAK, SYM_A_ALIGN)
+
 #define SYM_FUNC_END_PI(x)		\
 	SYM_FUNC_END(x);		\
 	SYM_FUNC_END_ALIAS(__pi_##x)
 
+#define SYM_FUNC_END_ALIAS_PI(x)	\
+	SYM_FUNC_END_ALIAS(x);		\
+	SYM_FUNC_END_ALIAS(__pi_##x)
+
 #endif


@@ -135,10 +135,8 @@
 #define MT_NORMAL		0
 #define MT_NORMAL_TAGGED	1
 #define MT_NORMAL_NC		2
-#define MT_NORMAL_WT		3
-#define MT_DEVICE_nGnRnE	4
-#define MT_DEVICE_nGnRE		5
-#define MT_DEVICE_GRE		6
+#define MT_DEVICE_nGnRnE	3
+#define MT_DEVICE_nGnRE		4
 
 /*
  * Memory types for Stage-2 translation


@@ -177,9 +177,9 @@ static inline void update_saved_ttbr0(struct task_struct *tsk,
 		return;
 
 	if (mm == &init_mm)
-		ttbr = __pa_symbol(reserved_pg_dir);
+		ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
 	else
-		ttbr = virt_to_phys(mm->pgd) | ASID(mm) << 48;
+		ttbr = phys_to_ttbr(virt_to_phys(mm->pgd)) | ASID(mm) << 48;
 
 	WRITE_ONCE(task_thread_info(tsk)->ttbr0, ttbr);
 }


@@ -1,7 +1,20 @@
-#ifdef CONFIG_ARM64_MODULE_PLTS
 SECTIONS {
+#ifdef CONFIG_ARM64_MODULE_PLTS
 	.plt 0 (NOLOAD) : { BYTE(0) }
 	.init.plt 0 (NOLOAD) : { BYTE(0) }
 	.text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
-}
 #endif
+
+#ifdef CONFIG_KASAN_SW_TAGS
+	/*
+	 * Outlined checks go into comdat-deduplicated sections named .text.hot.
+	 * Because they are in comdats they are not combined by the linker and
+	 * we otherwise end up with multiple sections with the same .text.hot
+	 * name in the .ko file. The kernel module loader warns if it sees
+	 * multiple sections with the same name so we use this sections
+	 * directive to force them into a single section and silence the
+	 * warning.
+	 */
+	.text.hot : { *(.text.hot) }
+#endif
+}


@@ -48,43 +48,84 @@ static inline u8 mte_get_random_tag(void)
 	return mte_get_ptr_tag(addr);
 }
 
+static inline u64 __stg_post(u64 p)
+{
+	asm volatile(__MTE_PREAMBLE "stg %0, [%0], #16"
+		     : "+r"(p)
+		     :
+		     : "memory");
+	return p;
+}
+
+static inline u64 __stzg_post(u64 p)
+{
+	asm volatile(__MTE_PREAMBLE "stzg %0, [%0], #16"
+		     : "+r"(p)
+		     :
+		     : "memory");
+	return p;
+}
+
+static inline void __dc_gva(u64 p)
+{
+	asm volatile(__MTE_PREAMBLE "dc gva, %0" : : "r"(p) : "memory");
+}
+
+static inline void __dc_gzva(u64 p)
+{
+	asm volatile(__MTE_PREAMBLE "dc gzva, %0" : : "r"(p) : "memory");
+}
+
 /*
  * Assign allocation tags for a region of memory based on the pointer tag.
  * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
- * size must be non-zero and MTE_GRANULE_SIZE aligned.
+ * size must be MTE_GRANULE_SIZE aligned.
  */
-static inline void mte_set_mem_tag_range(void *addr, size_t size,
-					 u8 tag, bool init)
+static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
+					 bool init)
 {
-	u64 curr, end;
+	u64 curr, mask, dczid_bs, end1, end2, end3;
 
-	if (!size)
-		return;
+	/* Read DC G(Z)VA block size from the system register. */
+	dczid_bs = 4ul << (read_cpuid(DCZID_EL0) & 0xf);
 
 	curr = (u64)__tag_set(addr, tag);
-	end = curr + size;
+	mask = dczid_bs - 1;
+
+	/* STG/STZG up to the end of the first block. */
+	end1 = curr | mask;
+	end3 = curr + size;
+
+	/* DC GVA / GZVA in [end1, end2) */
+	end2 = end3 & ~mask;
 
 	/*
-	 * 'asm volatile' is required to prevent the compiler to move
-	 * the statement outside of the loop.
+	 * The following code uses STG on the first DC GVA block even if the
+	 * start address is aligned - it appears to be faster than an alignment
+	 * check + conditional branch. Also, if the range size is at least 2 DC
+	 * GVA blocks, the first two loops can use post-condition to save one
+	 * branch each.
 	 */
-	if (init) {
-		do {
-			asm volatile(__MTE_PREAMBLE "stzg %0, [%0]"
-				     :
-				     : "r" (curr)
-				     : "memory");
-			curr += MTE_GRANULE_SIZE;
-		} while (curr != end);
-	} else {
-		do {
-			asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
-				     :
-				     : "r" (curr)
-				     : "memory");
-			curr += MTE_GRANULE_SIZE;
-		} while (curr != end);
-	}
+#define SET_MEMTAG_RANGE(stg_post, dc_gva)				\
+	do {								\
+		if (size >= 2 * dczid_bs) {				\
+			do {						\
+				curr = stg_post(curr);			\
+			} while (curr < end1);				\
+									\
+			do {						\
+				dc_gva(curr);				\
+				curr += dczid_bs;			\
+			} while (curr < end2);				\
+		}							\
+									\
+		while (curr < end3)					\
+			curr = stg_post(curr);				\
+	} while (0)
+
+	if (init)
+		SET_MEMTAG_RANGE(__stzg_post, __dc_gzva);
+	else
+		SET_MEMTAG_RANGE(__stg_post, __dc_gva);
+
+#undef SET_MEMTAG_RANGE
 }
 
 void mte_enable_kernel_sync(void);
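
The interface is unchanged by the DC GVA/GZVA optimisation: callers still pass an MTE_GRANULE_SIZE-aligned region, a tag and whether to zero-initialise it. A minimal sketch of a hypothetical caller tagging a single page:

#include <linux/mm.h>
#include <asm/mte-kasan.h>

/* Hypothetical: assign allocation tag 0x3 to a page and zero it. */
static void example_tag_page(struct page *page)
{
	void *addr = page_address(page);

	mte_set_mem_tag_range(addr, PAGE_SIZE, 0x3, true);
}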


@@ -37,6 +37,7 @@ void mte_free_tag_storage(char *storage);
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
 
+void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t *ptep, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
 void mte_thread_init_user(void);
@@ -53,6 +54,9 @@ int mte_ptrace_copy_tags(struct task_struct *child, long request,
 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0
 
+static inline void mte_zero_clear_page_tags(void *addr)
+{
+}
 static inline void mte_sync_tags(pte_t *ptep, pte_t pte)
 {
 }


@@ -13,6 +13,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/personality.h> /* for READ_IMPLIES_EXEC */
+#include <linux/types.h> /* for gfp_t */
 #include <asm/pgtable-types.h>
 
 struct page;
@@ -28,9 +29,12 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
+						unsigned long vaddr);
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
+
+void tag_clear_highpage(struct page *to);
+#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
 
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)


@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef	__ASM_PATCHING_H
+#define	__ASM_PATCHING_H
+
+#include <linux/types.h>
+
+int aarch64_insn_read(void *addr, u32 *insnp);
+int aarch64_insn_write(void *addr, u32 insn);
+
+int aarch64_insn_patch_text_nosync(void *addr, u32 insn);
+int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);
+
+#endif	/* __ASM_PATCHING_H */
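
This new header takes the text-patching prototypes out of insn.h as part of the instruction-decoding refactor, so patching callers now include it explicitly. A minimal sketch; the wrapper is hypothetical, while aarch64_insn_gen_nop() is the existing generator declared in insn.h:

#include <asm/insn.h>
#include <asm/patching.h>

/* Hypothetical: overwrite the instruction at addr with a NOP. */
static int example_patch_nop(void *addr)
{
	u32 nop = aarch64_insn_gen_nop();

	return aarch64_insn_patch_text_nosync(addr, nop);
}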


@@ -239,6 +239,11 @@
 /* PMMIR_EL1.SLOTS mask */
 #define ARMV8_PMU_SLOTS_MASK	0xff
 
+#define ARMV8_PMU_BUS_SLOTS_SHIFT	8
+#define ARMV8_PMU_BUS_SLOTS_MASK	0xff
+#define ARMV8_PMU_BUS_WIDTH_SHIFT	16
+#define ARMV8_PMU_BUS_WIDTH_MASK	0xf
+
 #ifdef CONFIG_PERF_EVENTS
 struct pt_regs;
 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
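
The new BUS_SLOTS and BUS_WIDTH definitions describe further fields of PMMIR_EL1, which the PMU driver work in this merge exposes as sysfs event attributes. A minimal sketch of decoding a raw PMMIR value with these constants; the helper itself is hypothetical:

#include <linux/types.h>
#include <asm/perf_event.h>

/* Hypothetical: split a raw PMMIR_EL1 value into its fields. */
static void example_decode_pmmir(u32 pmmir, u32 *slots, u32 *bus_slots,
				 u32 *bus_width)
{
	*slots     = pmmir & ARMV8_PMU_SLOTS_MASK;
	*bus_slots = (pmmir >> ARMV8_PMU_BUS_SLOTS_SHIFT) & ARMV8_PMU_BUS_SLOTS_MASK;
	*bus_width = (pmmir >> ARMV8_PMU_BUS_WIDTH_SHIFT) & ARMV8_PMU_BUS_WIDTH_MASK;
}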


@@ -71,13 +71,6 @@
 #define PGDIR_MASK		(~(PGDIR_SIZE-1))
 #define PTRS_PER_PGD		(1 << (VA_BITS - PGDIR_SHIFT))
 
-/*
- * Section address mask and size definitions.
- */
-#define SECTION_SHIFT		PMD_SHIFT
-#define SECTION_SIZE		(_AC(1, UL) << SECTION_SHIFT)
-#define SECTION_MASK		(~(SECTION_SIZE-1))
-
 /*
  * Contiguous page definitions.
  */


@@ -55,7 +55,6 @@ extern bool arm64_use_ng_mappings;
 #define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
 #define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
 #define PROT_NORMAL_NC		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
-#define PROT_NORMAL_WT		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_WT))
 #define PROT_NORMAL		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
 #define PROT_NORMAL_TAGGED	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_TAGGED))


@@ -511,13 +511,12 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 
 #define pmd_none(pmd)		(!pmd_val(pmd))
 
-#define pmd_bad(pmd)		(!(pmd_val(pmd) & PMD_TABLE_BIT))
-
 #define pmd_table(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
 				 PMD_TYPE_TABLE)
 #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
 				 PMD_TYPE_SECT)
 #define pmd_leaf(pmd)		pmd_sect(pmd)
+#define pmd_bad(pmd)		(!pmd_table(pmd))
 
 #define pmd_leaf_size(pmd)	(pmd_cont(pmd) ? CONT_PMD_SIZE : PMD_SIZE)
 #define pte_leaf_size(pte)	(pte_cont(pte) ? CONT_PTE_SIZE : PAGE_SIZE)
@@ -604,7 +603,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 	pr_err("%s:%d: bad pmd %016llx.\n", __FILE__, __LINE__, pmd_val(e))
 
 #define pud_none(pud)		(!pud_val(pud))
-#define pud_bad(pud)		(!(pud_val(pud) & PUD_TABLE_BIT))
+#define pud_bad(pud)		(!pud_table(pud))
 #define pud_present(pud)	pte_present(pud_pte(pud))
 #define pud_leaf(pud)		pud_sect(pud)
 #define pud_valid(pud)		pte_valid(pud_pte(pud))

@ -31,10 +31,6 @@ struct ptrauth_keys_user {
struct ptrauth_key apga; struct ptrauth_key apga;
}; };
struct ptrauth_keys_kernel {
struct ptrauth_key apia;
};
#define __ptrauth_key_install_nosync(k, v) \ #define __ptrauth_key_install_nosync(k, v) \
do { \ do { \
struct ptrauth_key __pki_v = (v); \ struct ptrauth_key __pki_v = (v); \
@ -42,6 +38,29 @@ do { \
write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \
} while (0) } while (0)
#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
struct ptrauth_keys_kernel {
struct ptrauth_key apia;
};
static __always_inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
{
if (system_supports_address_auth())
get_random_bytes(&keys->apia, sizeof(keys->apia));
}
static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)
{
if (!system_supports_address_auth())
return;
__ptrauth_key_install_nosync(APIA, keys->apia);
isb();
}
#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
static inline void ptrauth_keys_install_user(struct ptrauth_keys_user *keys) static inline void ptrauth_keys_install_user(struct ptrauth_keys_user *keys)
{ {
if (system_supports_address_auth()) { if (system_supports_address_auth()) {
@ -69,21 +88,6 @@ static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
ptrauth_keys_install_user(keys); ptrauth_keys_install_user(keys);
} }
static __always_inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
{
if (system_supports_address_auth())
get_random_bytes(&keys->apia, sizeof(keys->apia));
}
static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)
{
if (!system_supports_address_auth())
return;
__ptrauth_key_install_nosync(APIA, keys->apia);
isb();
}
extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg); extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
extern int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys, extern int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
@ -121,11 +125,6 @@ static __always_inline void ptrauth_enable(void)
#define ptrauth_thread_switch_user(tsk) \ #define ptrauth_thread_switch_user(tsk) \
ptrauth_keys_install_user(&(tsk)->thread.keys_user) ptrauth_keys_install_user(&(tsk)->thread.keys_user)
#define ptrauth_thread_init_kernel(tsk) \
ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
#define ptrauth_thread_switch_kernel(tsk) \
ptrauth_keys_switch_kernel(&(tsk)->thread.keys_kernel)
#else /* CONFIG_ARM64_PTR_AUTH */ #else /* CONFIG_ARM64_PTR_AUTH */
#define ptrauth_enable() #define ptrauth_enable()
#define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL)
@ -134,11 +133,19 @@ static __always_inline void ptrauth_enable(void)
#define ptrauth_strip_insn_pac(lr) (lr) #define ptrauth_strip_insn_pac(lr) (lr)
#define ptrauth_suspend_exit() #define ptrauth_suspend_exit()
#define ptrauth_thread_init_user() #define ptrauth_thread_init_user()
#define ptrauth_thread_init_kernel(tsk)
#define ptrauth_thread_switch_user(tsk) #define ptrauth_thread_switch_user(tsk)
#define ptrauth_thread_switch_kernel(tsk)
#endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* CONFIG_ARM64_PTR_AUTH */
#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
#define ptrauth_thread_init_kernel(tsk) \
ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
#define ptrauth_thread_switch_kernel(tsk) \
ptrauth_keys_switch_kernel(&(tsk)->thread.keys_kernel)
#else
#define ptrauth_thread_init_kernel(tsk)
#define ptrauth_thread_switch_kernel(tsk)
#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
#define PR_PAC_ENABLED_KEYS_MASK \ #define PR_PAC_ENABLED_KEYS_MASK \
(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY) (PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)

@ -148,8 +148,10 @@ struct thread_struct {
struct debug_info debug; /* debugging */ struct debug_info debug; /* debugging */
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
struct ptrauth_keys_user keys_user; struct ptrauth_keys_user keys_user;
#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
struct ptrauth_keys_kernel keys_kernel; struct ptrauth_keys_kernel keys_kernel;
#endif #endif
#endif
#ifdef CONFIG_ARM64_MTE #ifdef CONFIG_ARM64_MTE
u64 gcr_user_excl; u64 gcr_user_excl;
#endif #endif
@ -257,8 +259,6 @@ void set_task_sctlr_el1(u64 sctlr);
extern struct task_struct *cpu_switch_to(struct task_struct *prev, extern struct task_struct *cpu_switch_to(struct task_struct *prev,
struct task_struct *next); struct task_struct *next);
asmlinkage void arm64_preempt_schedule_irq(void);
#define task_pt_regs(p) \ #define task_pt_regs(p) \
((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1) ((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
@ -329,13 +329,13 @@ long get_tagged_addr_ctrl(struct task_struct *task);
* of header definitions for the use of task_stack_page. * of header definitions for the use of task_stack_page.
*/ */
#define current_top_of_stack() \ #define current_top_of_stack() \
({ \ ({ \
struct stack_info _info; \ struct stack_info _info; \
BUG_ON(!on_accessible_stack(current, current_stack_pointer, &_info)); \ BUG_ON(!on_accessible_stack(current, current_stack_pointer, 1, &_info)); \
_info.high; \ _info.high; \
}) })
#define on_thread_stack() (on_task_stack(current, current_stack_pointer, NULL)) #define on_thread_stack() (on_task_stack(current, current_stack_pointer, 1, NULL))
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif /* __ASM_PROCESSOR_H */ #endif /* __ASM_PROCESSOR_H */

@ -9,18 +9,18 @@
#ifdef CONFIG_SHADOW_CALL_STACK #ifdef CONFIG_SHADOW_CALL_STACK
scs_sp .req x18 scs_sp .req x18
.macro scs_load tsk, tmp .macro scs_load tsk
ldr scs_sp, [\tsk, #TSK_TI_SCS_SP] ldr scs_sp, [\tsk, #TSK_TI_SCS_SP]
.endm .endm
.macro scs_save tsk, tmp .macro scs_save tsk
str scs_sp, [\tsk, #TSK_TI_SCS_SP] str scs_sp, [\tsk, #TSK_TI_SCS_SP]
.endm .endm
#else #else
.macro scs_load tsk, tmp .macro scs_load tsk
.endm .endm
.macro scs_save tsk, tmp .macro scs_save tsk
.endm .endm
#endif /* CONFIG_SHADOW_CALL_STACK */ #endif /* CONFIG_SHADOW_CALL_STACK */

@ -37,13 +37,17 @@ struct sdei_registered_event;
asmlinkage unsigned long __sdei_handler(struct pt_regs *regs, asmlinkage unsigned long __sdei_handler(struct pt_regs *regs,
struct sdei_registered_event *arg); struct sdei_registered_event *arg);
unsigned long do_sdei_event(struct pt_regs *regs,
struct sdei_registered_event *arg);
unsigned long sdei_arch_get_entry_point(int conduit); unsigned long sdei_arch_get_entry_point(int conduit);
#define sdei_arch_get_entry_point(x) sdei_arch_get_entry_point(x) #define sdei_arch_get_entry_point(x) sdei_arch_get_entry_point(x)
struct stack_info; struct stack_info;
bool _on_sdei_stack(unsigned long sp, struct stack_info *info); bool _on_sdei_stack(unsigned long sp, unsigned long size,
static inline bool on_sdei_stack(unsigned long sp, struct stack_info *info);
static inline bool on_sdei_stack(unsigned long sp, unsigned long size,
struct stack_info *info) struct stack_info *info)
{ {
if (!IS_ENABLED(CONFIG_VMAP_STACK)) if (!IS_ENABLED(CONFIG_VMAP_STACK))
@ -51,7 +55,7 @@ static inline bool on_sdei_stack(unsigned long sp,
if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE)) if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE))
return false; return false;
if (in_nmi()) if (in_nmi())
return _on_sdei_stack(sp, info); return _on_sdei_stack(sp, size, info);
return false; return false;
} }

@ -73,12 +73,10 @@ asmlinkage void secondary_start_kernel(void);
/* /*
* Initial data for bringing up a secondary CPU. * Initial data for bringing up a secondary CPU.
* @stack - sp for the secondary CPU
* @status - Result passed back from the secondary CPU to * @status - Result passed back from the secondary CPU to
* indicate failure. * indicate failure.
*/ */
struct secondary_data { struct secondary_data {
void *stack;
struct task_struct *task; struct task_struct *task;
long status; long status;
}; };

@ -69,14 +69,14 @@ extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
DECLARE_PER_CPU(unsigned long *, irq_stack_ptr); DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
static inline bool on_stack(unsigned long sp, unsigned long low, static inline bool on_stack(unsigned long sp, unsigned long size,
unsigned long high, enum stack_type type, unsigned long low, unsigned long high,
struct stack_info *info) enum stack_type type, struct stack_info *info)
{ {
if (!low) if (!low)
return false; return false;
if (sp < low || sp >= high) if (sp < low || sp + size < sp || sp + size > high)
return false; return false;
if (info) { if (info) {
@ -87,38 +87,38 @@ static inline bool on_stack(unsigned long sp, unsigned long low,
return true; return true;
} }
static inline bool on_irq_stack(unsigned long sp, static inline bool on_irq_stack(unsigned long sp, unsigned long size,
struct stack_info *info) struct stack_info *info)
{ {
unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr); unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);
unsigned long high = low + IRQ_STACK_SIZE; unsigned long high = low + IRQ_STACK_SIZE;
return on_stack(sp, low, high, STACK_TYPE_IRQ, info); return on_stack(sp, size, low, high, STACK_TYPE_IRQ, info);
} }
static inline bool on_task_stack(const struct task_struct *tsk, static inline bool on_task_stack(const struct task_struct *tsk,
unsigned long sp, unsigned long sp, unsigned long size,
struct stack_info *info) struct stack_info *info)
{ {
unsigned long low = (unsigned long)task_stack_page(tsk); unsigned long low = (unsigned long)task_stack_page(tsk);
unsigned long high = low + THREAD_SIZE; unsigned long high = low + THREAD_SIZE;
return on_stack(sp, low, high, STACK_TYPE_TASK, info); return on_stack(sp, size, low, high, STACK_TYPE_TASK, info);
} }
#ifdef CONFIG_VMAP_STACK #ifdef CONFIG_VMAP_STACK
DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack); DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
static inline bool on_overflow_stack(unsigned long sp, static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
struct stack_info *info) struct stack_info *info)
{ {
unsigned long low = (unsigned long)raw_cpu_ptr(overflow_stack); unsigned long low = (unsigned long)raw_cpu_ptr(overflow_stack);
unsigned long high = low + OVERFLOW_STACK_SIZE; unsigned long high = low + OVERFLOW_STACK_SIZE;
return on_stack(sp, low, high, STACK_TYPE_OVERFLOW, info); return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
} }
#else #else
static inline bool on_overflow_stack(unsigned long sp, static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
struct stack_info *info) { return false; } struct stack_info *info) { return false; }
#endif #endif
@ -128,21 +128,21 @@ static inline bool on_overflow_stack(unsigned long sp,
* context. * context.
*/ */
static inline bool on_accessible_stack(const struct task_struct *tsk, static inline bool on_accessible_stack(const struct task_struct *tsk,
unsigned long sp, unsigned long sp, unsigned long size,
struct stack_info *info) struct stack_info *info)
{ {
if (info) if (info)
info->type = STACK_TYPE_UNKNOWN; info->type = STACK_TYPE_UNKNOWN;
if (on_task_stack(tsk, sp, info)) if (on_task_stack(tsk, sp, size, info))
return true; return true;
if (tsk != current || preemptible()) if (tsk != current || preemptible())
return false; return false;
if (on_irq_stack(sp, info)) if (on_irq_stack(sp, size, info))
return true; return true;
if (on_overflow_stack(sp, info)) if (on_overflow_stack(sp, size, info))
return true; return true;
if (on_sdei_stack(sp, info)) if (on_sdei_stack(sp, size, info))
return true; return true;
return false; return false;
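
The hunk above threads an object size through each of the on_*_stack() helpers, so the test becomes "does [sp, sp + size) lie within the stack" rather than "is sp on the stack". The extra sp + size < sp term guards the unsigned addition against wraparound; a small illustration with made-up values:

#include <linux/types.h>

static bool stack_range_check_demo(void)
{
	unsigned long sp   = ~0UL - 8;			/* bogus frame address */
	unsigned long size = 32;			/* object being accessed */
	unsigned long low  = 0xffff800010000000UL;	/* hypothetical stack base */
	unsigned long high = low + 0x4000;		/* hypothetical 16K stack */
	bool naive, safe;

	naive = sp >= low && sp + size <= high;			/* fooled: the sum wraps to 23 */
	safe  = sp >= low && sp + size >= sp && sp + size <= high;	/* correctly false */

	return naive != safe;	/* true: only the guarded form rejects the range */
}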

@ -703,9 +703,7 @@
/* MAIR_ELx memory attributes (used by Linux) */ /* MAIR_ELx memory attributes (used by Linux) */
#define MAIR_ATTR_DEVICE_nGnRnE UL(0x00) #define MAIR_ATTR_DEVICE_nGnRnE UL(0x00)
#define MAIR_ATTR_DEVICE_nGnRE UL(0x04) #define MAIR_ATTR_DEVICE_nGnRE UL(0x04)
#define MAIR_ATTR_DEVICE_GRE UL(0x0c)
#define MAIR_ATTR_NORMAL_NC UL(0x44) #define MAIR_ATTR_NORMAL_NC UL(0x44)
#define MAIR_ATTR_NORMAL_WT UL(0xbb)
#define MAIR_ATTR_NORMAL_TAGGED UL(0xf0) #define MAIR_ATTR_NORMAL_TAGGED UL(0xf0)
#define MAIR_ATTR_NORMAL UL(0xff) #define MAIR_ATTR_NORMAL UL(0xff)
#define MAIR_ATTR_MASK UL(0xff) #define MAIR_ATTR_MASK UL(0xff)

@ -28,6 +28,10 @@ static void tlb_flush(struct mmu_gather *tlb);
*/ */
static inline int tlb_get_level(struct mmu_gather *tlb) static inline int tlb_get_level(struct mmu_gather *tlb)
{ {
/* The TTL field is only valid for the leaf entry. */
if (tlb->freed_tables)
return 0;
if (tlb->cleared_ptes && !(tlb->cleared_pmds || if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
tlb->cleared_puds || tlb->cleared_puds ||
tlb->cleared_p4ds)) tlb->cleared_p4ds))

@ -14,15 +14,22 @@ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_syscall.o = -fstack-protector -fstack-protector-strong CFLAGS_REMOVE_syscall.o = -fstack-protector -fstack-protector-strong
CFLAGS_syscall.o += -fno-stack-protector CFLAGS_syscall.o += -fno-stack-protector
# It's not safe to invoke KCOV when portions of the kernel environment aren't
# available or are out-of-sync with HW state. Since `noinstr` doesn't always
# inhibit KCOV instrumentation, disable it for the entire compilation unit.
KCOV_INSTRUMENT_entry.o := n
KCOV_INSTRUMENT_idle.o := n
# Object file lists. # Object file lists.
obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
entry-common.o entry-fpsimd.o process.o ptrace.o \ entry-common.o entry-fpsimd.o process.o ptrace.o \
setup.o signal.o sys.o stacktrace.o time.o traps.o \ setup.o signal.o sys.o stacktrace.o time.o traps.o \
io.o vdso.o hyp-stub.o psci.o cpu_ops.o insn.o \ io.o vdso.o hyp-stub.o psci.o cpu_ops.o \
return_address.o cpuinfo.o cpu_errata.o \ return_address.o cpuinfo.o cpu_errata.o \
cpufeature.o alternative.o cacheinfo.o \ cpufeature.o alternative.o cacheinfo.o \
smp.o smp_spin_table.o topology.o smccc-call.o \ smp.o smp_spin_table.o topology.o smccc-call.o \
syscall.o proton-pack.o idreg-override.o syscall.o proton-pack.o idreg-override.o idle.o \
patching.o
targets += efi-entry.o targets += efi-entry.o

@ -239,6 +239,18 @@ done:
} }
} }
static pgprot_t __acpi_get_writethrough_mem_attribute(void)
{
/*
* Although UEFI specifies the use of Normal Write-through for
* EFI_MEMORY_WT, it is seldom used in practice and not implemented
* by most (all?) CPUs. Rather than allocate a MAIR just for this
* purpose, emit a warning and use Normal Non-cacheable instead.
*/
pr_warn_once("No MAIR allocation for EFI_MEMORY_WT; treating as Normal Non-cacheable\n");
return __pgprot(PROT_NORMAL_NC);
}
pgprot_t __acpi_get_mem_attribute(phys_addr_t addr) pgprot_t __acpi_get_mem_attribute(phys_addr_t addr)
{ {
/* /*
@ -246,7 +258,7 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr)
* types" of UEFI 2.5 section 2.3.6.1, each EFI memory type is * types" of UEFI 2.5 section 2.3.6.1, each EFI memory type is
* mapped to a corresponding MAIR attribute encoding. * mapped to a corresponding MAIR attribute encoding.
* The EFI memory attribute advises all possible capabilities * The EFI memory attribute advises all possible capabilities
* of a memory region. We use the most efficient capability. * of a memory region.
*/ */
u64 attr; u64 attr;
@ -254,10 +266,10 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr)
attr = efi_mem_attributes(addr); attr = efi_mem_attributes(addr);
if (attr & EFI_MEMORY_WB) if (attr & EFI_MEMORY_WB)
return PAGE_KERNEL; return PAGE_KERNEL;
if (attr & EFI_MEMORY_WT)
return __pgprot(PROT_NORMAL_WT);
if (attr & EFI_MEMORY_WC) if (attr & EFI_MEMORY_WC)
return __pgprot(PROT_NORMAL_NC); return __pgprot(PROT_NORMAL_NC);
if (attr & EFI_MEMORY_WT)
return __acpi_get_writethrough_mem_attribute();
return __pgprot(PROT_DEVICE_nGnRnE); return __pgprot(PROT_DEVICE_nGnRnE);
} }
@ -340,10 +352,10 @@ void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
default: default:
if (region->attribute & EFI_MEMORY_WB) if (region->attribute & EFI_MEMORY_WB)
prot = PAGE_KERNEL; prot = PAGE_KERNEL;
else if (region->attribute & EFI_MEMORY_WT)
prot = __pgprot(PROT_NORMAL_WT);
else if (region->attribute & EFI_MEMORY_WC) else if (region->attribute & EFI_MEMORY_WC)
prot = __pgprot(PROT_NORMAL_NC); prot = __pgprot(PROT_NORMAL_NC);
else if (region->attribute & EFI_MEMORY_WT)
prot = __acpi_get_writethrough_mem_attribute();
} }
} }
return __ioremap(phys, size, prot); return __ioremap(phys, size, prot);
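
With the Normal Write-through memory type removed, the checks above are reordered so that EFI_MEMORY_WC takes precedence over EFI_MEMORY_WT, and a WT-only region falls back to Normal Non-cacheable through the new warning helper. A condensed sketch of the resulting precedence, using a made-up attribute word (acpi.c context, since the fallback helper is static there):

static pgprot_t example_efi_prot(void)
{
	u64 attr = EFI_MEMORY_WC | EFI_MEMORY_WT;	/* illustrative value only */

	if (attr & EFI_MEMORY_WB)
		return PAGE_KERNEL;
	if (attr & EFI_MEMORY_WC)
		return __pgprot(PROT_NORMAL_NC);	/* taken here: WC wins over WT */
	if (attr & EFI_MEMORY_WT)
		return __acpi_get_writethrough_mem_attribute();
	return __pgprot(PROT_DEVICE_nGnRnE);
}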

@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
*/ */
if (!is_module) { if (!is_module) {
dsb(ish); dsb(ish);
__flush_icache_all(); icache_inval_all_pou();
isb(); isb();
/* Ignore ARM64_CB bit from feature mask */ /* Ignore ARM64_CB bit from feature mask */

@ -27,6 +27,7 @@
int main(void) int main(void)
{ {
DEFINE(TSK_ACTIVE_MM, offsetof(struct task_struct, active_mm)); DEFINE(TSK_ACTIVE_MM, offsetof(struct task_struct, active_mm));
DEFINE(TSK_CPU, offsetof(struct task_struct, cpu));
BLANK(); BLANK();
DEFINE(TSK_TI_FLAGS, offsetof(struct task_struct, thread_info.flags)); DEFINE(TSK_TI_FLAGS, offsetof(struct task_struct, thread_info.flags));
DEFINE(TSK_TI_PREEMPT, offsetof(struct task_struct, thread_info.preempt_count)); DEFINE(TSK_TI_PREEMPT, offsetof(struct task_struct, thread_info.preempt_count));
@ -46,6 +47,8 @@ int main(void)
DEFINE(THREAD_SCTLR_USER, offsetof(struct task_struct, thread.sctlr_user)); DEFINE(THREAD_SCTLR_USER, offsetof(struct task_struct, thread.sctlr_user));
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user)); DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user));
#endif
#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
DEFINE(THREAD_KEYS_KERNEL, offsetof(struct task_struct, thread.keys_kernel)); DEFINE(THREAD_KEYS_KERNEL, offsetof(struct task_struct, thread.keys_kernel));
#endif #endif
#ifdef CONFIG_ARM64_MTE #ifdef CONFIG_ARM64_MTE
@ -99,7 +102,6 @@ int main(void)
DEFINE(SOFTIRQ_SHIFT, SOFTIRQ_SHIFT); DEFINE(SOFTIRQ_SHIFT, SOFTIRQ_SHIFT);
DEFINE(IRQ_CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t, __softirq_pending)); DEFINE(IRQ_CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t, __softirq_pending));
BLANK(); BLANK();
DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack));
DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task)); DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task));
BLANK(); BLANK();
DEFINE(FTR_OVR_VAL_OFFSET, offsetof(struct arm64_ftr_override, val)); DEFINE(FTR_OVR_VAL_OFFSET, offsetof(struct arm64_ftr_override, val));
@ -138,6 +140,15 @@ int main(void)
DEFINE(ARM_SMCCC_RES_X2_OFFS, offsetof(struct arm_smccc_res, a2)); DEFINE(ARM_SMCCC_RES_X2_OFFS, offsetof(struct arm_smccc_res, a2));
DEFINE(ARM_SMCCC_QUIRK_ID_OFFS, offsetof(struct arm_smccc_quirk, id)); DEFINE(ARM_SMCCC_QUIRK_ID_OFFS, offsetof(struct arm_smccc_quirk, id));
DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS, offsetof(struct arm_smccc_quirk, state)); DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS, offsetof(struct arm_smccc_quirk, state));
DEFINE(ARM_SMCCC_1_2_REGS_X0_OFFS, offsetof(struct arm_smccc_1_2_regs, a0));
DEFINE(ARM_SMCCC_1_2_REGS_X2_OFFS, offsetof(struct arm_smccc_1_2_regs, a2));
DEFINE(ARM_SMCCC_1_2_REGS_X4_OFFS, offsetof(struct arm_smccc_1_2_regs, a4));
DEFINE(ARM_SMCCC_1_2_REGS_X6_OFFS, offsetof(struct arm_smccc_1_2_regs, a6));
DEFINE(ARM_SMCCC_1_2_REGS_X8_OFFS, offsetof(struct arm_smccc_1_2_regs, a8));
DEFINE(ARM_SMCCC_1_2_REGS_X10_OFFS, offsetof(struct arm_smccc_1_2_regs, a10));
DEFINE(ARM_SMCCC_1_2_REGS_X12_OFFS, offsetof(struct arm_smccc_1_2_regs, a12));
DEFINE(ARM_SMCCC_1_2_REGS_X14_OFFS, offsetof(struct arm_smccc_1_2_regs, a14));
DEFINE(ARM_SMCCC_1_2_REGS_X16_OFFS, offsetof(struct arm_smccc_1_2_regs, a16));
BLANK(); BLANK();
DEFINE(HIBERN_PBE_ORIG, offsetof(struct pbe, orig_address)); DEFINE(HIBERN_PBE_ORIG, offsetof(struct pbe, orig_address));
DEFINE(HIBERN_PBE_ADDR, offsetof(struct pbe, address)); DEFINE(HIBERN_PBE_ADDR, offsetof(struct pbe, address));
@ -153,7 +164,9 @@ int main(void)
#endif #endif
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
DEFINE(PTRAUTH_USER_KEY_APIA, offsetof(struct ptrauth_keys_user, apia)); DEFINE(PTRAUTH_USER_KEY_APIA, offsetof(struct ptrauth_keys_user, apia));
#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
DEFINE(PTRAUTH_KERNEL_KEY_APIA, offsetof(struct ptrauth_keys_kernel, apia)); DEFINE(PTRAUTH_KERNEL_KEY_APIA, offsetof(struct ptrauth_keys_kernel, apia));
#endif
BLANK(); BLANK();
#endif #endif
return 0; return 0;
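
The ARM_SMCCC_1_2_REGS_X*_OFFS constants are consumed by the SMCCC v1.2 call wrappers, which marshal x0-x17 to and from memory in register pairs. They index struct arm_smccc_1_2_regs from <linux/arm-smccc.h>; as the a0/a2/.../a16 members referenced above suggest, it is one unsigned long per register along these lines:

/* Shape implied by the offsets above: a0-a17 mirror x0-x17 of the SMCCC
 * v1.2 calling convention, laid out consecutively so the assembly can
 * operate on consecutive register pairs. */
struct arm_smccc_1_2_regs {
	unsigned long a0,  a1,  a2,  a3,  a4,  a5;
	unsigned long a6,  a7,  a8,  a9,  a10, a11;
	unsigned long a12, a13, a14, a15, a16, a17;
};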

@ -76,6 +76,7 @@
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/cpu_ops.h> #include <asm/cpu_ops.h>
#include <asm/fpsimd.h> #include <asm/fpsimd.h>
#include <asm/insn.h>
#include <asm/kvm_host.h> #include <asm/kvm_host.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
#include <asm/mte.h> #include <asm/mte.h>
@ -107,6 +108,24 @@ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
bool arm64_use_ng_mappings = false; bool arm64_use_ng_mappings = false;
EXPORT_SYMBOL(arm64_use_ng_mappings); EXPORT_SYMBOL(arm64_use_ng_mappings);
/*
* Permit PER_LINUX32 and execve() of 32-bit binaries even if not all CPUs
* support it?
*/
static bool __read_mostly allow_mismatched_32bit_el0;
/*
* Static branch enabled only if allow_mismatched_32bit_el0 is set and we have
* seen at least one CPU capable of 32-bit EL0.
*/
DEFINE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
/*
* Mask of CPUs supporting 32-bit EL0.
* Only valid if arm64_mismatched_32bit_el0 is enabled.
*/
static cpumask_var_t cpu_32bit_el0_mask __cpumask_var_read_mostly;
/* /*
* Flag to indicate if we have computed the system wide * Flag to indicate if we have computed the system wide
* capabilities based on the boot time active CPUs. This * capabilities based on the boot time active CPUs. This
@ -400,6 +419,11 @@ static const struct arm64_ftr_bits ftr_dczid[] = {
ARM64_FTR_END, ARM64_FTR_END,
}; };
static const struct arm64_ftr_bits ftr_gmid[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, SYS_GMID_EL1_BS_SHIFT, 4, 0),
ARM64_FTR_END,
};
static const struct arm64_ftr_bits ftr_id_isar0[] = { static const struct arm64_ftr_bits ftr_id_isar0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
@ -617,6 +641,9 @@ static const struct __ftr_reg_entry {
/* Op1 = 0, CRn = 1, CRm = 2 */ /* Op1 = 0, CRn = 1, CRm = 2 */
ARM64_FTR_REG(SYS_ZCR_EL1, ftr_zcr), ARM64_FTR_REG(SYS_ZCR_EL1, ftr_zcr),
/* Op1 = 1, CRn = 0, CRm = 0 */
ARM64_FTR_REG(SYS_GMID_EL1, ftr_gmid),
/* Op1 = 3, CRn = 0, CRm = 0 */ /* Op1 = 3, CRn = 0, CRm = 0 */
{ SYS_CTR_EL0, &arm64_ftr_reg_ctrel0 }, { SYS_CTR_EL0, &arm64_ftr_reg_ctrel0 },
ARM64_FTR_REG(SYS_DCZID_EL0, ftr_dczid), ARM64_FTR_REG(SYS_DCZID_EL0, ftr_dczid),
@ -767,7 +794,7 @@ static void __init sort_ftr_regs(void)
* Any bits that are not covered by an arm64_ftr_bits entry are considered * Any bits that are not covered by an arm64_ftr_bits entry are considered
* RES0 for the system-wide value, and must strictly match. * RES0 for the system-wide value, and must strictly match.
*/ */
static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new) static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
{ {
u64 val = 0; u64 val = 0;
u64 strict_mask = ~0x0ULL; u64 strict_mask = ~0x0ULL;
@ -863,6 +890,31 @@ static void __init init_cpu_hwcaps_indirect_list(void)
static void __init setup_boot_cpu_capabilities(void); static void __init setup_boot_cpu_capabilities(void);
static void init_32bit_cpu_features(struct cpuinfo_32bit *info)
{
init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1);
init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, info->reg_id_isar6);
init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4);
init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5);
init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2);
init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
}
void __init init_cpu_features(struct cpuinfo_arm64 *info) void __init init_cpu_features(struct cpuinfo_arm64 *info)
{ {
/* Before we start using the tables, make sure it is sorted */ /* Before we start using the tables, make sure it is sorted */
@ -882,35 +934,17 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1); init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
init_cpu_ftr_reg(SYS_ID_AA64ZFR0_EL1, info->reg_id_aa64zfr0); init_cpu_ftr_reg(SYS_ID_AA64ZFR0_EL1, info->reg_id_aa64zfr0);
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) { if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0); init_32bit_cpu_features(&info->aarch32);
init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1);
init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, info->reg_id_isar6);
init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4);
init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5);
init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2);
init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
}
if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) { if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr); init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
sve_init_vq_map(); sve_init_vq_map();
} }
if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
init_cpu_ftr_reg(SYS_GMID_EL1, info->reg_gmid);
/* /*
* Initialize the indirect array of CPU hwcaps capabilities pointers * Initialize the indirect array of CPU hwcaps capabilities pointers
* before we handle the boot CPU below. * before we handle the boot CPU below.
@ -975,20 +1009,28 @@ static void relax_cpu_ftr_reg(u32 sys_id, int field)
WARN_ON(!ftrp->width); WARN_ON(!ftrp->width);
} }
static int update_32bit_cpu_features(int cpu, struct cpuinfo_arm64 *info, static void lazy_init_32bit_cpu_features(struct cpuinfo_arm64 *info,
struct cpuinfo_arm64 *boot) struct cpuinfo_arm64 *boot)
{
static bool boot_cpu_32bit_regs_overridden = false;
if (!allow_mismatched_32bit_el0 || boot_cpu_32bit_regs_overridden)
return;
if (id_aa64pfr0_32bit_el0(boot->reg_id_aa64pfr0))
return;
boot->aarch32 = info->aarch32;
init_32bit_cpu_features(&boot->aarch32);
boot_cpu_32bit_regs_overridden = true;
}
static int update_32bit_cpu_features(int cpu, struct cpuinfo_32bit *info,
struct cpuinfo_32bit *boot)
{ {
int taint = 0; int taint = 0;
u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
/*
* If we don't have AArch32 at all then skip the checks entirely
* as the register values may be UNKNOWN and we're not going to be
* using them for anything.
*/
if (!id_aa64pfr0_32bit_el0(pfr0))
return taint;
/* /*
* If we don't have AArch32 at EL1, then relax the strictness of * If we don't have AArch32 at EL1, then relax the strictness of
* EL1-dependent register fields to avoid spurious sanity check fails. * EL1-dependent register fields to avoid spurious sanity check fails.
@ -1135,10 +1177,29 @@ void update_cpu_features(int cpu,
} }
/* /*
* The kernel uses the LDGM/STGM instructions and the number of tags
* they read/write depends on the GMID_EL1.BS field. Check that the
* value is the same on all CPUs.
*/
if (IS_ENABLED(CONFIG_ARM64_MTE) &&
id_aa64pfr1_mte(info->reg_id_aa64pfr1)) {
taint |= check_update_ftr_reg(SYS_GMID_EL1, cpu,
info->reg_gmid, boot->reg_gmid);
}
/*
* If we don't have AArch32 at all then skip the checks entirely
* as the register values may be UNKNOWN and we're not going to be
* using them for anything.
*
* This relies on a sanitised view of the AArch64 ID registers * This relies on a sanitised view of the AArch64 ID registers
* (e.g. SYS_ID_AA64PFR0_EL1), so we call it last. * (e.g. SYS_ID_AA64PFR0_EL1), so we call it last.
*/ */
taint |= update_32bit_cpu_features(cpu, info, boot); if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
lazy_init_32bit_cpu_features(info, boot);
taint |= update_32bit_cpu_features(cpu, &info->aarch32,
&boot->aarch32);
}
/* /*
* Mismatched CPU features are a recipe for disaster. Don't even * Mismatched CPU features are a recipe for disaster. Don't even
@ -1248,6 +1309,28 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
return feature_matches(val, entry); return feature_matches(val, entry);
} }
const struct cpumask *system_32bit_el0_cpumask(void)
{
if (!system_supports_32bit_el0())
return cpu_none_mask;
if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
return cpu_32bit_el0_mask;
return cpu_possible_mask;
}
static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
{
if (!has_cpuid_feature(entry, scope))
return allow_mismatched_32bit_el0;
if (scope == SCOPE_SYSTEM)
pr_info("detected: 32-bit EL0 Support\n");
return true;
}
static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope) static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope)
{ {
bool has_sre; bool has_sre;
@ -1866,10 +1949,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = cpu_copy_el2regs, .cpu_enable = cpu_copy_el2regs,
}, },
{ {
.desc = "32-bit EL0 Support", .capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE,
.capability = ARM64_HAS_32BIT_EL0,
.type = ARM64_CPUCAP_SYSTEM_FEATURE, .type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature, .matches = has_32bit_el0,
.sys_reg = SYS_ID_AA64PFR0_EL1, .sys_reg = SYS_ID_AA64PFR0_EL1,
.sign = FTR_UNSIGNED, .sign = FTR_UNSIGNED,
.field_pos = ID_AA64PFR0_EL0_SHIFT, .field_pos = ID_AA64PFR0_EL0_SHIFT,
@ -2378,7 +2460,7 @@ static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = {
{}, {},
}; };
static void __init cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap) static void cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
{ {
switch (cap->hwcap_type) { switch (cap->hwcap_type) {
case CAP_HWCAP: case CAP_HWCAP:
@ -2423,7 +2505,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
return rc; return rc;
} }
static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps) static void setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
{ {
/* We support emulation of accesses to CPU ID feature registers */ /* We support emulation of accesses to CPU ID feature registers */
cpu_set_named_feature(CPUID); cpu_set_named_feature(CPUID);
@ -2598,7 +2680,7 @@ static void check_early_cpu_features(void)
} }
static void static void
verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps) __verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
{ {
for (; caps->matches; caps++) for (; caps->matches; caps++)
@ -2609,6 +2691,14 @@ verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
} }
} }
static void verify_local_elf_hwcaps(void)
{
__verify_local_elf_hwcaps(arm64_elf_hwcaps);
if (id_aa64pfr0_32bit_el0(read_cpuid(ID_AA64PFR0_EL1)))
__verify_local_elf_hwcaps(compat_elf_hwcaps);
}
static void verify_sve_features(void) static void verify_sve_features(void)
{ {
u64 safe_zcr = read_sanitised_ftr_reg(SYS_ZCR_EL1); u64 safe_zcr = read_sanitised_ftr_reg(SYS_ZCR_EL1);
@ -2673,11 +2763,7 @@ static void verify_local_cpu_capabilities(void)
* on all secondary CPUs. * on all secondary CPUs.
*/ */
verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU); verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);
verify_local_elf_hwcaps();
verify_local_elf_hwcaps(arm64_elf_hwcaps);
if (system_supports_32bit_el0())
verify_local_elf_hwcaps(compat_elf_hwcaps);
if (system_supports_sve()) if (system_supports_sve())
verify_sve_features(); verify_sve_features();
@ -2812,6 +2898,34 @@ void __init setup_cpu_features(void)
ARCH_DMA_MINALIGN); ARCH_DMA_MINALIGN);
} }
static int enable_mismatched_32bit_el0(unsigned int cpu)
{
struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu);
bool cpu_32bit = id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0);
if (cpu_32bit) {
cpumask_set_cpu(cpu, cpu_32bit_el0_mask);
static_branch_enable_cpuslocked(&arm64_mismatched_32bit_el0);
setup_elf_hwcaps(compat_elf_hwcaps);
}
return 0;
}
static int __init init_32bit_el0_mask(void)
{
if (!allow_mismatched_32bit_el0)
return 0;
if (!zalloc_cpumask_var(&cpu_32bit_el0_mask, GFP_KERNEL))
return -ENOMEM;
return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
"arm64/mismatched_32bit_el0:online",
enable_mismatched_32bit_el0, NULL);
}
subsys_initcall_sync(init_32bit_el0_mask);
static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap) static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
{ {
cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
@ -2905,8 +3019,8 @@ static int emulate_mrs(struct pt_regs *regs, u32 insn)
} }
static struct undef_hook mrs_hook = { static struct undef_hook mrs_hook = {
.instr_mask = 0xfff00000, .instr_mask = 0xffff0000,
.instr_val = 0xd5300000, .instr_val = 0xd5380000,
.pstate_mask = PSR_AA32_MODE_MASK, .pstate_mask = PSR_AA32_MODE_MASK,
.pstate_val = PSR_MODE_EL0t, .pstate_val = PSR_MODE_EL0t,
.fn = emulate_mrs, .fn = emulate_mrs,
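
Alongside the GMID_EL1 sanity check for MTE, most of this hunk is the plumbing for allow_mismatched_32bit_el0: a cpumask of 32-bit-capable CPUs populated from a CPU hotplug callback, and system_32bit_el0_cpumask() to expose it. A hypothetical caller (not part of this diff) would consult that mask rather than assuming every online CPU can run compat tasks:

/* Hypothetical helper: true if @cpu can execute 32-bit EL0 tasks.  When
 * mismatched 32-bit support is not in use, system_32bit_el0_cpumask()
 * degrades to cpu_none_mask or cpu_possible_mask as appropriate. */
static bool cpu_can_run_compat(unsigned int cpu)
{
	return cpumask_test_cpu(cpu, system_32bit_el0_cpumask());
}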

@ -246,7 +246,7 @@ static struct kobj_type cpuregs_kobj_type = {
struct cpuinfo_arm64 *info = kobj_to_cpuinfo(kobj); \ struct cpuinfo_arm64 *info = kobj_to_cpuinfo(kobj); \
\ \
if (info->reg_midr) \ if (info->reg_midr) \
return sprintf(buf, "0x%016x\n", info->reg_##_field); \ return sprintf(buf, "0x%016llx\n", info->reg_##_field); \
else \ else \
return 0; \ return 0; \
} \ } \
@ -344,6 +344,32 @@ static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu); pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu);
} }
static void __cpuinfo_store_cpu_32bit(struct cpuinfo_32bit *info)
{
info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1);
info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
info->reg_id_isar6 = read_cpuid(ID_ISAR6_EL1);
info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1);
info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1);
info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1);
info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
}
static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
{ {
info->reg_cntfrq = arch_timer_get_cntfrq(); info->reg_cntfrq = arch_timer_get_cntfrq();
@ -371,31 +397,11 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1); info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1);
info->reg_id_aa64zfr0 = read_cpuid(ID_AA64ZFR0_EL1); info->reg_id_aa64zfr0 = read_cpuid(ID_AA64ZFR0_EL1);
/* Update the 32bit ID registers only if AArch32 is implemented */ if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) { info->reg_gmid = read_cpuid(GMID_EL1);
info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1);
info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
info->reg_id_isar6 = read_cpuid(ID_ISAR6_EL1);
info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1);
info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1);
info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1);
info->reg_mvfr0 = read_cpuid(MVFR0_EL1); if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
info->reg_mvfr1 = read_cpuid(MVFR1_EL1); __cpuinfo_store_cpu_32bit(&info->aarch32);
info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
}
if (IS_ENABLED(CONFIG_ARM64_SVE) && if (IS_ENABLED(CONFIG_ARM64_SVE) &&
id_aa64pfr0_sve(info->reg_id_aa64pfr0)) id_aa64pfr0_sve(info->reg_id_aa64pfr0))

@ -28,7 +28,8 @@ SYM_CODE_START(efi_enter_kernel)
* stale icache entries from before relocation. * stale icache entries from before relocation.
*/ */
ldr w1, =kernel_size ldr w1, =kernel_size
bl __clean_dcache_area_poc add x1, x0, x1
bl dcache_clean_poc
ic ialluis ic ialluis
/* /*
@ -36,8 +37,8 @@ SYM_CODE_START(efi_enter_kernel)
* so that we can safely disable the MMU and caches. * so that we can safely disable the MMU and caches.
*/ */
adr x0, 0f adr x0, 0f
ldr w1, 3f adr x1, 3f
bl __clean_dcache_area_poc bl dcache_clean_poc
0: 0:
/* Turn off Dcache and MMU */ /* Turn off Dcache and MMU */
mrs x0, CurrentEL mrs x0, CurrentEL
@ -64,5 +65,5 @@ SYM_CODE_START(efi_enter_kernel)
mov x2, xzr mov x2, xzr
mov x3, xzr mov x3, xzr
br x19 br x19
3:
SYM_CODE_END(efi_enter_kernel) SYM_CODE_END(efi_enter_kernel)
3: .long . - 0b

@ -6,7 +6,11 @@
*/ */
#include <linux/context_tracking.h> #include <linux/context_tracking.h>
#include <linux/linkage.h>
#include <linux/lockdep.h>
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/sched.h>
#include <linux/sched/debug.h>
#include <linux/thread_info.h> #include <linux/thread_info.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
@ -15,7 +19,11 @@
#include <asm/exception.h> #include <asm/exception.h>
#include <asm/kprobes.h> #include <asm/kprobes.h>
#include <asm/mmu.h> #include <asm/mmu.h>
#include <asm/processor.h>
#include <asm/sdei.h>
#include <asm/stacktrace.h>
#include <asm/sysreg.h> #include <asm/sysreg.h>
#include <asm/system_misc.h>
/* /*
* This is intended to match the logic in irqentry_enter(), handling the kernel * This is intended to match the logic in irqentry_enter(), handling the kernel
@ -67,7 +75,7 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
} }
} }
void noinstr arm64_enter_nmi(struct pt_regs *regs) static void noinstr arm64_enter_nmi(struct pt_regs *regs)
{ {
regs->lockdep_hardirqs = lockdep_hardirqs_enabled(); regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
@ -80,7 +88,7 @@ void noinstr arm64_enter_nmi(struct pt_regs *regs)
ftrace_nmi_enter(); ftrace_nmi_enter();
} }
void noinstr arm64_exit_nmi(struct pt_regs *regs) static void noinstr arm64_exit_nmi(struct pt_regs *regs)
{ {
bool restore = regs->lockdep_hardirqs; bool restore = regs->lockdep_hardirqs;
@ -97,7 +105,7 @@ void noinstr arm64_exit_nmi(struct pt_regs *regs)
__nmi_exit(); __nmi_exit();
} }
asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs) static void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
{ {
if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs)) if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
arm64_enter_nmi(regs); arm64_enter_nmi(regs);
@ -105,7 +113,7 @@ asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
enter_from_kernel_mode(regs); enter_from_kernel_mode(regs);
} }
asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs) static void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs)
{ {
if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs)) if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
arm64_exit_nmi(regs); arm64_exit_nmi(regs);
@ -113,6 +121,65 @@ asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs)
exit_to_kernel_mode(regs); exit_to_kernel_mode(regs);
} }
static void __sched arm64_preempt_schedule_irq(void)
{
lockdep_assert_irqs_disabled();
/*
* DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
* priority masking is used the GIC irqchip driver will clear DAIF.IF
* using gic_arch_enable_irqs() for normal IRQs. If anything is set in
* DAIF we must have handled an NMI, so skip preemption.
*/
if (system_uses_irq_prio_masking() && read_sysreg(daif))
return;
/*
* Preempting a task from an IRQ means we leave copies of PSTATE
* on the stack. cpufeature's enable calls may modify PSTATE, but
* resuming one of these preempted tasks would undo those changes.
*
* Only allow a task to be preempted once cpufeatures have been
* enabled.
*/
if (system_capabilities_finalized())
preempt_schedule_irq();
}
static void do_interrupt_handler(struct pt_regs *regs,
void (*handler)(struct pt_regs *))
{
if (on_thread_stack())
call_on_irq_stack(regs, handler);
else
handler(regs);
}
extern void (*handle_arch_irq)(struct pt_regs *);
extern void (*handle_arch_fiq)(struct pt_regs *);
static void noinstr __panic_unhandled(struct pt_regs *regs, const char *vector,
unsigned int esr)
{
arm64_enter_nmi(regs);
console_verbose();
pr_crit("Unhandled %s exception on CPU%d, ESR 0x%08x -- %s\n",
vector, smp_processor_id(), esr,
esr_get_class_string(esr));
__show_regs(regs);
panic("Unhandled exception");
}
#define UNHANDLED(el, regsize, vector) \
asmlinkage void noinstr el##_##regsize##_##vector##_handler(struct pt_regs *regs) \
{ \
const char *desc = #regsize "-bit " #el " " #vector; \
__panic_unhandled(regs, desc, read_sysreg(esr_el1)); \
}
#ifdef CONFIG_ARM64_ERRATUM_1463225 #ifdef CONFIG_ARM64_ERRATUM_1463225
static DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa); static DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
@ -162,6 +229,11 @@ static bool cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
} }
#endif /* CONFIG_ARM64_ERRATUM_1463225 */ #endif /* CONFIG_ARM64_ERRATUM_1463225 */
UNHANDLED(el1t, 64, sync)
UNHANDLED(el1t, 64, irq)
UNHANDLED(el1t, 64, fiq)
UNHANDLED(el1t, 64, error)
static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr) static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
{ {
unsigned long far = read_sysreg(far_el1); unsigned long far = read_sysreg(far_el1);
@ -193,15 +265,6 @@ static void noinstr el1_undef(struct pt_regs *regs)
exit_to_kernel_mode(regs); exit_to_kernel_mode(regs);
} }
static void noinstr el1_inv(struct pt_regs *regs, unsigned long esr)
{
enter_from_kernel_mode(regs);
local_daif_inherit(regs);
bad_mode(regs, 0, esr);
local_daif_mask();
exit_to_kernel_mode(regs);
}
static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs) static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs)
{ {
regs->lockdep_hardirqs = lockdep_hardirqs_enabled(); regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
@ -245,7 +308,7 @@ static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
exit_to_kernel_mode(regs); exit_to_kernel_mode(regs);
} }
asmlinkage void noinstr el1_sync_handler(struct pt_regs *regs) asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
{ {
unsigned long esr = read_sysreg(esr_el1); unsigned long esr = read_sysreg(esr_el1);
@ -275,10 +338,50 @@ asmlinkage void noinstr el1_sync_handler(struct pt_regs *regs)
el1_fpac(regs, esr); el1_fpac(regs, esr);
break; break;
default: default:
el1_inv(regs, esr); __panic_unhandled(regs, "64-bit el1h sync", esr);
} }
} }
static void noinstr el1_interrupt(struct pt_regs *regs,
void (*handler)(struct pt_regs *))
{
write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
enter_el1_irq_or_nmi(regs);
do_interrupt_handler(regs, handler);
/*
* Note: thread_info::preempt_count includes both thread_info::count
* and thread_info::need_resched, and is not equivalent to
* preempt_count().
*/
if (IS_ENABLED(CONFIG_PREEMPTION) &&
READ_ONCE(current_thread_info()->preempt_count) == 0)
arm64_preempt_schedule_irq();
exit_el1_irq_or_nmi(regs);
}
asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs)
{
el1_interrupt(regs, handle_arch_irq);
}
asmlinkage void noinstr el1h_64_fiq_handler(struct pt_regs *regs)
{
el1_interrupt(regs, handle_arch_fiq);
}
asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
{
unsigned long esr = read_sysreg(esr_el1);
local_daif_restore(DAIF_ERRCTX);
arm64_enter_nmi(regs);
do_serror(regs, esr);
arm64_exit_nmi(regs);
}
asmlinkage void noinstr enter_from_user_mode(void) asmlinkage void noinstr enter_from_user_mode(void)
{ {
lockdep_hardirqs_off(CALLER_ADDR0); lockdep_hardirqs_off(CALLER_ADDR0);
@ -398,7 +501,7 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
enter_from_user_mode(); enter_from_user_mode();
do_debug_exception(far, esr, regs); do_debug_exception(far, esr, regs);
local_daif_restore(DAIF_PROCCTX_NOIRQ); local_daif_restore(DAIF_PROCCTX);
} }
static void noinstr el0_svc(struct pt_regs *regs) static void noinstr el0_svc(struct pt_regs *regs)
@ -415,7 +518,7 @@ static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
do_ptrauth_fault(regs, esr); do_ptrauth_fault(regs, esr);
} }
asmlinkage void noinstr el0_sync_handler(struct pt_regs *regs) asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
{ {
unsigned long esr = read_sysreg(esr_el1); unsigned long esr = read_sysreg(esr_el1);
@ -468,6 +571,56 @@ asmlinkage void noinstr el0_sync_handler(struct pt_regs *regs)
} }
} }
static void noinstr el0_interrupt(struct pt_regs *regs,
void (*handler)(struct pt_regs *))
{
enter_from_user_mode();
write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
if (regs->pc & BIT(55))
arm64_apply_bp_hardening();
do_interrupt_handler(regs, handler);
}
static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
{
el0_interrupt(regs, handle_arch_irq);
}
asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs)
{
__el0_irq_handler_common(regs);
}
static void noinstr __el0_fiq_handler_common(struct pt_regs *regs)
{
el0_interrupt(regs, handle_arch_fiq);
}
asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
{
__el0_fiq_handler_common(regs);
}
static void __el0_error_handler_common(struct pt_regs *regs)
{
unsigned long esr = read_sysreg(esr_el1);
enter_from_user_mode();
local_daif_restore(DAIF_ERRCTX);
arm64_enter_nmi(regs);
do_serror(regs, esr);
arm64_exit_nmi(regs);
local_daif_restore(DAIF_PROCCTX);
}
asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
{
__el0_error_handler_common(regs);
}
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr) static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
{ {
@ -483,7 +636,7 @@ static void noinstr el0_svc_compat(struct pt_regs *regs)
do_el0_svc_compat(regs); do_el0_svc_compat(regs);
} }
asmlinkage void noinstr el0_sync_compat_handler(struct pt_regs *regs) asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)
{ {
unsigned long esr = read_sysreg(esr_el1); unsigned long esr = read_sysreg(esr_el1);
@ -526,4 +679,71 @@ asmlinkage void noinstr el0_sync_compat_handler(struct pt_regs *regs)
el0_inv(regs, esr); el0_inv(regs, esr);
} }
} }
asmlinkage void noinstr el0t_32_irq_handler(struct pt_regs *regs)
{
__el0_irq_handler_common(regs);
}
asmlinkage void noinstr el0t_32_fiq_handler(struct pt_regs *regs)
{
__el0_fiq_handler_common(regs);
}
asmlinkage void noinstr el0t_32_error_handler(struct pt_regs *regs)
{
__el0_error_handler_common(regs);
}
#else /* CONFIG_COMPAT */
UNHANDLED(el0t, 32, sync)
UNHANDLED(el0t, 32, irq)
UNHANDLED(el0t, 32, fiq)
UNHANDLED(el0t, 32, error)
#endif /* CONFIG_COMPAT */ #endif /* CONFIG_COMPAT */
#ifdef CONFIG_VMAP_STACK
asmlinkage void noinstr handle_bad_stack(struct pt_regs *regs)
{
unsigned int esr = read_sysreg(esr_el1);
unsigned long far = read_sysreg(far_el1);
arm64_enter_nmi(regs);
panic_bad_stack(regs, esr, far);
}
#endif /* CONFIG_VMAP_STACK */
#ifdef CONFIG_ARM_SDE_INTERFACE
asmlinkage noinstr unsigned long
__sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
{
unsigned long ret;
/*
* We didn't take an exception to get here, so the HW hasn't
* set/cleared bits in PSTATE that we may rely on.
*
* The original SDEI spec (ARM DEN 0054A) can be read ambiguously as to
* whether PSTATE bits are inherited unchanged or generated from
* scratch, and the TF-A implementation always clears PAN and always
* clears UAO. There are no other known implementations.
*
* Subsequent revisions (ARM DEN 0054B) follow the usual rules for how
* PSTATE is modified upon architectural exceptions, and so PAN is
* either inherited or set per SCTLR_ELx.SPAN, and UAO is always
* cleared.
*
* We must explicitly reset PAN to the expected state, including
* clearing it when the host isn't using it, in case a VM had it set.
*/
if (system_uses_hw_pan())
set_pstate_pan(1);
else if (cpu_has_pan())
set_pstate_pan(0);
arm64_enter_nmi(regs);
ret = do_sdei_event(regs, arg);
arm64_exit_nmi(regs);
return ret;
}
#endif /* CONFIG_ARM_SDE_INTERFACE */
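
The asm "invalid vector" stubs are largely replaced by the UNHANDLED() macro above, which pastes an exception-level/register-width/vector triple into a panicking C handler. Spelled out for illustration, UNHANDLED(el1t, 64, sync) expands to roughly:

/* Rough expansion only; the real handlers are generated by the macro. */
asmlinkage void noinstr el1t_64_sync_handler(struct pt_regs *regs)
{
	const char *desc = "64-bit el1t sync";

	__panic_unhandled(regs, desc, read_sysreg(esr_el1));
}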

@ -63,16 +63,24 @@ SYM_FUNC_END(sve_set_vq)
* and the rest zeroed. All the other SVE registers will be zeroed. * and the rest zeroed. All the other SVE registers will be zeroed.
*/ */
SYM_FUNC_START(sve_load_from_fpsimd_state) SYM_FUNC_START(sve_load_from_fpsimd_state)
sve_load_vq x1, x2, x3 sve_load_vq x1, x2, x3
fpsimd_restore x0, 8 fpsimd_restore x0, 8
_for n, 0, 15, _sve_pfalse \n sve_flush_p_ffr
_sve_wrffr 0 ret
ret
SYM_FUNC_END(sve_load_from_fpsimd_state) SYM_FUNC_END(sve_load_from_fpsimd_state)
/* Zero all SVE registers but the first 128-bits of each vector */ /*
* Zero all SVE registers but the first 128-bits of each vector
*
* VQ must already be configured by caller, any further updates of VQ
* will need to ensure that the register state remains valid.
*
* x0 = VQ - 1
*/
SYM_FUNC_START(sve_flush_live) SYM_FUNC_START(sve_flush_live)
sve_flush cbz x0, 1f // A VQ-1 of 0 is 128 bits so no extra Z state
sve_flush_z
1: sve_flush_p_ffr
ret ret
SYM_FUNC_END(sve_flush_live) SYM_FUNC_END(sve_flush_live)
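
sve_flush_live now takes VQ - 1 in x0, where VQ is the vector length counted in 128-bit quadwords, and skips sve_flush_z entirely when it is zero: on a 128-bit SVE implementation, Z0-Z31 hold no state beyond the V registers they alias, so only the predicates and FFR need flushing. The arithmetic, as a hypothetical helper (the kernel's own conversion helpers work in bytes rather than bits):

/* Illustration only: a vector of vl_bits bits is vl_bits / 128 quadwords,
 * so a 128-bit implementation has VQ == 1 and passes VQ - 1 == 0 in x0,
 * taking the cbz fast path above. */
static inline unsigned int sve_vq_from_bits(unsigned int vl_bits)
{
	return vl_bits / 128;
}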

@ -33,12 +33,6 @@
* Context tracking and irqflag tracing need to instrument transitions between * Context tracking and irqflag tracing need to instrument transitions between
* user and kernel mode. * user and kernel mode.
*/ */
.macro user_exit_irqoff
#if defined(CONFIG_CONTEXT_TRACKING) || defined(CONFIG_TRACE_IRQFLAGS)
bl enter_from_user_mode
#endif
.endm
.macro user_enter_irqoff .macro user_enter_irqoff
#if defined(CONFIG_CONTEXT_TRACKING) || defined(CONFIG_TRACE_IRQFLAGS) #if defined(CONFIG_CONTEXT_TRACKING) || defined(CONFIG_TRACE_IRQFLAGS)
bl exit_to_user_mode bl exit_to_user_mode
@ -51,16 +45,7 @@
.endr .endr
.endm .endm
/* .macro kernel_ventry, el:req, ht:req, regsize:req, label:req
* Bad Abort numbers
*-----------------
*/
#define BAD_SYNC 0
#define BAD_IRQ 1
#define BAD_FIQ 2
#define BAD_ERROR 3
.macro kernel_ventry, el, label, regsize = 64
.align 7 .align 7
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
.if \el == 0 .if \el == 0
@ -87,7 +72,7 @@ alternative_else_nop_endif
tbnz x0, #THREAD_SHIFT, 0f tbnz x0, #THREAD_SHIFT, 0f
sub x0, sp, x0 // x0'' = sp' - x0' = (sp + x0) - sp = x0 sub x0, sp, x0 // x0'' = sp' - x0' = (sp + x0) - sp = x0
sub sp, sp, x0 // sp'' = sp' - x0 = (sp + x0) - x0 = sp sub sp, sp, x0 // sp'' = sp' - x0 = (sp + x0) - x0 = sp
b el\()\el\()_\label b el\el\ht\()_\regsize\()_\label
0: 0:
/* /*
@ -119,7 +104,7 @@ alternative_else_nop_endif
sub sp, sp, x0 sub sp, sp, x0
mrs x0, tpidrro_el0 mrs x0, tpidrro_el0
#endif #endif
b el\()\el\()_\label b el\el\ht\()_\regsize\()_\label
.endm .endm
.macro tramp_alias, dst, sym .macro tramp_alias, dst, sym
@ -275,7 +260,7 @@ alternative_else_nop_endif
mte_set_kernel_gcr x22, x23 mte_set_kernel_gcr x22, x23
scs_load tsk, x20 scs_load tsk
.else .else
add x21, sp, #PT_REGS_SIZE add x21, sp, #PT_REGS_SIZE
get_current_task tsk get_current_task tsk
@ -285,7 +270,7 @@ alternative_else_nop_endif
stp lr, x21, [sp, #S_LR] stp lr, x21, [sp, #S_LR]
/* /*
* For exceptions from EL0, create a terminal frame record. * For exceptions from EL0, create a final frame record.
* For exceptions from EL1, create a synthetic frame record so the * For exceptions from EL1, create a synthetic frame record so the
* interrupted code shows up in the backtrace. * interrupted code shows up in the backtrace.
*/ */
@ -375,7 +360,7 @@ alternative_if ARM64_WORKAROUND_845719
alternative_else_nop_endif alternative_else_nop_endif
#endif #endif
3: 3:
scs_save tsk, x0 scs_save tsk
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
alternative_if ARM64_HAS_ADDRESS_AUTH alternative_if ARM64_HAS_ADDRESS_AUTH
@ -486,63 +471,12 @@ SYM_CODE_START_LOCAL(__swpan_exit_el0)
SYM_CODE_END(__swpan_exit_el0) SYM_CODE_END(__swpan_exit_el0)
#endif #endif
.macro irq_stack_entry
mov x19, sp // preserve the original sp
#ifdef CONFIG_SHADOW_CALL_STACK
mov x24, scs_sp // preserve the original shadow stack
#endif
/*
* Compare sp with the base of the task stack.
* If the top ~(THREAD_SIZE - 1) bits match, we are on a task stack,
* and should switch to the irq stack.
*/
ldr x25, [tsk, TSK_STACK]
eor x25, x25, x19
and x25, x25, #~(THREAD_SIZE - 1)
cbnz x25, 9998f
ldr_this_cpu x25, irq_stack_ptr, x26
mov x26, #IRQ_STACK_SIZE
add x26, x25, x26
/* switch to the irq stack */
mov sp, x26
#ifdef CONFIG_SHADOW_CALL_STACK
/* also switch to the irq shadow stack */
ldr_this_cpu scs_sp, irq_shadow_call_stack_ptr, x26
#endif
9998:
.endm
/*
* The callee-saved regs (x19-x29) should be preserved between
* irq_stack_entry and irq_stack_exit, but note that kernel_entry
* uses x20-x23 to store data for later use.
*/
.macro irq_stack_exit
mov sp, x19
#ifdef CONFIG_SHADOW_CALL_STACK
mov scs_sp, x24
#endif
.endm
/* GPRs used by entry code */ /* GPRs used by entry code */
tsk .req x28 // current thread_info tsk .req x28 // current thread_info
/* /*
* Interrupt handling. * Interrupt handling.
*/ */
.macro irq_handler, handler:req
ldr_l x1, \handler
mov x0, sp
irq_stack_entry
blr x1
irq_stack_exit
.endm
.macro gic_prio_kentry_setup, tmp:req .macro gic_prio_kentry_setup, tmp:req
#ifdef CONFIG_ARM64_PSEUDO_NMI #ifdef CONFIG_ARM64_PSEUDO_NMI
alternative_if ARM64_HAS_IRQ_PRIO_MASKING alternative_if ARM64_HAS_IRQ_PRIO_MASKING
@ -552,45 +486,6 @@ tsk .req x28 // current thread_info
#endif #endif
.endm .endm
.macro el1_interrupt_handler, handler:req
enable_da
mov x0, sp
bl enter_el1_irq_or_nmi
irq_handler \handler
#ifdef CONFIG_PREEMPTION
ldr x24, [tsk, #TSK_TI_PREEMPT] // get preempt count
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
/*
* DA were cleared at start of handling, and IF are cleared by
* the GIC irqchip driver using gic_arch_enable_irqs() for
* normal IRQs. If anything is set, it means we come back from
* an NMI instead of a normal IRQ, so skip preemption
*/
mrs x0, daif
orr x24, x24, x0
alternative_else_nop_endif
cbnz x24, 1f // preempt count != 0 || NMI return path
bl arm64_preempt_schedule_irq // irq en/disable is done inside
1:
#endif
mov x0, sp
bl exit_el1_irq_or_nmi
.endm
.macro el0_interrupt_handler, handler:req
user_exit_irqoff
enable_da
tbz x22, #55, 1f
bl do_el0_irq_bp_hardening
1:
irq_handler \handler
.endm
.text .text
/* /*
@ -600,32 +495,25 @@ alternative_else_nop_endif
.align 11 .align 11
SYM_CODE_START(vectors) SYM_CODE_START(vectors)
kernel_ventry 1, sync_invalid // Synchronous EL1t kernel_ventry 1, t, 64, sync // Synchronous EL1t
kernel_ventry 1, irq_invalid // IRQ EL1t kernel_ventry 1, t, 64, irq // IRQ EL1t
kernel_ventry 1, fiq_invalid // FIQ EL1t kernel_ventry 1, t, 64, fiq // FIQ EL1t
kernel_ventry 1, error_invalid // Error EL1t kernel_ventry 1, t, 64, error // Error EL1t
kernel_ventry 1, sync // Synchronous EL1h kernel_ventry 1, h, 64, sync // Synchronous EL1h
kernel_ventry 1, irq // IRQ EL1h kernel_ventry 1, h, 64, irq // IRQ EL1h
kernel_ventry 1, fiq // FIQ EL1h kernel_ventry 1, h, 64, fiq // FIQ EL1h
kernel_ventry 1, error // Error EL1h kernel_ventry 1, h, 64, error // Error EL1h
kernel_ventry 0, sync // Synchronous 64-bit EL0 kernel_ventry 0, t, 64, sync // Synchronous 64-bit EL0
kernel_ventry 0, irq // IRQ 64-bit EL0 kernel_ventry 0, t, 64, irq // IRQ 64-bit EL0
kernel_ventry 0, fiq // FIQ 64-bit EL0 kernel_ventry 0, t, 64, fiq // FIQ 64-bit EL0
kernel_ventry 0, error // Error 64-bit EL0 kernel_ventry 0, t, 64, error // Error 64-bit EL0
#ifdef CONFIG_COMPAT kernel_ventry 0, t, 32, sync // Synchronous 32-bit EL0
kernel_ventry 0, sync_compat, 32 // Synchronous 32-bit EL0 kernel_ventry 0, t, 32, irq // IRQ 32-bit EL0
kernel_ventry 0, irq_compat, 32 // IRQ 32-bit EL0 kernel_ventry 0, t, 32, fiq // FIQ 32-bit EL0
kernel_ventry 0, fiq_compat, 32 // FIQ 32-bit EL0 kernel_ventry 0, t, 32, error // Error 32-bit EL0
kernel_ventry 0, error_compat, 32 // Error 32-bit EL0
#else
kernel_ventry 0, sync_invalid, 32 // Synchronous 32-bit EL0
kernel_ventry 0, irq_invalid, 32 // IRQ 32-bit EL0
kernel_ventry 0, fiq_invalid, 32 // FIQ 32-bit EL0
kernel_ventry 0, error_invalid, 32 // Error 32-bit EL0
#endif
SYM_CODE_END(vectors) SYM_CODE_END(vectors)
#ifdef CONFIG_VMAP_STACK #ifdef CONFIG_VMAP_STACK
@ -656,147 +544,46 @@ __bad_stack:
ASM_BUG() ASM_BUG()
#endif /* CONFIG_VMAP_STACK */ #endif /* CONFIG_VMAP_STACK */
/*
* Invalid mode handlers .macro entry_handler el:req, ht:req, regsize:req, label:req
*/ SYM_CODE_START_LOCAL(el\el\ht\()_\regsize\()_\label)
.macro inv_entry, el, reason, regsize = 64
kernel_entry \el, \regsize kernel_entry \el, \regsize
mov x0, sp mov x0, sp
mov x1, #\reason bl el\el\ht\()_\regsize\()_\label\()_handler
mrs x2, esr_el1 .if \el == 0
bl bad_mode b ret_to_user
ASM_BUG() .else
b ret_to_kernel
.endif
SYM_CODE_END(el\el\ht\()_\regsize\()_\label)
.endm .endm
SYM_CODE_START_LOCAL(el0_sync_invalid)
inv_entry 0, BAD_SYNC
SYM_CODE_END(el0_sync_invalid)
SYM_CODE_START_LOCAL(el0_irq_invalid)
inv_entry 0, BAD_IRQ
SYM_CODE_END(el0_irq_invalid)
SYM_CODE_START_LOCAL(el0_fiq_invalid)
inv_entry 0, BAD_FIQ
SYM_CODE_END(el0_fiq_invalid)
SYM_CODE_START_LOCAL(el0_error_invalid)
inv_entry 0, BAD_ERROR
SYM_CODE_END(el0_error_invalid)
SYM_CODE_START_LOCAL(el1_sync_invalid)
inv_entry 1, BAD_SYNC
SYM_CODE_END(el1_sync_invalid)
SYM_CODE_START_LOCAL(el1_irq_invalid)
inv_entry 1, BAD_IRQ
SYM_CODE_END(el1_irq_invalid)
SYM_CODE_START_LOCAL(el1_fiq_invalid)
inv_entry 1, BAD_FIQ
SYM_CODE_END(el1_fiq_invalid)
SYM_CODE_START_LOCAL(el1_error_invalid)
inv_entry 1, BAD_ERROR
SYM_CODE_END(el1_error_invalid)
/* /*
* EL1 mode handlers. * Early exception handlers
*/ */
.align 6 entry_handler 1, t, 64, sync
SYM_CODE_START_LOCAL_NOALIGN(el1_sync) entry_handler 1, t, 64, irq
kernel_entry 1 entry_handler 1, t, 64, fiq
mov x0, sp entry_handler 1, t, 64, error
bl el1_sync_handler
entry_handler 1, h, 64, sync
entry_handler 1, h, 64, irq
entry_handler 1, h, 64, fiq
entry_handler 1, h, 64, error
entry_handler 0, t, 64, sync
entry_handler 0, t, 64, irq
entry_handler 0, t, 64, fiq
entry_handler 0, t, 64, error
entry_handler 0, t, 32, sync
entry_handler 0, t, 32, irq
entry_handler 0, t, 32, fiq
entry_handler 0, t, 32, error
SYM_CODE_START_LOCAL(ret_to_kernel)
kernel_exit 1 kernel_exit 1
SYM_CODE_END(el1_sync) SYM_CODE_END(ret_to_kernel)
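
To make the new naming scheme concrete: a minimal sketch, assuming the el<el><ht>_<regsize>_<label> convention introduced by the entry_handler macro above, of the C handler an invocation such as "entry_handler 0, t, 64, sync" hands off to. The dispatch body is illustrative only, not a quote of entry-common.c.

	/* Illustrative only: dispatch shape assumed, not copied from the tree */
	asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
	{
		unsigned long esr = read_sysreg(esr_el1);

		switch (ESR_ELx_EC(esr)) {
		case ESR_ELx_EC_SVC64:
			/* system call from 64-bit EL0 */
			break;
		default:
			/* other synchronous exceptions from 64-bit EL0 */
			break;
		}
	}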
.align 6
SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
kernel_entry 1
el1_interrupt_handler handle_arch_irq
kernel_exit 1
SYM_CODE_END(el1_irq)
SYM_CODE_START_LOCAL_NOALIGN(el1_fiq)
kernel_entry 1
el1_interrupt_handler handle_arch_fiq
kernel_exit 1
SYM_CODE_END(el1_fiq)
/*
* EL0 mode handlers.
*/
.align 6
SYM_CODE_START_LOCAL_NOALIGN(el0_sync)
kernel_entry 0
mov x0, sp
bl el0_sync_handler
b ret_to_user
SYM_CODE_END(el0_sync)
#ifdef CONFIG_COMPAT
.align 6
SYM_CODE_START_LOCAL_NOALIGN(el0_sync_compat)
kernel_entry 0, 32
mov x0, sp
bl el0_sync_compat_handler
b ret_to_user
SYM_CODE_END(el0_sync_compat)
.align 6
SYM_CODE_START_LOCAL_NOALIGN(el0_irq_compat)
kernel_entry 0, 32
b el0_irq_naked
SYM_CODE_END(el0_irq_compat)
SYM_CODE_START_LOCAL_NOALIGN(el0_fiq_compat)
kernel_entry 0, 32
b el0_fiq_naked
SYM_CODE_END(el0_fiq_compat)
SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
kernel_entry 0, 32
b el0_error_naked
SYM_CODE_END(el0_error_compat)
#endif
.align 6
SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
kernel_entry 0
el0_irq_naked:
el0_interrupt_handler handle_arch_irq
b ret_to_user
SYM_CODE_END(el0_irq)
SYM_CODE_START_LOCAL_NOALIGN(el0_fiq)
kernel_entry 0
el0_fiq_naked:
el0_interrupt_handler handle_arch_fiq
b ret_to_user
SYM_CODE_END(el0_fiq)
SYM_CODE_START_LOCAL(el1_error)
kernel_entry 1
mrs x1, esr_el1
enable_dbg
mov x0, sp
bl do_serror
kernel_exit 1
SYM_CODE_END(el1_error)
SYM_CODE_START_LOCAL(el0_error)
kernel_entry 0
el0_error_naked:
mrs x25, esr_el1
user_exit_irqoff
enable_dbg
mov x0, sp
mov x1, x25
bl do_serror
enable_da
b ret_to_user
SYM_CODE_END(el0_error)
/* /*
* "slow" syscall return path. * "slow" syscall return path.
@ -979,8 +766,8 @@ SYM_FUNC_START(cpu_switch_to)
mov sp, x9 mov sp, x9
msr sp_el0, x1 msr sp_el0, x1
ptrauth_keys_install_kernel x1, x8, x9, x10 ptrauth_keys_install_kernel x1, x8, x9, x10
scs_save x0, x8 scs_save x0
scs_load x1, x8 scs_load x1
ret ret
SYM_FUNC_END(cpu_switch_to) SYM_FUNC_END(cpu_switch_to)
NOKPROBE(cpu_switch_to) NOKPROBE(cpu_switch_to)
@ -998,6 +785,42 @@ SYM_CODE_START(ret_from_fork)
SYM_CODE_END(ret_from_fork) SYM_CODE_END(ret_from_fork)
NOKPROBE(ret_from_fork) NOKPROBE(ret_from_fork)
/*
* void call_on_irq_stack(struct pt_regs *regs,
* void (*func)(struct pt_regs *));
*
* Calls func(regs) using this CPU's irq stack and shadow irq stack.
*/
SYM_FUNC_START(call_on_irq_stack)
#ifdef CONFIG_SHADOW_CALL_STACK
stp scs_sp, xzr, [sp, #-16]!
ldr_this_cpu scs_sp, irq_shadow_call_stack_ptr, x17
#endif
/* Create a frame record to save our LR and SP (implicit in FP) */
stp x29, x30, [sp, #-16]!
mov x29, sp
ldr_this_cpu x16, irq_stack_ptr, x17
mov x15, #IRQ_STACK_SIZE
add x16, x16, x15
/* Move to the new stack and call the function there */
mov sp, x16
blr x1
/*
* Restore the SP from the FP, and restore the FP and LR from the frame
* record.
*/
mov sp, x29
ldp x29, x30, [sp], #16
#ifdef CONFIG_SHADOW_CALL_STACK
ldp scs_sp, xzr, [sp], #16
#endif
ret
SYM_FUNC_END(call_on_irq_stack)
NOKPROBE(call_on_irq_stack)
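
A minimal sketch of the intended caller, assuming the C entry code decides whether a stack switch is needed; the wrapper name and the on_thread_stack() check are modelled on this series, not quoted from it.

	static void do_interrupt_handler(struct pt_regs *regs,
					 void (*handler)(struct pt_regs *))
	{
		/* Only switch stacks when interrupted while on a task stack */
		if (on_thread_stack())
			call_on_irq_stack(regs, handler);
		else
			handler(regs);
	}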
#ifdef CONFIG_ARM_SDE_INTERFACE #ifdef CONFIG_ARM_SDE_INTERFACE
#include <asm/sdei.h> #include <asm/sdei.h>


@ -957,8 +957,10 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs)
* disabling the trap, otherwise update our in-memory copy. * disabling the trap, otherwise update our in-memory copy.
*/ */
if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) { if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
sve_set_vq(sve_vq_from_vl(current->thread.sve_vl) - 1); unsigned long vq_minus_one =
sve_flush_live(); sve_vq_from_vl(current->thread.sve_vl) - 1;
sve_set_vq(vq_minus_one);
sve_flush_live(vq_minus_one);
fpsimd_bind_task_to_cpu(); fpsimd_bind_task_to_cpu();
} else { } else {
fpsimd_to_sve(current); fpsimd_to_sve(current);
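
The point of the hunk is that the vector-quadword count is computed once and the same vq - 1 value feeds both sve_set_vq() and sve_flush_live(). A standalone check of the arithmetic; SVE_VQ_BYTES and the sve_vq_from_vl() mirror below are assumptions matching the uapi definition.

	#include <assert.h>

	#define SVE_VQ_BYTES	16	/* one quadword = 128 bits */

	static unsigned int sve_vq_from_vl(unsigned int vl)
	{
		return vl / SVE_VQ_BYTES;	/* VL in bytes -> VQ in quadwords */
	}

	int main(void)
	{
		/* A 256-bit SVE implementation: VL = 32 bytes, so VQ = 2 */
		unsigned int vq_minus_one = sve_vq_from_vl(32) - 1;

		assert(vq_minus_one == 1);	/* value programmed into ZCR_ELx.LEN */
		return 0;
	}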


@ -15,6 +15,7 @@
#include <asm/debug-monitors.h> #include <asm/debug-monitors.h>
#include <asm/ftrace.h> #include <asm/ftrace.h>
#include <asm/insn.h> #include <asm/insn.h>
#include <asm/patching.h>
#ifdef CONFIG_DYNAMIC_FTRACE #ifdef CONFIG_DYNAMIC_FTRACE
/* /*


@ -16,6 +16,7 @@
#include <asm/asm_pointer_auth.h> #include <asm/asm_pointer_auth.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#include <asm/boot.h> #include <asm/boot.h>
#include <asm/bug.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/cache.h> #include <asm/cache.h>
@ -117,8 +118,8 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
dmb sy // needed before dc ivac with dmb sy // needed before dc ivac with
// MMU off // MMU off
mov x1, #0x20 // 4 x 8 bytes add x1, x0, #0x20 // 4 x 8 bytes
b __inval_dcache_area // tail call b dcache_inval_poc // tail call
SYM_CODE_END(preserve_boot_args) SYM_CODE_END(preserve_boot_args)
/* /*
@ -195,7 +196,7 @@ SYM_CODE_END(preserve_boot_args)
and \iend, \iend, \istart // iend = (vend >> shift) & (ptrs - 1) and \iend, \iend, \istart // iend = (vend >> shift) & (ptrs - 1)
mov \istart, \ptrs mov \istart, \ptrs
mul \istart, \istart, \count mul \istart, \istart, \count
add \iend, \iend, \istart // iend += (count - 1) * ptrs add \iend, \iend, \istart // iend += count * ptrs
// our entries span multiple tables // our entries span multiple tables
lsr \istart, \vstart, \shift lsr \istart, \vstart, \shift
@ -268,8 +269,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
*/ */
adrp x0, init_pg_dir adrp x0, init_pg_dir
adrp x1, init_pg_end adrp x1, init_pg_end
sub x1, x1, x0 bl dcache_inval_poc
bl __inval_dcache_area
/* /*
* Clear the init page tables. * Clear the init page tables.
@ -354,7 +354,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
#endif #endif
1: 1:
ldr_l x4, idmap_ptrs_per_pgd ldr_l x4, idmap_ptrs_per_pgd
mov x5, x3 // __pa(__idmap_text_start)
adr_l x6, __idmap_text_end // __pa(__idmap_text_end) adr_l x6, __idmap_text_end // __pa(__idmap_text_end)
map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14 map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14
@ -382,39 +381,57 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
adrp x0, idmap_pg_dir adrp x0, idmap_pg_dir
adrp x1, idmap_pg_end adrp x1, idmap_pg_end
sub x1, x1, x0 bl dcache_inval_poc
bl __inval_dcache_area
adrp x0, init_pg_dir adrp x0, init_pg_dir
adrp x1, init_pg_end adrp x1, init_pg_end
sub x1, x1, x0 bl dcache_inval_poc
bl __inval_dcache_area
ret x28 ret x28
SYM_FUNC_END(__create_page_tables) SYM_FUNC_END(__create_page_tables)
/*
* Initialize CPU registers with task-specific and cpu-specific context.
*
* Create a final frame record at task_pt_regs(current)->stackframe, so
* that the unwinder can identify the final frame record of any task by
* its location in the task stack. We reserve the entire pt_regs space
* for consistency with user tasks and kthreads.
*/
.macro init_cpu_task tsk, tmp1, tmp2
msr sp_el0, \tsk
ldr \tmp1, [\tsk, #TSK_STACK]
add sp, \tmp1, #THREAD_SIZE
sub sp, sp, #PT_REGS_SIZE
stp xzr, xzr, [sp, #S_STACKFRAME]
add x29, sp, #S_STACKFRAME
scs_load \tsk
adr_l \tmp1, __per_cpu_offset
ldr w\tmp2, [\tsk, #TSK_CPU]
ldr \tmp1, [\tmp1, \tmp2, lsl #3]
set_this_cpu_offset \tmp1
.endm
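
For context, a hedged sketch of the unwinder-side check this layout enables; the helper name here is hypothetical, and the real test appears in the stacktrace.c hunk further down.

	/* Hypothetical helper: terminate unwinding at the reserved pt_regs frame */
	static inline bool is_final_frame(struct task_struct *tsk, unsigned long fp)
	{
		return fp == (unsigned long)task_pt_regs(tsk)->stackframe;
	}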
/* /*
* The following fragment of code is executed with the MMU enabled. * The following fragment of code is executed with the MMU enabled.
* *
* x0 = __PHYS_OFFSET * x0 = __PHYS_OFFSET
*/ */
SYM_FUNC_START_LOCAL(__primary_switched) SYM_FUNC_START_LOCAL(__primary_switched)
adrp x4, init_thread_union adr_l x4, init_task
add sp, x4, #THREAD_SIZE init_cpu_task x4, x5, x6
adr_l x5, init_task
msr sp_el0, x5 // Save thread_info
adr_l x8, vectors // load VBAR_EL1 with virtual adr_l x8, vectors // load VBAR_EL1 with virtual
msr vbar_el1, x8 // vector table address msr vbar_el1, x8 // vector table address
isb isb
stp xzr, x30, [sp, #-16]! stp x29, x30, [sp, #-16]!
mov x29, sp mov x29, sp
#ifdef CONFIG_SHADOW_CALL_STACK
adr_l scs_sp, init_shadow_call_stack // Set shadow call stack
#endif
str_l x21, __fdt_pointer, x5 // Save FDT pointer str_l x21, __fdt_pointer, x5 // Save FDT pointer
ldr_l x4, kimage_vaddr // Save the offset between ldr_l x4, kimage_vaddr // Save the offset between
@ -446,10 +463,9 @@ SYM_FUNC_START_LOCAL(__primary_switched)
0: 0:
#endif #endif
bl switch_to_vhe // Prefer VHE if possible bl switch_to_vhe // Prefer VHE if possible
add sp, sp, #16 ldp x29, x30, [sp], #16
mov x29, #0 bl start_kernel
mov x30, #0 ASM_BUG()
b start_kernel
SYM_FUNC_END(__primary_switched) SYM_FUNC_END(__primary_switched)
.pushsection ".rodata", "a" .pushsection ".rodata", "a"
@ -551,7 +567,7 @@ SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
cmp w0, #BOOT_CPU_MODE_EL2 cmp w0, #BOOT_CPU_MODE_EL2
b.ne 1f b.ne 1f
add x1, x1, #4 add x1, x1, #4
1: str w0, [x1] // This CPU has booted in EL1 1: str w0, [x1] // Save CPU boot mode
dmb sy dmb sy
dc ivac, x1 // Invalidate potentially stale cache line dc ivac, x1 // Invalidate potentially stale cache line
ret ret
@ -632,21 +648,17 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
isb isb
adr_l x0, secondary_data adr_l x0, secondary_data
ldr x1, [x0, #CPU_BOOT_STACK] // get secondary_data.stack
cbz x1, __secondary_too_slow
mov sp, x1
ldr x2, [x0, #CPU_BOOT_TASK] ldr x2, [x0, #CPU_BOOT_TASK]
cbz x2, __secondary_too_slow cbz x2, __secondary_too_slow
msr sp_el0, x2
scs_load x2, x3 init_cpu_task x2, x1, x3
mov x29, #0
mov x30, #0
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
ptrauth_keys_init_cpu x2, x3, x4, x5 ptrauth_keys_init_cpu x2, x3, x4, x5
#endif #endif
b secondary_start_kernel bl secondary_start_kernel
ASM_BUG()
SYM_FUNC_END(__secondary_switched) SYM_FUNC_END(__secondary_switched)
SYM_FUNC_START_LOCAL(__secondary_too_slow) SYM_FUNC_START_LOCAL(__secondary_too_slow)


@ -45,7 +45,7 @@
* Because this code has to be copied to a 'safe' page, it can't call out to
* other functions by PC-relative address. Also remember that it may be
* mid-way through over-writing other functions. For this reason it contains
* code from flush_icache_range() and uses the copy_page() macro. * code from caches_clean_inval_pou() and uses the copy_page() macro.
*
* This 'safe' page is mapped via ttbr0, and executed from there. This function
* switches to a copy of the linear map in ttbr1, performs the restore, then
@ -87,11 +87,12 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
copy_page x0, x1, x2, x3, x4, x5, x6, x7, x8, x9 copy_page x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
add x1, x10, #PAGE_SIZE add x1, x10, #PAGE_SIZE
/* Clean the copied page to PoU - based on flush_icache_range() */ /* Clean the copied page to PoU - based on caches_clean_inval_pou() */
raw_dcache_line_size x2, x3 raw_dcache_line_size x2, x3
sub x3, x2, #1 sub x3, x2, #1
bic x4, x10, x3 bic x4, x10, x3
2: dc cvau, x4 /* clean D line / unified line */ 2: /* clean D line / unified line */
alternative_insn "dc cvau, x4", "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE
add x4, x4, x2 add x4, x4, x2
cmp x4, x1 cmp x4, x1
b.lo 2b b.lo 2b


@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
return -ENOMEM; return -ENOMEM;
memcpy(page, src_start, length); memcpy(page, src_start, length);
__flush_icache_range((unsigned long)page, (unsigned long)page + length); caches_clean_inval_pou((unsigned long)page, (unsigned long)page + length);
rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page); rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
if (rc) if (rc)
return rc; return rc;
@ -240,8 +240,6 @@ static int create_safe_exec_page(void *src_start, size_t length,
return 0; return 0;
} }
#define dcache_clean_range(start, end) __flush_dcache_area(start, (end - start))
#ifdef CONFIG_ARM64_MTE #ifdef CONFIG_ARM64_MTE
static DEFINE_XARRAY(mte_pages); static DEFINE_XARRAY(mte_pages);
@ -383,13 +381,18 @@ int swsusp_arch_suspend(void)
ret = swsusp_save(); ret = swsusp_save();
} else { } else {
/* Clean kernel core startup/idle code to PoC*/ /* Clean kernel core startup/idle code to PoC*/
dcache_clean_range(__mmuoff_data_start, __mmuoff_data_end); dcache_clean_inval_poc((unsigned long)__mmuoff_data_start,
dcache_clean_range(__idmap_text_start, __idmap_text_end); (unsigned long)__mmuoff_data_end);
dcache_clean_inval_poc((unsigned long)__idmap_text_start,
(unsigned long)__idmap_text_end);
/* Clean kvm setup code to PoC? */ /* Clean kvm setup code to PoC? */
if (el2_reset_needed()) { if (el2_reset_needed()) {
dcache_clean_range(__hyp_idmap_text_start, __hyp_idmap_text_end); dcache_clean_inval_poc(
dcache_clean_range(__hyp_text_start, __hyp_text_end); (unsigned long)__hyp_idmap_text_start,
(unsigned long)__hyp_idmap_text_end);
dcache_clean_inval_poc((unsigned long)__hyp_text_start,
(unsigned long)__hyp_text_end);
} }
swsusp_mte_restore_tags(); swsusp_mte_restore_tags();
@ -474,7 +477,8 @@ int swsusp_arch_resume(void)
* The hibernate exit text contains a set of el2 vectors, that will * The hibernate exit text contains a set of el2 vectors, that will
* be executed at el2 with the mmu off in order to reload hyp-stub. * be executed at el2 with the mmu off in order to reload hyp-stub.
*/ */
__flush_dcache_area(hibernate_exit, exit_size); dcache_clean_inval_poc((unsigned long)hibernate_exit,
(unsigned long)hibernate_exit + exit_size);
/* /*
* KASLR will cause the el2 vectors to be in a different location in * KASLR will cause the el2 vectors to be in a different location in

arch/arm64/kernel/idle.c

@ -0,0 +1,46 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Low-level idle sequences
*/
#include <linux/cpu.h>
#include <linux/irqflags.h>
#include <asm/barrier.h>
#include <asm/cpuidle.h>
#include <asm/cpufeature.h>
#include <asm/sysreg.h>
/*
* cpu_do_idle()
*
* Idle the processor (wait for interrupt).
*
* If the CPU supports priority masking we must do additional work to
* ensure that interrupts are not masked at the PMR (because the core will
* not wake up if we block the wake up signal in the interrupt controller).
*/
void noinstr cpu_do_idle(void)
{
struct arm_cpuidle_irq_context context;
arm_cpuidle_save_irq_context(&context);
dsb(sy);
wfi();
arm_cpuidle_restore_irq_context(&context);
}
/*
* This is our default idle handler.
*/
void noinstr arch_cpu_idle(void)
{
/*
* This should do all the clock switching and wait for interrupt
* tricks
*/
cpu_do_idle();
raw_local_irq_enable();
}


@ -237,7 +237,8 @@ asmlinkage void __init init_feature_override(void)
for (i = 0; i < ARRAY_SIZE(regs); i++) { for (i = 0; i < ARRAY_SIZE(regs); i++) {
if (regs[i]->override) if (regs[i]->override)
__flush_dcache_area(regs[i]->override, dcache_clean_inval_poc((unsigned long)regs[i]->override,
(unsigned long)regs[i]->override +
sizeof(*regs[i]->override)); sizeof(*regs[i]->override));
} }
} }


@ -35,7 +35,7 @@ __efistub_strnlen = __pi_strnlen;
__efistub_strcmp = __pi_strcmp; __efistub_strcmp = __pi_strcmp;
__efistub_strncmp = __pi_strncmp; __efistub_strncmp = __pi_strncmp;
__efistub_strrchr = __pi_strrchr; __efistub_strrchr = __pi_strrchr;
__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc; __efistub_dcache_clean_poc = __pi_dcache_clean_poc;
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
__efistub___memcpy = __pi_memcpy; __efistub___memcpy = __pi_memcpy;


@ -8,6 +8,7 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/jump_label.h> #include <linux/jump_label.h>
#include <asm/insn.h> #include <asm/insn.h>
#include <asm/patching.h>
void arch_jump_label_transform(struct jump_entry *entry, void arch_jump_label_transform(struct jump_entry *entry,
enum jump_label_type type) enum jump_label_type type)


@ -72,7 +72,9 @@ u64 __init kaslr_early_init(void)
* we end up running with module randomization disabled. * we end up running with module randomization disabled.
*/ */
module_alloc_base = (u64)_etext - MODULES_VSIZE; module_alloc_base = (u64)_etext - MODULES_VSIZE;
__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base)); dcache_clean_inval_poc((unsigned long)&module_alloc_base,
(unsigned long)&module_alloc_base +
sizeof(module_alloc_base));
/* /*
* Try to map the FDT early. If this fails, we simply bail, * Try to map the FDT early. If this fails, we simply bail,
@ -170,8 +172,12 @@ u64 __init kaslr_early_init(void)
module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21; module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
module_alloc_base &= PAGE_MASK; module_alloc_base &= PAGE_MASK;
__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base)); dcache_clean_inval_poc((unsigned long)&module_alloc_base,
__flush_dcache_area(&memstart_offset_seed, sizeof(memstart_offset_seed)); (unsigned long)&module_alloc_base +
sizeof(module_alloc_base));
dcache_clean_inval_poc((unsigned long)&memstart_offset_seed,
(unsigned long)&memstart_offset_seed +
sizeof(memstart_offset_seed));
return offset; return offset;
} }
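
All of these conversions follow the same pattern, so a short hedged sketch of the old and new calling conventions; the wrapper and the object name are illustrative only.

	static void clean_and_invalidate_to_poc(void *obj, size_t size)
	{
		unsigned long start = (unsigned long)obj;

		/* Old interface (removed): base pointer plus size in bytes */
		/* __flush_dcache_area(obj, size); */

		/* New interface: explicit [start, end) virtual address range */
		dcache_clean_inval_poc(start, start + size);
	}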


@ -17,6 +17,7 @@
#include <asm/debug-monitors.h> #include <asm/debug-monitors.h>
#include <asm/insn.h> #include <asm/insn.h>
#include <asm/patching.h>
#include <asm/traps.h> #include <asm/traps.h>
struct dbg_reg_def_t dbg_reg_def[DBG_MAX_REG_NUM] = { struct dbg_reg_def_t dbg_reg_def[DBG_MAX_REG_NUM] = {


@ -68,10 +68,16 @@ int machine_kexec_post_load(struct kimage *kimage)
kimage->arch.kern_reloc = __pa(reloc_code); kimage->arch.kern_reloc = __pa(reloc_code);
kexec_image_info(kimage); kexec_image_info(kimage);
/* Flush the reloc_code in preparation for its execution. */ /*
__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size); * For execution with the MMU off, reloc_code needs to be cleaned to the
flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code + * PoC and invalidated from the I-cache.
arm64_relocate_new_kernel_size); */
dcache_clean_inval_poc((unsigned long)reloc_code,
(unsigned long)reloc_code +
arm64_relocate_new_kernel_size);
icache_inval_pou((uintptr_t)reloc_code,
(uintptr_t)reloc_code +
arm64_relocate_new_kernel_size);
return 0; return 0;
} }
@ -102,16 +108,18 @@ static void kexec_list_flush(struct kimage *kimage)
for (entry = &kimage->head; ; entry++) { for (entry = &kimage->head; ; entry++) {
unsigned int flag; unsigned int flag;
void *addr; unsigned long addr;
/* flush the list entries. */ /* flush the list entries. */
__flush_dcache_area(entry, sizeof(kimage_entry_t)); dcache_clean_inval_poc((unsigned long)entry,
(unsigned long)entry +
sizeof(kimage_entry_t));
flag = *entry & IND_FLAGS; flag = *entry & IND_FLAGS;
if (flag == IND_DONE) if (flag == IND_DONE)
break; break;
addr = phys_to_virt(*entry & PAGE_MASK); addr = (unsigned long)phys_to_virt(*entry & PAGE_MASK);
switch (flag) { switch (flag) {
case IND_INDIRECTION: case IND_INDIRECTION:
@ -120,7 +128,7 @@ static void kexec_list_flush(struct kimage *kimage)
break; break;
case IND_SOURCE: case IND_SOURCE:
/* flush the source pages. */ /* flush the source pages. */
__flush_dcache_area(addr, PAGE_SIZE); dcache_clean_inval_poc(addr, addr + PAGE_SIZE);
break; break;
case IND_DESTINATION: case IND_DESTINATION:
break; break;
@ -147,8 +155,10 @@ static void kexec_segment_flush(const struct kimage *kimage)
kimage->segment[i].memsz, kimage->segment[i].memsz,
kimage->segment[i].memsz / PAGE_SIZE); kimage->segment[i].memsz / PAGE_SIZE);
__flush_dcache_area(phys_to_virt(kimage->segment[i].mem), dcache_clean_inval_poc(
kimage->segment[i].memsz); (unsigned long)phys_to_virt(kimage->segment[i].mem),
(unsigned long)phys_to_virt(kimage->segment[i].mem) +
kimage->segment[i].memsz);
} }
} }


@ -0,0 +1,150 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/stop_machine.h>
#include <linux/uaccess.h>
#include <asm/cacheflush.h>
#include <asm/fixmap.h>
#include <asm/insn.h>
#include <asm/kprobes.h>
#include <asm/patching.h>
#include <asm/sections.h>
static DEFINE_RAW_SPINLOCK(patch_lock);
static bool is_exit_text(unsigned long addr)
{
/* discarded with init text/data */
return system_state < SYSTEM_RUNNING &&
addr >= (unsigned long)__exittext_begin &&
addr < (unsigned long)__exittext_end;
}
static bool is_image_text(unsigned long addr)
{
return core_kernel_text(addr) || is_exit_text(addr);
}
static void __kprobes *patch_map(void *addr, int fixmap)
{
unsigned long uintaddr = (uintptr_t) addr;
bool image = is_image_text(uintaddr);
struct page *page;
if (image)
page = phys_to_page(__pa_symbol(addr));
else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
page = vmalloc_to_page(addr);
else
return addr;
BUG_ON(!page);
return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
(uintaddr & ~PAGE_MASK));
}
static void __kprobes patch_unmap(int fixmap)
{
clear_fixmap(fixmap);
}
/*
* In ARMv8-A, A64 instructions have a fixed length of 32 bits and are always
* little-endian.
*/
int __kprobes aarch64_insn_read(void *addr, u32 *insnp)
{
int ret;
__le32 val;
ret = copy_from_kernel_nofault(&val, addr, AARCH64_INSN_SIZE);
if (!ret)
*insnp = le32_to_cpu(val);
return ret;
}
static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
{
void *waddr = addr;
unsigned long flags = 0;
int ret;
raw_spin_lock_irqsave(&patch_lock, flags);
waddr = patch_map(addr, FIX_TEXT_POKE0);
ret = copy_to_kernel_nofault(waddr, &insn, AARCH64_INSN_SIZE);
patch_unmap(FIX_TEXT_POKE0);
raw_spin_unlock_irqrestore(&patch_lock, flags);
return ret;
}
int __kprobes aarch64_insn_write(void *addr, u32 insn)
{
return __aarch64_insn_write(addr, cpu_to_le32(insn));
}
int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
{
u32 *tp = addr;
int ret;
/* A64 instructions must be word aligned */
if ((uintptr_t)tp & 0x3)
return -EINVAL;
ret = aarch64_insn_write(tp, insn);
if (ret == 0)
caches_clean_inval_pou((uintptr_t)tp,
(uintptr_t)tp + AARCH64_INSN_SIZE);
return ret;
}
struct aarch64_insn_patch {
void **text_addrs;
u32 *new_insns;
int insn_cnt;
atomic_t cpu_count;
};
static int __kprobes aarch64_insn_patch_text_cb(void *arg)
{
int i, ret = 0;
struct aarch64_insn_patch *pp = arg;
/* The first CPU becomes master */
if (atomic_inc_return(&pp->cpu_count) == 1) {
for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
pp->new_insns[i]);
/* Notify other processors with an additional increment. */
atomic_inc(&pp->cpu_count);
} else {
while (atomic_read(&pp->cpu_count) <= num_online_cpus())
cpu_relax();
isb();
}
return ret;
}
int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt)
{
struct aarch64_insn_patch patch = {
.text_addrs = addrs,
.new_insns = insns,
.insn_cnt = cnt,
.cpu_count = ATOMIC_INIT(0),
};
if (cnt <= 0)
return -EINVAL;
return stop_machine_cpuslocked(aarch64_insn_patch_text_cb, &patch,
cpu_online_mask);
}
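
A brief usage sketch, assuming a caller along the lines of ftrace or jump_label; the wrapper below is hypothetical, while the insn-generation helpers are the existing ones from the insn library.

	static int patch_branch(void *addr, unsigned long target)
	{
		u32 insn = aarch64_insn_gen_branch_imm((unsigned long)addr, target,
						       AARCH64_INSN_BRANCH_NOLINK);

		if (insn == AARCH64_BREAK_FAULT)
			return -EINVAL;

		/* Writes the instruction and performs the cache maintenance */
		return aarch64_insn_patch_text_nosync(addr, insn);
	}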


@ -116,7 +116,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
tail = (struct frame_tail __user *)regs->regs[29]; tail = (struct frame_tail __user *)regs->regs[29];
while (entry->nr < entry->max_stack && while (entry->nr < entry->max_stack &&
tail && !((unsigned long)tail & 0xf)) tail && !((unsigned long)tail & 0x7))
tail = user_backtrace(tail, entry); tail = user_backtrace(tail, entry);
} else { } else {
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT


@ -165,10 +165,7 @@ armv8pmu_events_sysfs_show(struct device *dev,
} }
#define ARMV8_EVENT_ATTR(name, config) \ #define ARMV8_EVENT_ATTR(name, config) \
(&((struct perf_pmu_events_attr) { \ PMU_EVENT_ATTR_ID(name, armv8pmu_events_sysfs_show, config)
.attr = __ATTR(name, 0444, armv8pmu_events_sysfs_show, NULL), \
.id = config, \
}).attr.attr)
static struct attribute *armv8_pmuv3_event_attrs[] = { static struct attribute *armv8_pmuv3_event_attrs[] = {
ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR), ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR),
@ -312,13 +309,46 @@ static ssize_t slots_show(struct device *dev, struct device_attribute *attr,
struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu); struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
u32 slots = cpu_pmu->reg_pmmir & ARMV8_PMU_SLOTS_MASK; u32 slots = cpu_pmu->reg_pmmir & ARMV8_PMU_SLOTS_MASK;
return snprintf(page, PAGE_SIZE, "0x%08x\n", slots); return sysfs_emit(page, "0x%08x\n", slots);
} }
static DEVICE_ATTR_RO(slots); static DEVICE_ATTR_RO(slots);
static ssize_t bus_slots_show(struct device *dev, struct device_attribute *attr,
char *page)
{
struct pmu *pmu = dev_get_drvdata(dev);
struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
u32 bus_slots = (cpu_pmu->reg_pmmir >> ARMV8_PMU_BUS_SLOTS_SHIFT)
& ARMV8_PMU_BUS_SLOTS_MASK;
return sysfs_emit(page, "0x%08x\n", bus_slots);
}
static DEVICE_ATTR_RO(bus_slots);
static ssize_t bus_width_show(struct device *dev, struct device_attribute *attr,
char *page)
{
struct pmu *pmu = dev_get_drvdata(dev);
struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
u32 bus_width = (cpu_pmu->reg_pmmir >> ARMV8_PMU_BUS_WIDTH_SHIFT)
& ARMV8_PMU_BUS_WIDTH_MASK;
u32 val = 0;
/* Encoded as Log2(number of bytes), plus one */
if (bus_width > 2 && bus_width < 13)
val = 1 << (bus_width - 1);
return sysfs_emit(page, "0x%08x\n", val);
}
static DEVICE_ATTR_RO(bus_width);
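
The encoding comment is easy to misread, so a standalone check of the decode; the function name and sample values are illustrative only.

	#include <assert.h>

	static unsigned int decode_bus_width(unsigned int field)
	{
		/* Encoded as Log2(number of bytes), plus one; out of range -> 0 */
		if (field > 2 && field < 13)
			return 1u << (field - 1);
		return 0;
	}

	int main(void)
	{
		assert(decode_bus_width(4) == 8);	/* 2^(4-1) = 8-byte bus */
		assert(decode_bus_width(0) == 0);	/* reserved/undefined */
		return 0;
	}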
static struct attribute *armv8_pmuv3_caps_attrs[] = { static struct attribute *armv8_pmuv3_caps_attrs[] = {
&dev_attr_slots.attr, &dev_attr_slots.attr,
&dev_attr_bus_slots.attr,
&dev_attr_bus_width.attr,
NULL, NULL,
}; };


@ -7,26 +7,28 @@
* Copyright (C) 2013 Linaro Limited. * Copyright (C) 2013 Linaro Limited.
* Author: Sandeepa Prabhu <sandeepa.prabhu@linaro.org> * Author: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
*/ */
#include <linux/extable.h>
#include <linux/kasan.h> #include <linux/kasan.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <linux/extable.h>
#include <linux/slab.h>
#include <linux/stop_machine.h>
#include <linux/sched/debug.h> #include <linux/sched/debug.h>
#include <linux/set_memory.h> #include <linux/set_memory.h>
#include <linux/slab.h>
#include <linux/stop_machine.h>
#include <linux/stringify.h> #include <linux/stringify.h>
#include <linux/vmalloc.h>
#include <asm/traps.h>
#include <asm/ptrace.h>
#include <asm/cacheflush.h>
#include <asm/debug-monitors.h>
#include <asm/daifflags.h>
#include <asm/system_misc.h>
#include <asm/insn.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
#include <linux/vmalloc.h>
#include <asm/cacheflush.h>
#include <asm/daifflags.h>
#include <asm/debug-monitors.h>
#include <asm/insn.h>
#include <asm/irq.h> #include <asm/irq.h>
#include <asm/patching.h>
#include <asm/ptrace.h>
#include <asm/sections.h> #include <asm/sections.h>
#include <asm/system_misc.h>
#include <asm/traps.h>
#include "decode-insn.h" #include "decode-insn.h"


@ -10,6 +10,7 @@
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/traps.h>
#include "simulate-insn.h" #include "simulate-insn.h"


@ -21,7 +21,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
memcpy(dst, src, len); memcpy(dst, src, len);
/* flush caches (dcache/icache) */ /* flush caches (dcache/icache) */
sync_icache_aliases(dst, len); sync_icache_aliases((unsigned long)dst, (unsigned long)dst + len);
kunmap_atomic(xol_page_kaddr); kunmap_atomic(xol_page_kaddr);
} }


@ -18,7 +18,6 @@
#include <linux/sched/task.h> #include <linux/sched/task.h>
#include <linux/sched/task_stack.h> #include <linux/sched/task_stack.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/lockdep.h>
#include <linux/mman.h> #include <linux/mman.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/nospec.h> #include <linux/nospec.h>
@ -46,7 +45,6 @@
#include <linux/prctl.h> #include <linux/prctl.h>
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/arch_gicv3.h>
#include <asm/compat.h> #include <asm/compat.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
@ -74,63 +72,6 @@ EXPORT_SYMBOL_GPL(pm_power_off);
void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd); void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
static void noinstr __cpu_do_idle(void)
{
dsb(sy);
wfi();
}
static void noinstr __cpu_do_idle_irqprio(void)
{
unsigned long pmr;
unsigned long daif_bits;
daif_bits = read_sysreg(daif);
write_sysreg(daif_bits | PSR_I_BIT | PSR_F_BIT, daif);
/*
* Unmask PMR before going idle to make sure interrupts can
* be raised.
*/
pmr = gic_read_pmr();
gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
__cpu_do_idle();
gic_write_pmr(pmr);
write_sysreg(daif_bits, daif);
}
/*
* cpu_do_idle()
*
* Idle the processor (wait for interrupt).
*
* If the CPU supports priority masking we must do additional work to
* ensure that interrupts are not masked at the PMR (because the core will
* not wake up if we block the wake up signal in the interrupt controller).
*/
void noinstr cpu_do_idle(void)
{
if (system_uses_irq_prio_masking())
__cpu_do_idle_irqprio();
else
__cpu_do_idle();
}
/*
* This is our default idle handler.
*/
void noinstr arch_cpu_idle(void)
{
/*
* This should do all the clock switching and wait for interrupt
* tricks
*/
cpu_do_idle();
raw_local_irq_enable();
}
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
void arch_cpu_idle_dead(void) void arch_cpu_idle_dead(void)
{ {
@ -435,6 +376,11 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
} }
p->thread.cpu_context.pc = (unsigned long)ret_from_fork; p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
p->thread.cpu_context.sp = (unsigned long)childregs; p->thread.cpu_context.sp = (unsigned long)childregs;
/*
* For the benefit of the unwinder, set up childregs->stackframe
* as the final frame for the new task.
*/
p->thread.cpu_context.fp = (unsigned long)childregs->stackframe;
ptrace_hw_copy_thread(p); ptrace_hw_copy_thread(p);
@ -527,6 +473,15 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
write_sysreg(val, cntkctl_el1); write_sysreg(val, cntkctl_el1);
} }
static void compat_thread_switch(struct task_struct *next)
{
if (!is_compat_thread(task_thread_info(next)))
return;
if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
set_tsk_thread_flag(next, TIF_NOTIFY_RESUME);
}
static void update_sctlr_el1(u64 sctlr) static void update_sctlr_el1(u64 sctlr)
{ {
/* /*
@ -568,6 +523,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
ssbs_thread_switch(next); ssbs_thread_switch(next);
erratum_1418040_thread_switch(prev, next); erratum_1418040_thread_switch(prev, next);
ptrauth_thread_switch_user(next); ptrauth_thread_switch_user(next);
compat_thread_switch(next);
/* /*
* Complete any pending TLB or cache maintenance on this CPU in case * Complete any pending TLB or cache maintenance on this CPU in case
@ -633,8 +589,15 @@ unsigned long arch_align_stack(unsigned long sp)
*/ */
void arch_setup_new_exec(void) void arch_setup_new_exec(void)
{ {
current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0; unsigned long mmflags = 0;
if (is_compat_task()) {
mmflags = MMCF_AARCH32;
if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
set_tsk_thread_flag(current, TIF_NOTIFY_RESUME);
}
current->mm->context.flags = mmflags;
ptrauth_thread_init_user(); ptrauth_thread_init_user();
mte_thread_init_user(); mte_thread_init_user();
@ -724,22 +687,6 @@ static int __init tagged_addr_init(void)
core_initcall(tagged_addr_init); core_initcall(tagged_addr_init);
#endif /* CONFIG_ARM64_TAGGED_ADDR_ABI */ #endif /* CONFIG_ARM64_TAGGED_ADDR_ABI */
asmlinkage void __sched arm64_preempt_schedule_irq(void)
{
lockdep_assert_irqs_disabled();
/*
* Preempting a task from an IRQ means we leave copies of PSTATE
* on the stack. cpufeature's enable calls may modify PSTATE, but
* resuming one of these preempted tasks would undo those changes.
*
* Only allow a task to be preempted once cpufeatures have been
* enabled.
*/
if (system_capabilities_finalized())
preempt_schedule_irq();
}
#ifdef CONFIG_BINFMT_ELF #ifdef CONFIG_BINFMT_ELF
int arch_elf_adjust_prot(int prot, const struct arch_elf_state *state, int arch_elf_adjust_prot(int prot, const struct arch_elf_state *state,
bool has_interp, bool is_interp) bool has_interp, bool is_interp)


@ -122,7 +122,7 @@ static bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
{ {
return ((addr & ~(THREAD_SIZE - 1)) == return ((addr & ~(THREAD_SIZE - 1)) ==
(kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1))) || (kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1))) ||
on_irq_stack(addr, NULL); on_irq_stack(addr, sizeof(unsigned long), NULL);
} }
/** /**


@ -162,31 +162,33 @@ static int init_sdei_scs(void)
return err; return err;
} }
static bool on_sdei_normal_stack(unsigned long sp, struct stack_info *info) static bool on_sdei_normal_stack(unsigned long sp, unsigned long size,
struct stack_info *info)
{ {
unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr); unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr);
unsigned long high = low + SDEI_STACK_SIZE; unsigned long high = low + SDEI_STACK_SIZE;
return on_stack(sp, low, high, STACK_TYPE_SDEI_NORMAL, info); return on_stack(sp, size, low, high, STACK_TYPE_SDEI_NORMAL, info);
} }
static bool on_sdei_critical_stack(unsigned long sp, struct stack_info *info) static bool on_sdei_critical_stack(unsigned long sp, unsigned long size,
struct stack_info *info)
{ {
unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr); unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr);
unsigned long high = low + SDEI_STACK_SIZE; unsigned long high = low + SDEI_STACK_SIZE;
return on_stack(sp, low, high, STACK_TYPE_SDEI_CRITICAL, info); return on_stack(sp, size, low, high, STACK_TYPE_SDEI_CRITICAL, info);
} }
bool _on_sdei_stack(unsigned long sp, struct stack_info *info) bool _on_sdei_stack(unsigned long sp, unsigned long size, struct stack_info *info)
{ {
if (!IS_ENABLED(CONFIG_VMAP_STACK)) if (!IS_ENABLED(CONFIG_VMAP_STACK))
return false; return false;
if (on_sdei_critical_stack(sp, info)) if (on_sdei_critical_stack(sp, size, info))
return true; return true;
if (on_sdei_normal_stack(sp, info)) if (on_sdei_normal_stack(sp, size, info))
return true; return true;
return false; return false;
@ -231,13 +233,13 @@ out_err:
} }
/*
* __sdei_handler() returns one of: * do_sdei_event() returns one of:
* SDEI_EV_HANDLED - success, return to the interrupted context.
* SDEI_EV_FAILED - failure, return this error code to firmware.
* virtual-address - success, return to this address.
*/
static __kprobes unsigned long _sdei_handler(struct pt_regs *regs, unsigned long __kprobes do_sdei_event(struct pt_regs *regs,
struct sdei_registered_event *arg) struct sdei_registered_event *arg)
{ {
u32 mode; u32 mode;
int i, err = 0; int i, err = 0;
@ -292,45 +294,3 @@ static __kprobes unsigned long _sdei_handler(struct pt_regs *regs,
return vbar + 0x480; return vbar + 0x480;
} }
static void __kprobes notrace __sdei_pstate_entry(void)
{
/*
* The original SDEI spec (ARM DEN 0054A) can be read ambiguously as to
* whether PSTATE bits are inherited unchanged or generated from
* scratch, and the TF-A implementation always clears PAN and always
* clears UAO. There are no other known implementations.
*
* Subsequent revisions (ARM DEN 0054B) follow the usual rules for how
* PSTATE is modified upon architectural exceptions, and so PAN is
* either inherited or set per SCTLR_ELx.SPAN, and UAO is always
* cleared.
*
* We must explicitly reset PAN to the expected state, including
* clearing it when the host isn't using it, in case a VM had it set.
*/
if (system_uses_hw_pan())
set_pstate_pan(1);
else if (cpu_has_pan())
set_pstate_pan(0);
}
asmlinkage noinstr unsigned long
__sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
{
unsigned long ret;
/*
* We didn't take an exception to get here, so the HW hasn't
* set/cleared bits in PSTATE that we may rely on. Initialize PAN.
*/
__sdei_pstate_entry();
arm64_enter_nmi(regs);
ret = _sdei_handler(regs, arg);
arm64_exit_nmi(regs);
return ret;
}


@ -87,12 +87,6 @@ void __init smp_setup_processor_id(void)
u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK; u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
set_cpu_logical_map(0, mpidr); set_cpu_logical_map(0, mpidr);
/*
* clear __my_cpu_offset on boot CPU to avoid hang caused by
* using percpu variable early, for example, lockdep will
* access percpu variable inside lock_release
*/
set_my_cpu_offset(0);
pr_info("Booting Linux on physical CPU 0x%010lx [0x%08x]\n", pr_info("Booting Linux on physical CPU 0x%010lx [0x%08x]\n",
(unsigned long)mpidr, read_cpuid_id()); (unsigned long)mpidr, read_cpuid_id());
} }
@ -381,7 +375,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
* faults in case uaccess_enable() is inadvertently called by the init * faults in case uaccess_enable() is inadvertently called by the init
* thread. * thread.
*/ */
init_task.thread_info.ttbr0 = __pa_symbol(reserved_pg_dir); init_task.thread_info.ttbr0 = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
#endif #endif
if (boot_args[1] || boot_args[2] || boot_args[3]) { if (boot_args[1] || boot_args[2] || boot_args[3]) {


@ -911,6 +911,19 @@ static void do_signal(struct pt_regs *regs)
restore_saved_sigmask(); restore_saved_sigmask();
} }
static bool cpu_affinity_invalid(struct pt_regs *regs)
{
if (!compat_user_mode(regs))
return false;
/*
* We're preemptible, but a reschedule will cause us to check the
* affinity again.
*/
return !cpumask_test_cpu(raw_smp_processor_id(),
system_32bit_el0_cpumask());
}
asmlinkage void do_notify_resume(struct pt_regs *regs, asmlinkage void do_notify_resume(struct pt_regs *regs,
unsigned long thread_flags) unsigned long thread_flags)
{ {
@ -938,6 +951,19 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
if (thread_flags & _TIF_NOTIFY_RESUME) { if (thread_flags & _TIF_NOTIFY_RESUME) {
tracehook_notify_resume(regs); tracehook_notify_resume(regs);
rseq_handle_notify_resume(NULL, regs); rseq_handle_notify_resume(NULL, regs);
/*
* If we reschedule after checking the affinity
* then we must ensure that TIF_NOTIFY_RESUME
* is set so that we check the affinity again.
* Since tracehook_notify_resume() clears the
* flag, ensure that the compiler doesn't move
* it after the affinity check.
*/
barrier();
if (cpu_affinity_invalid(regs))
force_sig(SIGKILL);
} }
if (thread_flags & _TIF_FOREIGN_FPSTATE) if (thread_flags & _TIF_FOREIGN_FPSTATE)


@ -7,8 +7,34 @@
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#include <asm/thread_info.h>
/*
* If we have SMCCC v1.3 and (as is likely) no SVE state in
* the registers then set the SMCCC hint bit to say there's no
* need to preserve it. Do this by directly adjusting the SMCCC
* function value which is already stored in x0 ready to be called.
*/
SYM_FUNC_START(__arm_smccc_sve_check)
ldr_l x16, smccc_has_sve_hint
cbz x16, 2f
get_current_task x16
ldr x16, [x16, #TSK_TI_FLAGS]
tbnz x16, #TIF_FOREIGN_FPSTATE, 1f // Any live FP state?
tbnz x16, #TIF_SVE, 2f // Does that state include SVE?
1: orr x0, x0, ARM_SMCCC_1_3_SVE_HINT
2: ret
SYM_FUNC_END(__arm_smccc_sve_check)
EXPORT_SYMBOL(__arm_smccc_sve_check)
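
The hint is just a bit ORed into the function ID before the conduit instruction. A standalone illustration; the PSCI function ID is only an example, and ARM_SMCCC_1_3_SVE_HINT is bit 16 per SMCCC v1.3.

	#include <assert.h>
	#include <stdint.h>

	#define ARM_SMCCC_1_3_SVE_HINT	0x10000U	/* bit 16 of the function ID */

	int main(void)
	{
		uint32_t fn = 0x84000000U;	/* e.g. PSCI_VERSION (SMC32) */

		/* What __arm_smccc_sve_check does when no live SVE state exists */
		fn |= ARM_SMCCC_1_3_SVE_HINT;

		assert(fn == 0x84010000U);
		return 0;
	}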
.macro SMCCC instr .macro SMCCC instr
alternative_if ARM64_SVE
bl __arm_smccc_sve_check
alternative_else_nop_endif
\instr #0 \instr #0
ldr x4, [sp] ldr x4, [sp]
stp x0, x1, [x4, #ARM_SMCCC_RES_X0_OFFS] stp x0, x1, [x4, #ARM_SMCCC_RES_X0_OFFS]
@ -43,3 +69,60 @@ SYM_FUNC_START(__arm_smccc_hvc)
SMCCC hvc SMCCC hvc
SYM_FUNC_END(__arm_smccc_hvc) SYM_FUNC_END(__arm_smccc_hvc)
EXPORT_SYMBOL(__arm_smccc_hvc) EXPORT_SYMBOL(__arm_smccc_hvc)
.macro SMCCC_1_2 instr
/* Save `res` and free a GPR that won't be clobbered */
stp x1, x19, [sp, #-16]!
/* Ensure `args` won't be clobbered while loading regs in next step */
mov x19, x0
/* Load the registers x0 - x17 from the struct arm_smccc_1_2_regs */
ldp x0, x1, [x19, #ARM_SMCCC_1_2_REGS_X0_OFFS]
ldp x2, x3, [x19, #ARM_SMCCC_1_2_REGS_X2_OFFS]
ldp x4, x5, [x19, #ARM_SMCCC_1_2_REGS_X4_OFFS]
ldp x6, x7, [x19, #ARM_SMCCC_1_2_REGS_X6_OFFS]
ldp x8, x9, [x19, #ARM_SMCCC_1_2_REGS_X8_OFFS]
ldp x10, x11, [x19, #ARM_SMCCC_1_2_REGS_X10_OFFS]
ldp x12, x13, [x19, #ARM_SMCCC_1_2_REGS_X12_OFFS]
ldp x14, x15, [x19, #ARM_SMCCC_1_2_REGS_X14_OFFS]
ldp x16, x17, [x19, #ARM_SMCCC_1_2_REGS_X16_OFFS]
\instr #0
/* Load the `res` from the stack */
ldr x19, [sp]
/* Store the registers x0 - x17 into the result structure */
stp x0, x1, [x19, #ARM_SMCCC_1_2_REGS_X0_OFFS]
stp x2, x3, [x19, #ARM_SMCCC_1_2_REGS_X2_OFFS]
stp x4, x5, [x19, #ARM_SMCCC_1_2_REGS_X4_OFFS]
stp x6, x7, [x19, #ARM_SMCCC_1_2_REGS_X6_OFFS]
stp x8, x9, [x19, #ARM_SMCCC_1_2_REGS_X8_OFFS]
stp x10, x11, [x19, #ARM_SMCCC_1_2_REGS_X10_OFFS]
stp x12, x13, [x19, #ARM_SMCCC_1_2_REGS_X12_OFFS]
stp x14, x15, [x19, #ARM_SMCCC_1_2_REGS_X14_OFFS]
stp x16, x17, [x19, #ARM_SMCCC_1_2_REGS_X16_OFFS]
/* Restore original x19 */
ldp xzr, x19, [sp], #16
ret
.endm
/*
* void arm_smccc_1_2_hvc(const struct arm_smccc_1_2_regs *args,
* struct arm_smccc_1_2_regs *res);
*/
SYM_FUNC_START(arm_smccc_1_2_hvc)
SMCCC_1_2 hvc
SYM_FUNC_END(arm_smccc_1_2_hvc)
EXPORT_SYMBOL(arm_smccc_1_2_hvc)
/*
* void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
* struct arm_smccc_1_2_regs *res);
*/
SYM_FUNC_START(arm_smccc_1_2_smc)
SMCCC_1_2 smc
SYM_FUNC_END(arm_smccc_1_2_smc)
EXPORT_SYMBOL(arm_smccc_1_2_smc)
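
A hedged usage sketch: unlike the v1.1 helpers, the v1.2 interface passes and returns the whole x0-x17 window through struct arm_smccc_1_2_regs. The function ID below is a made-up example and the wrapper is hypothetical.

	static void example_1_2_call(void)
	{
		struct arm_smccc_1_2_regs args = {
			.a0 = 0xc4000030,	/* hypothetical SMC64 function ID */
		};
		struct arm_smccc_1_2_regs res;

		arm_smccc_1_2_smc(&args, &res);

		if (res.a0)	/* interpretation of res.a0 is call-specific */
			pr_warn("SMCCC 1.2 call returned %ld\n", (long)res.a0);
	}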


@ -120,9 +120,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
* page tables. * page tables.
*/ */
secondary_data.task = idle; secondary_data.task = idle;
secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
update_cpu_boot_status(CPU_MMU_OFF); update_cpu_boot_status(CPU_MMU_OFF);
__flush_dcache_area(&secondary_data, sizeof(secondary_data));
/* Now bring the CPU into our world */ /* Now bring the CPU into our world */
ret = boot_secondary(cpu, idle); ret = boot_secondary(cpu, idle);
@ -142,8 +140,6 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
pr_crit("CPU%u: failed to come online\n", cpu); pr_crit("CPU%u: failed to come online\n", cpu);
secondary_data.task = NULL; secondary_data.task = NULL;
secondary_data.stack = NULL;
__flush_dcache_area(&secondary_data, sizeof(secondary_data));
status = READ_ONCE(secondary_data.status); status = READ_ONCE(secondary_data.status);
if (status == CPU_MMU_OFF) if (status == CPU_MMU_OFF)
status = READ_ONCE(__early_cpu_boot_status); status = READ_ONCE(__early_cpu_boot_status);
@ -202,10 +198,7 @@ asmlinkage notrace void secondary_start_kernel(void)
u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK; u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
struct mm_struct *mm = &init_mm; struct mm_struct *mm = &init_mm;
const struct cpu_operations *ops; const struct cpu_operations *ops;
unsigned int cpu; unsigned int cpu = smp_processor_id();
cpu = task_cpu(current);
set_my_cpu_offset(per_cpu_offset(cpu));
/* /*
* All kernel threads share the same mm context; grab a * All kernel threads share the same mm context; grab a
@ -351,7 +344,7 @@ void __cpu_die(unsigned int cpu)
pr_crit("CPU%u: cpu didn't die\n", cpu); pr_crit("CPU%u: cpu didn't die\n", cpu);
return; return;
} }
pr_notice("CPU%u: shutdown\n", cpu); pr_debug("CPU%u: shutdown\n", cpu);
/* /*
* Now that the dying CPU is beyond the point of no return w.r.t. * Now that the dying CPU is beyond the point of no return w.r.t.
@ -451,6 +444,11 @@ void __init smp_cpus_done(unsigned int max_cpus)
void __init smp_prepare_boot_cpu(void) void __init smp_prepare_boot_cpu(void)
{ {
/*
* The runtime per-cpu areas have been allocated by
* setup_per_cpu_areas(), and CPU0's boot time per-cpu area will be
* freed shortly, so we must move over to the runtime per-cpu area.
*/
set_my_cpu_offset(per_cpu_offset(smp_processor_id())); set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
cpuinfo_store_boot_cpu(); cpuinfo_store_boot_cpu();


@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
unsigned long size = sizeof(secondary_holding_pen_release); unsigned long size = sizeof(secondary_holding_pen_release);
secondary_holding_pen_release = val; secondary_holding_pen_release = val;
__flush_dcache_area(start, size); dcache_clean_inval_poc((unsigned long)start, (unsigned long)start + size);
} }
@ -90,8 +90,9 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
* the boot protocol. * the boot protocol.
*/ */
writeq_relaxed(pa_holding_pen, release_addr); writeq_relaxed(pa_holding_pen, release_addr);
__flush_dcache_area((__force void *)release_addr, dcache_clean_inval_poc((__force unsigned long)release_addr,
sizeof(*release_addr)); (__force unsigned long)release_addr +
sizeof(*release_addr));
/* /*
* Send an event to wake up the secondary CPU. * Send an event to wake up the secondary CPU.


@ -68,13 +68,17 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
unsigned long fp = frame->fp; unsigned long fp = frame->fp;
struct stack_info info; struct stack_info info;
if (fp & 0xf)
return -EINVAL;
if (!tsk) if (!tsk)
tsk = current; tsk = current;
if (!on_accessible_stack(tsk, fp, &info)) /* Final frame; nothing to unwind */
if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
return -ENOENT;
if (fp & 0x7)
return -EINVAL;
if (!on_accessible_stack(tsk, fp, 16, &info))
return -EINVAL; return -EINVAL;
if (test_bit(info.type, frame->stacks_done)) if (test_bit(info.type, frame->stacks_done))
@ -128,12 +132,6 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
frame->pc = ptrauth_strip_insn_pac(frame->pc); frame->pc = ptrauth_strip_insn_pac(frame->pc);
/*
* This is a terminal record, so we have finished unwinding.
*/
if (!frame->fp && !frame->pc)
return -ENOENT;
return 0; return 0;
} }
NOKPROBE_SYMBOL(unwind_frame); NOKPROBE_SYMBOL(unwind_frame);


@ -7,6 +7,7 @@
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/cpuidle.h>
#include <asm/daifflags.h> #include <asm/daifflags.h>
#include <asm/debug-monitors.h> #include <asm/debug-monitors.h>
#include <asm/exec.h> #include <asm/exec.h>
@ -91,6 +92,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
int ret = 0; int ret = 0;
unsigned long flags; unsigned long flags;
struct sleep_stack_data state; struct sleep_stack_data state;
struct arm_cpuidle_irq_context context;
/* Report any MTE async fault before going to suspend */ /* Report any MTE async fault before going to suspend */
mte_suspend_enter(); mte_suspend_enter();
@ -103,12 +105,18 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
flags = local_daif_save(); flags = local_daif_save();
/* /*
* Function graph tracer state gets incosistent when the kernel * Function graph tracer state gets inconsistent when the kernel
* calls functions that never return (aka suspend finishers) hence * calls functions that never return (aka suspend finishers) hence
* disable graph tracing during their execution. * disable graph tracing during their execution.
*/ */
pause_graph_tracing(); pause_graph_tracing();
/*
* Switch to using DAIF.IF instead of PMR in order to reliably
* resume if we're using pseudo-NMIs.
*/
arm_cpuidle_save_irq_context(&context);
if (__cpu_suspend_enter(&state)) { if (__cpu_suspend_enter(&state)) {
/* Call the suspend finisher */ /* Call the suspend finisher */
ret = fn(arg); ret = fn(arg);
@ -126,6 +134,8 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
RCU_NONIDLE(__cpu_suspend_exit()); RCU_NONIDLE(__cpu_suspend_exit());
} }
arm_cpuidle_restore_irq_context(&context);
unpause_graph_tracing(); unpause_graph_tracing();
/* /*


@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
dsb(ish); dsb(ish);
} }
ret = __flush_cache_user_range(start, start + chunk); ret = caches_clean_inval_user_pou(start, start + chunk);
if (ret) if (ret)
return ret; return ret;


@ -38,6 +38,7 @@
#include <asm/extable.h> #include <asm/extable.h>
#include <asm/insn.h> #include <asm/insn.h>
#include <asm/kprobes.h> #include <asm/kprobes.h>
#include <asm/patching.h>
#include <asm/traps.h> #include <asm/traps.h>
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/stack_pointer.h> #include <asm/stack_pointer.h>
@@ -45,11 +46,102 @@
#include <asm/system_misc.h>
#include <asm/sysreg.h>

-static const char *handler[] = {
-	"Synchronous Abort",
-	"IRQ",
-	"FIQ",
-	"Error"
+static bool __kprobes __check_eq(unsigned long pstate)
+{
+	return (pstate & PSR_Z_BIT) != 0;
+}
+
+static bool __kprobes __check_ne(unsigned long pstate)
+{
+	return (pstate & PSR_Z_BIT) == 0;
+}
+
+static bool __kprobes __check_cs(unsigned long pstate)
+{
+	return (pstate & PSR_C_BIT) != 0;
+}
+
+static bool __kprobes __check_cc(unsigned long pstate)
+{
+	return (pstate & PSR_C_BIT) == 0;
+}
+
+static bool __kprobes __check_mi(unsigned long pstate)
+{
+	return (pstate & PSR_N_BIT) != 0;
+}
+
+static bool __kprobes __check_pl(unsigned long pstate)
+{
+	return (pstate & PSR_N_BIT) == 0;
+}
+
+static bool __kprobes __check_vs(unsigned long pstate)
+{
+	return (pstate & PSR_V_BIT) != 0;
+}
+
+static bool __kprobes __check_vc(unsigned long pstate)
+{
+	return (pstate & PSR_V_BIT) == 0;
+}
+
+static bool __kprobes __check_hi(unsigned long pstate)
+{
+	pstate &= ~(pstate >> 1);	/* PSR_C_BIT &= ~PSR_Z_BIT */
+	return (pstate & PSR_C_BIT) != 0;
+}
+
+static bool __kprobes __check_ls(unsigned long pstate)
+{
+	pstate &= ~(pstate >> 1);	/* PSR_C_BIT &= ~PSR_Z_BIT */
+	return (pstate & PSR_C_BIT) == 0;
+}
+
+static bool __kprobes __check_ge(unsigned long pstate)
+{
+	pstate ^= (pstate << 3);	/* PSR_N_BIT ^= PSR_V_BIT */
+	return (pstate & PSR_N_BIT) == 0;
+}
+
+static bool __kprobes __check_lt(unsigned long pstate)
+{
+	pstate ^= (pstate << 3);	/* PSR_N_BIT ^= PSR_V_BIT */
+	return (pstate & PSR_N_BIT) != 0;
+}
+
+static bool __kprobes __check_gt(unsigned long pstate)
+{
+	/*PSR_N_BIT ^= PSR_V_BIT */
+	unsigned long temp = pstate ^ (pstate << 3);
+
+	temp |= (pstate << 1);	/*PSR_N_BIT |= PSR_Z_BIT */
+	return (temp & PSR_N_BIT) == 0;
+}
+
+static bool __kprobes __check_le(unsigned long pstate)
+{
+	/*PSR_N_BIT ^= PSR_V_BIT */
+	unsigned long temp = pstate ^ (pstate << 3);
+
+	temp |= (pstate << 1);	/*PSR_N_BIT |= PSR_Z_BIT */
+	return (temp & PSR_N_BIT) != 0;
+}
+
+static bool __kprobes __check_al(unsigned long pstate)
+{
+	return true;
+}
+
+/*
+ * Note that the ARMv8 ARM calls condition code 0b1111 "nv", but states that
+ * it behaves identically to 0b1110 ("al").
+ */
+pstate_check_t * const aarch32_opcode_cond_checks[16] = {
+	__check_eq, __check_ne, __check_cs, __check_cc,
+	__check_mi, __check_pl, __check_vs, __check_vc,
+	__check_hi, __check_ls, __check_ge, __check_lt,
+	__check_gt, __check_le, __check_al, __check_al
};

int show_unhandled_signals = 0;
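The table just added is indexed directly by the 4-bit AArch32 condition field. As a rough usage sketch (the wrapper below is illustrative, not part of this series), a caller evaluating a conditional AArch32 instruction against the saved PSTATE would do something like:

	/* Illustrative only: cond is bits [31:28] of most AArch32 encodings. */
	static bool aarch32_cond_passed(u32 insn, unsigned long pstate)
	{
		u32 cond = insn >> 28;		/* 0x0 - 0xf */

		return aarch32_opcode_cond_checks[cond](pstate);
	}
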
@@ -750,28 +842,9 @@ const char *esr_get_class_string(u32 esr)
	return esr_class_str[ESR_ELx_EC(esr)];
}

-/*
- * bad_mode handles the impossible case in the exception vector. This is always
- * fatal.
- */
-asmlinkage void notrace bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
-{
-	arm64_enter_nmi(regs);
-	console_verbose();
-	pr_crit("Bad mode in %s handler detected on CPU%d, code 0x%08x -- %s\n",
-		handler[reason], smp_processor_id(), esr,
-		esr_get_class_string(esr));
-	__show_regs(regs);
-	local_daif_mask();
-	panic("bad mode");
-}
-
/*
 * bad_el0_sync handles unexpected, but potentially recoverable synchronous
- * exceptions taken from EL0. Unlike bad_mode, this returns.
+ * exceptions taken from EL0.
 */
void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr)
{
@@ -789,15 +862,11 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr)
DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
	__aligned(16);
-asmlinkage void noinstr handle_bad_stack(struct pt_regs *regs)
+void panic_bad_stack(struct pt_regs *regs, unsigned int esr, unsigned long far)
{
	unsigned long tsk_stk = (unsigned long)current->stack;
	unsigned long irq_stk = (unsigned long)this_cpu_read(irq_stack_ptr);
	unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
-	unsigned int esr = read_sysreg(esr_el1);
-	unsigned long far = read_sysreg(far_el1);
-	arm64_enter_nmi(regs);
	console_verbose();
	pr_emerg("Insufficient stack space to handle exception!");
@@ -870,15 +939,11 @@ bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned int esr)
	}
}
-asmlinkage void noinstr do_serror(struct pt_regs *regs, unsigned int esr)
+void do_serror(struct pt_regs *regs, unsigned int esr)
{
-	arm64_enter_nmi(regs);
	/* non-RAS errors are not containable */
	if (!arm64_is_ras_serror(esr) || arm64_is_fatal_ras_serror(regs, esr))
		arm64_serror_panic(regs, esr);
-	arm64_exit_nmi(regs);
}

/* GENERIC_BUG traps */


@@ -692,6 +692,15 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
	}
}

+static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
+{
+	if (likely(!vcpu_mode_is_32bit(vcpu)))
+		return false;
+
+	return !system_supports_32bit_el0() ||
+		static_branch_unlikely(&arm64_mismatched_32bit_el0);
+}
+
/**
 * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
 * @vcpu:	The VCPU pointer
@@ -877,7 +886,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
		 * with the asymmetric AArch32 case), return to userspace with
		 * a fatal error.
		 */
-		if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+		if (vcpu_mode_is_bad_32bit(vcpu)) {
			/*
			 * As we have caught the guest red-handed, decide that
			 * it isn't fit for purpose anymore by making the vcpu
@@ -1078,7 +1087,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
		if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
			stage2_unmap_vm(vcpu->kvm);
		else
-			__flush_icache_all();
+			icache_inval_all_pou();
	}

	vcpu_reset_hcr(vcpu);
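The new vcpu_mode_is_bad_32bit() helper relies on a static key, so fully symmetric systems pay nothing for the extra check. A minimal sketch of that pattern (the key exists in the arm64 cpufeature code; the enable helper below is hypothetical):

	#include <linux/jump_label.h>

	DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);

	/* Hypothetical helper: flipped once if a CPU without 32-bit EL0
	 * support appears on an otherwise 32-bit-capable system; until
	 * then static_branch_unlikely() above compiles down to a NOP. */
	static void note_mismatched_32bit_el0(void)
	{
		static_branch_enable(&arm64_mismatched_32bit_el0);
	}
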


@@ -7,7 +7,7 @@
#include <asm/assembler.h>
#include <asm/alternative.h>
-SYM_FUNC_START_PI(__flush_dcache_area)
+SYM_FUNC_START_PI(dcache_clean_inval_poc)
	dcache_by_line_op civac, sy, x0, x1, x2, x3
	ret
-SYM_FUNC_END_PI(__flush_dcache_area)
+SYM_FUNC_END_PI(dcache_clean_inval_poc)


@@ -134,7 +134,8 @@ static void update_nvhe_init_params(void)
	for (i = 0; i < hyp_nr_cpus; i++) {
		params = per_cpu_ptr(&kvm_init_params, i);
		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
-		__flush_dcache_area(params, sizeof(*params));
+		dcache_clean_inval_poc((unsigned long)params,
+				       (unsigned long)params + sizeof(*params));
	}
}


@@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
	 * you should be running with VHE enabled.
	 */
	if (icache_is_vpipt())
-		__flush_icache_all();
+		icache_inval_all_pou();

	__tlb_switch_to_host(&cxt);
}


@@ -839,8 +839,11 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
	stage2_put_pte(ptep, mmu, addr, level, mm_ops);

	if (need_flush) {
-		__flush_dcache_area(kvm_pte_follow(pte, mm_ops),
-				    kvm_granule_size(level));
+		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
+
+		dcache_clean_inval_poc((unsigned long)pte_follow,
+				       (unsigned long)pte_follow +
+					       kvm_granule_size(level));
	}

	if (childp)
@@ -988,11 +991,15 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
	struct kvm_pgtable *pgt = arg;
	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
	kvm_pte_t pte = *ptep;
+	kvm_pte_t *pte_follow;

	if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte))
		return 0;

-	__flush_dcache_area(kvm_pte_follow(pte, mm_ops), kvm_granule_size(level));
+	pte_follow = kvm_pte_follow(pte, mm_ops);
+	dcache_clean_inval_poc((unsigned long)pte_follow,
+			       (unsigned long)pte_follow +
+				       kvm_granule_size(level));

	return 0;
}
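These hunks also show the new cache-maintenance calling convention: the renamed helpers take an explicit [start, end) virtual address range instead of a base pointer and size. A minimal sketch of the conversion pattern (the wrapper name is illustrative):

	/* Illustrative wrapper: old callers passed (addr, size); the renamed
	 * helpers take the half-open virtual address range [start, end). */
	static inline void clean_inval_to_poc(void *addr, size_t size)
	{
		unsigned long start = (unsigned long)addr;

		dcache_clean_inval_poc(start, start + size);
	}
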


@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
lib-y	:= clear_user.o delay.o copy_from_user.o	\
	   copy_to_user.o copy_in_user.o copy_page.o	\
-	   clear_page.o csum.o memchr.o memcpy.o memmove.o \
+	   clear_page.o csum.o insn.o memchr.o memcpy.o \
	   memset.o memcmp.o strcmp.o strncmp.o strlen.o \
	   strnlen.o strchr.o strrchr.o tishift.o

@@ -18,3 +18,5 @@ obj-$(CONFIG_CRC32) += crc32.o
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
obj-$(CONFIG_ARM64_MTE) += mte.o
+
+obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o


@@ -1,12 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Based on arch/arm/lib/clear_user.S
- *
- * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2021 Arm Ltd.
 */
-#include <linux/linkage.h>
-#include <asm/asm-uaccess.h>
+#include <linux/linkage.h>
#include <asm/assembler.h>

	.text
@@ -19,25 +16,33 @@
 *
 * Alignment fixed up by hardware.
 */
+	.p2align 4
+	// Alignment is for the loop, but since the prologue (including BTI)
+	// is also 16 bytes we can keep any padding outside the function
SYM_FUNC_START(__arch_clear_user)
-	mov	x2, x1			// save the size for fixup return
+	add	x2, x0, x1
	subs	x1, x1, #8
	b.mi	2f
1:
-user_ldst 9f, sttr, xzr, x0, 8
-	subs	x1, x1, #8
-	b.pl	1b
-2:	adds	x1, x1, #4
-	b.mi	3f
-user_ldst 9f, sttr, wzr, x0, 4
-	sub	x1, x1, #4
-3:	adds	x1, x1, #2
-	b.mi	4f
-user_ldst 9f, sttrh, wzr, x0, 2
-	sub	x1, x1, #2
-4:	adds	x1, x1, #1
-	b.mi	5f
-user_ldst 9f, sttrb, wzr, x0, 0
+USER(9f, sttr	xzr, [x0])
+	add	x0, x0, #8
+	subs	x1, x1, #8
+	b.hi	1b
+USER(9f, sttr	xzr, [x2, #-8])
+	mov	x0, #0
+	ret
+
+2:	tbz	x1, #2, 3f
+USER(9f, sttr	wzr, [x0])
+USER(8f, sttr	wzr, [x2, #-4])
+	mov	x0, #0
+	ret
+
+3:	tbz	x1, #1, 4f
+USER(9f, sttrh	wzr, [x0])
+4:	tbz	x1, #0, 5f
+USER(7f, sttrb	wzr, [x2, #-1])
5:	mov	x0, #0
	ret
SYM_FUNC_END(__arch_clear_user)
@@ -45,6 +50,8 @@ EXPORT_SYMBOL(__arch_clear_user)
	.section .fixup,"ax"
	.align	2
-9:	mov	x0, x2			// return the original size
+7:	sub	x0, x2, #5	// Adjust for faulting on the final byte...
+8:	add	x0, x0, #4	// ...or the second word of the 4-7 byte case
+9:	sub	x0, x2, x0
	ret
	.previous
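The reworked fixup labels preserve the function's contract: __arch_clear_user() returns the number of bytes it could not zero, and 0 on success. A hedged C model of that contract (not of the optimised implementation above):

	/* Conceptual model only: zero 'size' bytes of user memory at 'to',
	 * reporting how many bytes remain unzeroed if a fault is taken. */
	static unsigned long clear_user_model(char __user *to, unsigned long size)
	{
		unsigned long done;

		for (done = 0; done < size; done++) {
			if (put_user(0, to + done))
				return size - done;	/* bytes left over */
		}

		return 0;
	}
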


@@ -7,21 +7,14 @@
 */
#include <linux/bitops.h>
#include <linux/bug.h>
-#include <linux/compiler.h>
-#include <linux/kernel.h>
-#include <linux/mm.h>
-#include <linux/smp.h>
-#include <linux/spinlock.h>
-#include <linux/stop_machine.h>
+#include <linux/printk.h>
+#include <linux/sizes.h>
#include <linux/types.h>
-#include <linux/uaccess.h>

-#include <asm/cacheflush.h>
#include <asm/debug-monitors.h>
-#include <asm/fixmap.h>
+#include <asm/errno.h>
#include <asm/insn.h>
#include <asm/kprobes.h>
-#include <asm/sections.h>

#define AARCH64_INSN_SF_BIT	BIT(31)
#define AARCH64_INSN_N_BIT	BIT(22)
@@ -30,7 +23,7 @@
static const int aarch64_insn_encoding_class[] = {
	AARCH64_INSN_CLS_UNKNOWN,
	AARCH64_INSN_CLS_UNKNOWN,
-	AARCH64_INSN_CLS_UNKNOWN,
+	AARCH64_INSN_CLS_SVE,
	AARCH64_INSN_CLS_UNKNOWN,
	AARCH64_INSN_CLS_LDST,
	AARCH64_INSN_CLS_DP_REG,
@@ -83,81 +76,6 @@ bool aarch64_insn_is_branch_imm(u32 insn)
		aarch64_insn_is_bcond(insn));
}
static DEFINE_RAW_SPINLOCK(patch_lock);
static bool is_exit_text(unsigned long addr)
{
/* discarded with init text/data */
return system_state < SYSTEM_RUNNING &&
addr >= (unsigned long)__exittext_begin &&
addr < (unsigned long)__exittext_end;
}
static bool is_image_text(unsigned long addr)
{
return core_kernel_text(addr) || is_exit_text(addr);
}
static void __kprobes *patch_map(void *addr, int fixmap)
{
unsigned long uintaddr = (uintptr_t) addr;
bool image = is_image_text(uintaddr);
struct page *page;
if (image)
page = phys_to_page(__pa_symbol(addr));
else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
page = vmalloc_to_page(addr);
else
return addr;
BUG_ON(!page);
return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
(uintaddr & ~PAGE_MASK));
}
static void __kprobes patch_unmap(int fixmap)
{
clear_fixmap(fixmap);
}
/*
* In ARMv8-A, A64 instructions have a fixed length of 32 bits and are always
* little-endian.
*/
int __kprobes aarch64_insn_read(void *addr, u32 *insnp)
{
int ret;
__le32 val;
ret = copy_from_kernel_nofault(&val, addr, AARCH64_INSN_SIZE);
if (!ret)
*insnp = le32_to_cpu(val);
return ret;
}
static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
{
void *waddr = addr;
unsigned long flags = 0;
int ret;
raw_spin_lock_irqsave(&patch_lock, flags);
waddr = patch_map(addr, FIX_TEXT_POKE0);
ret = copy_to_kernel_nofault(waddr, &insn, AARCH64_INSN_SIZE);
patch_unmap(FIX_TEXT_POKE0);
raw_spin_unlock_irqrestore(&patch_lock, flags);
return ret;
}
int __kprobes aarch64_insn_write(void *addr, u32 insn)
{
return __aarch64_insn_write(addr, cpu_to_le32(insn));
}
bool __kprobes aarch64_insn_uses_literal(u32 insn)
{
	/* ldr/ldrsw (literal), prfm */
@@ -187,67 +105,6 @@ bool __kprobes aarch64_insn_is_branch(u32 insn)
		aarch64_insn_is_bcond(insn);
}
int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
{
u32 *tp = addr;
int ret;
/* A64 instructions must be word aligned */
if ((uintptr_t)tp & 0x3)
return -EINVAL;
ret = aarch64_insn_write(tp, insn);
if (ret == 0)
__flush_icache_range((uintptr_t)tp,
(uintptr_t)tp + AARCH64_INSN_SIZE);
return ret;
}
struct aarch64_insn_patch {
void **text_addrs;
u32 *new_insns;
int insn_cnt;
atomic_t cpu_count;
};
static int __kprobes aarch64_insn_patch_text_cb(void *arg)
{
int i, ret = 0;
struct aarch64_insn_patch *pp = arg;
/* The first CPU becomes master */
if (atomic_inc_return(&pp->cpu_count) == 1) {
for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
pp->new_insns[i]);
/* Notify other processors with an additional increment. */
atomic_inc(&pp->cpu_count);
} else {
while (atomic_read(&pp->cpu_count) <= num_online_cpus())
cpu_relax();
isb();
}
return ret;
}
int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt)
{
struct aarch64_insn_patch patch = {
.text_addrs = addrs,
.new_insns = insns,
.insn_cnt = cnt,
.cpu_count = ATOMIC_INIT(0),
};
if (cnt <= 0)
return -EINVAL;
return stop_machine_cpuslocked(aarch64_insn_patch_text_cb, &patch,
cpu_online_mask);
}
static int __kprobes aarch64_get_imm_shift_mask(enum aarch64_insn_imm_type type,
						u32 *maskp, int *shiftp)
{
@@ -1432,104 +1289,6 @@ u32 aarch32_insn_mcr_extract_crm(u32 insn)
	return insn & CRM_MASK;
}
static bool __kprobes __check_eq(unsigned long pstate)
{
return (pstate & PSR_Z_BIT) != 0;
}
static bool __kprobes __check_ne(unsigned long pstate)
{
return (pstate & PSR_Z_BIT) == 0;
}
static bool __kprobes __check_cs(unsigned long pstate)
{
return (pstate & PSR_C_BIT) != 0;
}
static bool __kprobes __check_cc(unsigned long pstate)
{
return (pstate & PSR_C_BIT) == 0;
}
static bool __kprobes __check_mi(unsigned long pstate)
{
return (pstate & PSR_N_BIT) != 0;
}
static bool __kprobes __check_pl(unsigned long pstate)
{
return (pstate & PSR_N_BIT) == 0;
}
static bool __kprobes __check_vs(unsigned long pstate)
{
return (pstate & PSR_V_BIT) != 0;
}
static bool __kprobes __check_vc(unsigned long pstate)
{
return (pstate & PSR_V_BIT) == 0;
}
static bool __kprobes __check_hi(unsigned long pstate)
{
pstate &= ~(pstate >> 1); /* PSR_C_BIT &= ~PSR_Z_BIT */
return (pstate & PSR_C_BIT) != 0;
}
static bool __kprobes __check_ls(unsigned long pstate)
{
pstate &= ~(pstate >> 1); /* PSR_C_BIT &= ~PSR_Z_BIT */
return (pstate & PSR_C_BIT) == 0;
}
static bool __kprobes __check_ge(unsigned long pstate)
{
pstate ^= (pstate << 3); /* PSR_N_BIT ^= PSR_V_BIT */
return (pstate & PSR_N_BIT) == 0;
}
static bool __kprobes __check_lt(unsigned long pstate)
{
pstate ^= (pstate << 3); /* PSR_N_BIT ^= PSR_V_BIT */
return (pstate & PSR_N_BIT) != 0;
}
static bool __kprobes __check_gt(unsigned long pstate)
{
/*PSR_N_BIT ^= PSR_V_BIT */
unsigned long temp = pstate ^ (pstate << 3);
temp |= (pstate << 1); /*PSR_N_BIT |= PSR_Z_BIT */
return (temp & PSR_N_BIT) == 0;
}
static bool __kprobes __check_le(unsigned long pstate)
{
/*PSR_N_BIT ^= PSR_V_BIT */
unsigned long temp = pstate ^ (pstate << 3);
temp |= (pstate << 1); /*PSR_N_BIT |= PSR_Z_BIT */
return (temp & PSR_N_BIT) != 0;
}
static bool __kprobes __check_al(unsigned long pstate)
{
return true;
}
/*
* Note that the ARMv8 ARM calls condition code 0b1111 "nv", but states that
* it behaves identically to 0b1110 ("al").
*/
pstate_check_t * const aarch32_opcode_cond_checks[16] = {
__check_eq, __check_ne, __check_cs, __check_cc,
__check_mi, __check_pl, __check_vs, __check_vc,
__check_hi, __check_ls, __check_ge, __check_lt,
__check_gt, __check_le, __check_al, __check_al
};
static bool range_of_ones(u64 val)
{
	/* Doesn't handle full ones or full zeroes */


@@ -0,0 +1,76 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020 Google LLC
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
/*
* Report a tag mismatch detected by tag-based KASAN.
*
* A compiler-generated thunk calls this with a non-AAPCS calling
* convention. Upon entry to this function, registers are as follows:
*
* x0: fault address (see below for restore)
* x1: fault description (see below for restore)
* x2 to x15: callee-saved
* x16 to x17: safe to clobber
* x18 to x30: callee-saved
* sp: pre-decremented by 256 bytes (see below for restore)
*
* The caller has decremented the SP by 256 bytes, and created a
* structure on the stack as follows:
*
* sp + 0..15: x0 and x1 to be restored
* sp + 16..231: free for use
* sp + 232..247: x29 and x30 (same as in GPRs)
* sp + 248..255: free for use
*
* Note that this is not a struct pt_regs.
*
* To call a regular AAPCS function we must save x2 to x15 (which we can
* store in the gaps), and create a frame record (for which we can use
* x29 and x30 spilled by the caller as those match the GPRs).
*
* The caller expects x0 and x1 to be restored from the structure, and
* for the structure to be removed from the stack (i.e. the SP must be
* incremented by 256 prior to return).
*/
SYM_CODE_START(__hwasan_tag_mismatch)
#ifdef BTI_C
BTI_C
#endif
add x29, sp, #232
stp x2, x3, [sp, #8 * 2]
stp x4, x5, [sp, #8 * 4]
stp x6, x7, [sp, #8 * 6]
stp x8, x9, [sp, #8 * 8]
stp x10, x11, [sp, #8 * 10]
stp x12, x13, [sp, #8 * 12]
stp x14, x15, [sp, #8 * 14]
#ifndef CONFIG_SHADOW_CALL_STACK
str x18, [sp, #8 * 18]
#endif
mov x2, x30
bl kasan_tag_mismatch
ldp x0, x1, [sp]
ldp x2, x3, [sp, #8 * 2]
ldp x4, x5, [sp, #8 * 4]
ldp x6, x7, [sp, #8 * 6]
ldp x8, x9, [sp, #8 * 8]
ldp x10, x11, [sp, #8 * 10]
ldp x12, x13, [sp, #8 * 12]
ldp x14, x15, [sp, #8 * 14]
#ifndef CONFIG_SHADOW_CALL_STACK
ldr x18, [sp, #8 * 18]
#endif
ldp x29, x30, [sp, #8 * 29]
/* remove the structure from the stack */
add sp, sp, #256
ret
SYM_CODE_END(__hwasan_tag_mismatch)
EXPORT_SYMBOL(__hwasan_tag_mismatch)


@@ -1,9 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Based on arch/arm/lib/memchr.S
- *
- * Copyright (C) 1995-2000 Russell King
- * Copyright (C) 2013 ARM Ltd.
+ * Copyright (C) 2021 Arm Ltd.
 */
#include <linux/linkage.h>
@@ -19,16 +16,60 @@
 * Returns:
 *	x0 - address of first occurrence of 'c' or 0
 */
+#define L(label) .L ## label
+
+#define REP8_01 0x0101010101010101
+#define REP8_7f 0x7f7f7f7f7f7f7f7f
+
+#define srcin		x0
+#define chrin		w1
+#define cntin		x2
+#define result		x0
+
+#define wordcnt		x3
+#define rep01		x4
+#define repchr		x5
+#define cur_word	x6
+#define cur_byte	w6
+#define tmp		x7
+#define tmp2		x8
+
+	.p2align 4
+	nop
SYM_FUNC_START_WEAK_PI(memchr)
-	and	w1, w1, #0xff
-1:	subs	x2, x2, #1
-	b.mi	2f
-	ldrb	w3, [x0], #1
-	cmp	w3, w1
-	b.ne	1b
-	sub	x0, x0, #1
-	ret
-2:	mov	x0, #0
-	ret
+	and	chrin, chrin, #0xff
+	lsr	wordcnt, cntin, #3
+	cbz	wordcnt, L(byte_loop)
+	mov	rep01, #REP8_01
+	mul	repchr, x1, rep01
+	and	cntin, cntin, #7
+L(word_loop):
+	ldr	cur_word, [srcin], #8
+	sub	wordcnt, wordcnt, #1
+	eor	cur_word, cur_word, repchr
+	sub	tmp, cur_word, rep01
+	orr	tmp2, cur_word, #REP8_7f
+	bics	tmp, tmp, tmp2
+	b.ne	L(found_word)
+	cbnz	wordcnt, L(word_loop)
+L(byte_loop):
+	cbz	cntin, L(not_found)
+	ldrb	cur_byte, [srcin], #1
+	sub	cntin, cntin, #1
+	cmp	cur_byte, chrin
+	b.ne	L(byte_loop)
+	sub	srcin, srcin, #1
+	ret
+L(found_word):
+CPU_LE(	rev	tmp, tmp)
+	clz	tmp, tmp
+	sub	tmp, tmp, #64
+	add	result, srcin, tmp, asr #3
+	ret
+L(not_found):
+	mov	result, #0
+	ret
SYM_FUNC_END_PI(memchr)
EXPORT_SYMBOL_NOKASAN(memchr)
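The new implementation scans a word at a time using the classic has-zero-byte trick that the eor/sub/orr/bics sequence above implements; in C the same test looks roughly like this (an illustrative sketch, not kernel code):

	#include <stdbool.h>
	#include <stdint.h>

	#define REP8_01 0x0101010101010101ULL
	#define REP8_7f 0x7f7f7f7f7f7f7f7fULL

	/* True if any byte of 'word' equals 'c': XOR turns matching bytes
	 * into zero bytes, and (x - 0x01..) & ~(x | 0x7f..) is non-zero
	 * exactly when x contains a zero byte. */
	static bool word_has_char(uint64_t word, unsigned char c)
	{
		uint64_t x = word ^ (REP8_01 * c);

		return ((x - REP8_01) & ~(x | REP8_7f)) != 0;
	}
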


@@ -1,247 +1,139 @@
/* SPDX-License-Identifier: GPL-2.0-only */ /* SPDX-License-Identifier: GPL-2.0-only */
/* /*
* Copyright (C) 2013 ARM Ltd. * Copyright (c) 2013-2021, Arm Limited.
* Copyright (C) 2013 Linaro.
* *
* This code is based on glibc cortex strings work originally authored by Linaro * Adapted from the original at:
* be found @ * https://github.com/ARM-software/optimized-routines/blob/e823e3abf5f89ecb/string/aarch64/memcmp.S
*
* http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
* files/head:/src/aarch64/
*/ */
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/assembler.h> #include <asm/assembler.h>
/* /* Assumptions:
* compare memory areas(when two memory areas' offset are different, *
* alignment handled by the hardware) * ARMv8-a, AArch64, unaligned accesses.
* */
* Parameters:
* x0 - const memory area 1 pointer #define L(label) .L ## label
* x1 - const memory area 2 pointer
* x2 - the maximal compare byte length
* Returns:
* x0 - a compare result, maybe less than, equal to, or greater than ZERO
*/
/* Parameters and result. */ /* Parameters and result. */
src1 .req x0 #define src1 x0
src2 .req x1 #define src2 x1
limit .req x2 #define limit x2
result .req x0 #define result w0
/* Internal variables. */ /* Internal variables. */
data1 .req x3 #define data1 x3
data1w .req w3 #define data1w w3
data2 .req x4 #define data1h x4
data2w .req w4 #define data2 x5
has_nul .req x5 #define data2w w5
diff .req x6 #define data2h x6
endloop .req x7 #define tmp1 x7
tmp1 .req x8 #define tmp2 x8
tmp2 .req x9
tmp3 .req x10
pos .req x11
limit_wd .req x12
mask .req x13
SYM_FUNC_START_WEAK_PI(memcmp) SYM_FUNC_START_WEAK_PI(memcmp)
cbz limit, .Lret0 subs limit, limit, 8
eor tmp1, src1, src2 b.lo L(less8)
tst tmp1, #7
b.ne .Lmisaligned8
ands tmp1, src1, #7
b.ne .Lmutual_align
sub limit_wd, limit, #1 /* limit != 0, so no underflow. */
lsr limit_wd, limit_wd, #3 /* Convert to Dwords. */
/*
* The input source addresses are at alignment boundary.
* Directly compare eight bytes each time.
*/
.Lloop_aligned:
ldr data1, [src1], #8
ldr data2, [src2], #8
.Lstart_realigned:
subs limit_wd, limit_wd, #1
eor diff, data1, data2 /* Non-zero if differences found. */
csinv endloop, diff, xzr, cs /* Last Dword or differences. */
cbz endloop, .Lloop_aligned
/* Not reached the limit, must have found a diff. */ ldr data1, [src1], 8
tbz limit_wd, #63, .Lnot_limit ldr data2, [src2], 8
cmp data1, data2
b.ne L(return)
/* Limit % 8 == 0 => the diff is in the last 8 bytes. */ subs limit, limit, 8
ands limit, limit, #7 b.gt L(more16)
b.eq .Lnot_limit
/*
* The remained bytes less than 8. It is needed to extract valid data
* from last eight bytes of the intended memory range.
*/
lsl limit, limit, #3 /* bytes-> bits. */
mov mask, #~0
CPU_BE( lsr mask, mask, limit )
CPU_LE( lsl mask, mask, limit )
bic data1, data1, mask
bic data2, data2, mask
orr diff, diff, mask ldr data1, [src1, limit]
b .Lnot_limit ldr data2, [src2, limit]
b L(return)
.Lmutual_align: L(more16):
/* ldr data1, [src1], 8
* Sources are mutually aligned, but are not currently at an ldr data2, [src2], 8
* alignment boundary. Round down the addresses and then mask off cmp data1, data2
* the bytes that precede the start point. bne L(return)
*/
bic src1, src1, #7
bic src2, src2, #7
ldr data1, [src1], #8
ldr data2, [src2], #8
/*
* We can not add limit with alignment offset(tmp1) here. Since the
* addition probably make the limit overflown.
*/
sub limit_wd, limit, #1/*limit != 0, so no underflow.*/
and tmp3, limit_wd, #7
lsr limit_wd, limit_wd, #3
add tmp3, tmp3, tmp1
add limit_wd, limit_wd, tmp3, lsr #3
add limit, limit, tmp1/* Adjust the limit for the extra. */
lsl tmp1, tmp1, #3/* Bytes beyond alignment -> bits.*/ /* Jump directly to comparing the last 16 bytes for 32 byte (or less)
neg tmp1, tmp1/* Bits to alignment -64. */ strings. */
mov tmp2, #~0 subs limit, limit, 16
/*mask off the non-intended bytes before the start address.*/ b.ls L(last_bytes)
CPU_BE( lsl tmp2, tmp2, tmp1 )/*Big-endian.Early bytes are at MSB*/
/* Little-endian. Early bytes are at LSB. */
CPU_LE( lsr tmp2, tmp2, tmp1 )
orr data1, data1, tmp2 /* We overlap loads between 0-32 bytes at either side of SRC1 when we
orr data2, data2, tmp2 try to align, so limit it only to strings larger than 128 bytes. */
b .Lstart_realigned cmp limit, 96
b.ls L(loop16)
/*src1 and src2 have different alignment offset.*/ /* Align src1 and adjust src2 with bytes not yet done. */
.Lmisaligned8: and tmp1, src1, 15
cmp limit, #8 add limit, limit, tmp1
b.lo .Ltiny8proc /*limit < 8: compare byte by byte*/ sub src1, src1, tmp1
sub src2, src2, tmp1
and tmp1, src1, #7 /* Loop performing 16 bytes per iteration using aligned src1.
neg tmp1, tmp1 Limit is pre-decremented by 16 and must be larger than zero.
add tmp1, tmp1, #8/*valid length in the first 8 bytes of src1*/ Exit if <= 16 bytes left to do or if the data is not equal. */
and tmp2, src2, #7 .p2align 4
neg tmp2, tmp2 L(loop16):
add tmp2, tmp2, #8/*valid length in the first 8 bytes of src2*/ ldp data1, data1h, [src1], 16
subs tmp3, tmp1, tmp2 ldp data2, data2h, [src2], 16
csel pos, tmp1, tmp2, hi /*Choose the maximum.*/ subs limit, limit, 16
ccmp data1, data2, 0, hi
ccmp data1h, data2h, 0, eq
b.eq L(loop16)
sub limit, limit, pos cmp data1, data2
/*compare the proceeding bytes in the first 8 byte segment.*/ bne L(return)
.Ltinycmp: mov data1, data1h
ldrb data1w, [src1], #1 mov data2, data2h
ldrb data2w, [src2], #1 cmp data1, data2
subs pos, pos, #1 bne L(return)
ccmp data1w, data2w, #0, ne /* NZCV = 0b0000. */
b.eq .Ltinycmp /* Compare last 1-16 bytes using unaligned access. */
cbnz pos, 1f /*diff occurred before the last byte.*/ L(last_bytes):
add src1, src1, limit
add src2, src2, limit
ldp data1, data1h, [src1]
ldp data2, data2h, [src2]
cmp data1, data2
bne L(return)
mov data1, data1h
mov data2, data2h
cmp data1, data2
/* Compare data bytes and set return value to 0, -1 or 1. */
L(return):
#ifndef __AARCH64EB__
rev data1, data1
rev data2, data2
#endif
cmp data1, data2
L(ret_eq):
cset result, ne
cneg result, result, lo
ret
.p2align 4
/* Compare up to 8 bytes. Limit is [-8..-1]. */
L(less8):
adds limit, limit, 4
b.lo L(less4)
ldr data1w, [src1], 4
ldr data2w, [src2], 4
cmp data1w, data2w cmp data1w, data2w
b.eq .Lstart_align b.ne L(return)
1: sub limit, limit, 4
sub result, data1, data2 L(less4):
adds limit, limit, 4
beq L(ret_eq)
L(byte_loop):
ldrb data1w, [src1], 1
ldrb data2w, [src2], 1
subs limit, limit, 1
ccmp data1w, data2w, 0, ne /* NZCV = 0b0000. */
b.eq L(byte_loop)
sub result, data1w, data2w
ret ret
.Lstart_align:
lsr limit_wd, limit, #3
cbz limit_wd, .Lremain8
ands xzr, src1, #7
b.eq .Lrecal_offset
/*process more leading bytes to make src1 aligned...*/
add src1, src1, tmp3 /*backwards src1 to alignment boundary*/
add src2, src2, tmp3
sub limit, limit, tmp3
lsr limit_wd, limit, #3
cbz limit_wd, .Lremain8
/*load 8 bytes from aligned SRC1..*/
ldr data1, [src1], #8
ldr data2, [src2], #8
subs limit_wd, limit_wd, #1
eor diff, data1, data2 /*Non-zero if differences found.*/
csinv endloop, diff, xzr, ne
cbnz endloop, .Lunequal_proc
/*How far is the current SRC2 from the alignment boundary...*/
and tmp3, tmp3, #7
.Lrecal_offset:/*src1 is aligned now..*/
neg pos, tmp3
.Lloopcmp_proc:
/*
* Divide the eight bytes into two parts. First,backwards the src2
* to an alignment boundary,load eight bytes and compare from
* the SRC2 alignment boundary. If all 8 bytes are equal,then start
* the second part's comparison. Otherwise finish the comparison.
* This special handle can garantee all the accesses are in the
* thread/task space in avoid to overrange access.
*/
ldr data1, [src1,pos]
ldr data2, [src2,pos]
eor diff, data1, data2 /* Non-zero if differences found. */
cbnz diff, .Lnot_limit
/*The second part process*/
ldr data1, [src1], #8
ldr data2, [src2], #8
eor diff, data1, data2 /* Non-zero if differences found. */
subs limit_wd, limit_wd, #1
csinv endloop, diff, xzr, ne/*if limit_wd is 0,will finish the cmp*/
cbz endloop, .Lloopcmp_proc
.Lunequal_proc:
cbz diff, .Lremain8
/* There is difference occurred in the latest comparison. */
.Lnot_limit:
/*
* For little endian,reverse the low significant equal bits into MSB,then
* following CLZ can find how many equal bits exist.
*/
CPU_LE( rev diff, diff )
CPU_LE( rev data1, data1 )
CPU_LE( rev data2, data2 )
/*
* The MS-non-zero bit of DIFF marks either the first bit
* that is different, or the end of the significant data.
* Shifting left now will bring the critical information into the
* top bits.
*/
clz pos, diff
lsl data1, data1, pos
lsl data2, data2, pos
/*
* We need to zero-extend (char is unsigned) the value and then
* perform a signed subtraction.
*/
lsr data1, data1, #56
sub result, data1, data2, lsr #56
ret
.Lremain8:
/* Limit % 8 == 0 =>. all data are equal.*/
ands limit, limit, #7
b.eq .Lret0
.Ltiny8proc:
ldrb data1w, [src1], #1
ldrb data2w, [src2], #1
subs limit, limit, #1
ccmp data1w, data2w, #0, ne /* NZCV = 0b0000. */
b.eq .Ltiny8proc
sub result, data1, data2
ret
.Lret0:
mov result, #0
ret
SYM_FUNC_END_PI(memcmp) SYM_FUNC_END_PI(memcmp)
EXPORT_SYMBOL_NOKASAN(memcmp) EXPORT_SYMBOL_NOKASAN(memcmp)


@@ -1,66 +1,252 @@
/* SPDX-License-Identifier: GPL-2.0-only */ /* SPDX-License-Identifier: GPL-2.0-only */
/* /*
* Copyright (C) 2013 ARM Ltd. * Copyright (c) 2012-2021, Arm Limited.
* Copyright (C) 2013 Linaro.
* *
* This code is based on glibc cortex strings work originally authored by Linaro * Adapted from the original at:
* be found @ * https://github.com/ARM-software/optimized-routines/blob/afd6244a1f8d9229/string/aarch64/memcpy.S
*
* http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
* files/head:/src/aarch64/
*/ */
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#include <asm/cache.h>
/* /* Assumptions:
* Copy a buffer from src to dest (alignment handled by the hardware) *
* ARMv8-a, AArch64, unaligned accesses.
* *
* Parameters:
* x0 - dest
* x1 - src
* x2 - n
* Returns:
* x0 - dest
*/ */
.macro ldrb1 reg, ptr, val
ldrb \reg, [\ptr], \val
.endm
.macro strb1 reg, ptr, val #define L(label) .L ## label
strb \reg, [\ptr], \val
.endm
.macro ldrh1 reg, ptr, val #define dstin x0
ldrh \reg, [\ptr], \val #define src x1
.endm #define count x2
#define dst x3
#define srcend x4
#define dstend x5
#define A_l x6
#define A_lw w6
#define A_h x7
#define B_l x8
#define B_lw w8
#define B_h x9
#define C_l x10
#define C_lw w10
#define C_h x11
#define D_l x12
#define D_h x13
#define E_l x14
#define E_h x15
#define F_l x16
#define F_h x17
#define G_l count
#define G_h dst
#define H_l src
#define H_h srcend
#define tmp1 x14
.macro strh1 reg, ptr, val /* This implementation handles overlaps and supports both memcpy and memmove
strh \reg, [\ptr], \val from a single entry point. It uses unaligned accesses and branchless
.endm sequences to keep the code small, simple and improve performance.
.macro ldr1 reg, ptr, val Copies are split into 3 main cases: small copies of up to 32 bytes, medium
ldr \reg, [\ptr], \val copies of up to 128 bytes, and large copies. The overhead of the overlap
.endm check is negligible since it is only required for large copies.
.macro str1 reg, ptr, val Large copies use a software pipelined loop processing 64 bytes per iteration.
str \reg, [\ptr], \val The destination pointer is 16-byte aligned to minimize unaligned accesses.
.endm The loop tail is handled by always copying 64 bytes from the end.
*/
.macro ldp1 reg1, reg2, ptr, val
ldp \reg1, \reg2, [\ptr], \val
.endm
.macro stp1 reg1, reg2, ptr, val
stp \reg1, \reg2, [\ptr], \val
.endm
SYM_FUNC_START_ALIAS(__memmove)
SYM_FUNC_START_WEAK_ALIAS_PI(memmove)
SYM_FUNC_START_ALIAS(__memcpy) SYM_FUNC_START_ALIAS(__memcpy)
SYM_FUNC_START_WEAK_PI(memcpy) SYM_FUNC_START_WEAK_PI(memcpy)
#include "copy_template.S" add srcend, src, count
add dstend, dstin, count
cmp count, 128
b.hi L(copy_long)
cmp count, 32
b.hi L(copy32_128)
/* Small copies: 0..32 bytes. */
cmp count, 16
b.lo L(copy16)
ldp A_l, A_h, [src]
ldp D_l, D_h, [srcend, -16]
stp A_l, A_h, [dstin]
stp D_l, D_h, [dstend, -16]
ret ret
/* Copy 8-15 bytes. */
L(copy16):
tbz count, 3, L(copy8)
ldr A_l, [src]
ldr A_h, [srcend, -8]
str A_l, [dstin]
str A_h, [dstend, -8]
ret
.p2align 3
/* Copy 4-7 bytes. */
L(copy8):
tbz count, 2, L(copy4)
ldr A_lw, [src]
ldr B_lw, [srcend, -4]
str A_lw, [dstin]
str B_lw, [dstend, -4]
ret
/* Copy 0..3 bytes using a branchless sequence. */
L(copy4):
cbz count, L(copy0)
lsr tmp1, count, 1
ldrb A_lw, [src]
ldrb C_lw, [srcend, -1]
ldrb B_lw, [src, tmp1]
strb A_lw, [dstin]
strb B_lw, [dstin, tmp1]
strb C_lw, [dstend, -1]
L(copy0):
ret
.p2align 4
/* Medium copies: 33..128 bytes. */
L(copy32_128):
ldp A_l, A_h, [src]
ldp B_l, B_h, [src, 16]
ldp C_l, C_h, [srcend, -32]
ldp D_l, D_h, [srcend, -16]
cmp count, 64
b.hi L(copy128)
stp A_l, A_h, [dstin]
stp B_l, B_h, [dstin, 16]
stp C_l, C_h, [dstend, -32]
stp D_l, D_h, [dstend, -16]
ret
.p2align 4
/* Copy 65..128 bytes. */
L(copy128):
ldp E_l, E_h, [src, 32]
ldp F_l, F_h, [src, 48]
cmp count, 96
b.ls L(copy96)
ldp G_l, G_h, [srcend, -64]
ldp H_l, H_h, [srcend, -48]
stp G_l, G_h, [dstend, -64]
stp H_l, H_h, [dstend, -48]
L(copy96):
stp A_l, A_h, [dstin]
stp B_l, B_h, [dstin, 16]
stp E_l, E_h, [dstin, 32]
stp F_l, F_h, [dstin, 48]
stp C_l, C_h, [dstend, -32]
stp D_l, D_h, [dstend, -16]
ret
.p2align 4
/* Copy more than 128 bytes. */
L(copy_long):
/* Use backwards copy if there is an overlap. */
sub tmp1, dstin, src
cbz tmp1, L(copy0)
cmp tmp1, count
b.lo L(copy_long_backwards)
/* Copy 16 bytes and then align dst to 16-byte alignment. */
ldp D_l, D_h, [src]
and tmp1, dstin, 15
bic dst, dstin, 15
sub src, src, tmp1
add count, count, tmp1 /* Count is now 16 too large. */
ldp A_l, A_h, [src, 16]
stp D_l, D_h, [dstin]
ldp B_l, B_h, [src, 32]
ldp C_l, C_h, [src, 48]
ldp D_l, D_h, [src, 64]!
subs count, count, 128 + 16 /* Test and readjust count. */
b.ls L(copy64_from_end)
L(loop64):
stp A_l, A_h, [dst, 16]
ldp A_l, A_h, [src, 16]
stp B_l, B_h, [dst, 32]
ldp B_l, B_h, [src, 32]
stp C_l, C_h, [dst, 48]
ldp C_l, C_h, [src, 48]
stp D_l, D_h, [dst, 64]!
ldp D_l, D_h, [src, 64]!
subs count, count, 64
b.hi L(loop64)
/* Write the last iteration and copy 64 bytes from the end. */
L(copy64_from_end):
ldp E_l, E_h, [srcend, -64]
stp A_l, A_h, [dst, 16]
ldp A_l, A_h, [srcend, -48]
stp B_l, B_h, [dst, 32]
ldp B_l, B_h, [srcend, -32]
stp C_l, C_h, [dst, 48]
ldp C_l, C_h, [srcend, -16]
stp D_l, D_h, [dst, 64]
stp E_l, E_h, [dstend, -64]
stp A_l, A_h, [dstend, -48]
stp B_l, B_h, [dstend, -32]
stp C_l, C_h, [dstend, -16]
ret
.p2align 4
/* Large backwards copy for overlapping copies.
Copy 16 bytes and then align dst to 16-byte alignment. */
L(copy_long_backwards):
ldp D_l, D_h, [srcend, -16]
and tmp1, dstend, 15
sub srcend, srcend, tmp1
sub count, count, tmp1
ldp A_l, A_h, [srcend, -16]
stp D_l, D_h, [dstend, -16]
ldp B_l, B_h, [srcend, -32]
ldp C_l, C_h, [srcend, -48]
ldp D_l, D_h, [srcend, -64]!
sub dstend, dstend, tmp1
subs count, count, 128
b.ls L(copy64_from_start)
L(loop64_backwards):
stp A_l, A_h, [dstend, -16]
ldp A_l, A_h, [srcend, -16]
stp B_l, B_h, [dstend, -32]
ldp B_l, B_h, [srcend, -32]
stp C_l, C_h, [dstend, -48]
ldp C_l, C_h, [srcend, -48]
stp D_l, D_h, [dstend, -64]!
ldp D_l, D_h, [srcend, -64]!
subs count, count, 64
b.hi L(loop64_backwards)
/* Write the last iteration and copy 64 bytes from the start. */
L(copy64_from_start):
ldp G_l, G_h, [src, 48]
stp A_l, A_h, [dstend, -16]
ldp A_l, A_h, [src, 32]
stp B_l, B_h, [dstend, -32]
ldp B_l, B_h, [src, 16]
stp C_l, C_h, [dstend, -48]
ldp C_l, C_h, [src]
stp D_l, D_h, [dstend, -64]
stp G_l, G_h, [dstin, 48]
stp A_l, A_h, [dstin, 32]
stp B_l, B_h, [dstin, 16]
stp C_l, C_h, [dstin]
ret
SYM_FUNC_END_PI(memcpy) SYM_FUNC_END_PI(memcpy)
EXPORT_SYMBOL(memcpy) EXPORT_SYMBOL(memcpy)
SYM_FUNC_END_ALIAS(__memcpy) SYM_FUNC_END_ALIAS(__memcpy)
EXPORT_SYMBOL(__memcpy) EXPORT_SYMBOL(__memcpy)
SYM_FUNC_END_ALIAS_PI(memmove)
EXPORT_SYMBOL(memmove)
SYM_FUNC_END_ALIAS(__memmove)
EXPORT_SYMBOL(__memmove)


@@ -1,189 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2013 ARM Ltd.
* Copyright (C) 2013 Linaro.
*
* This code is based on glibc cortex strings work originally authored by Linaro
* be found @
*
* http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
* files/head:/src/aarch64/
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/cache.h>
/*
* Move a buffer from src to test (alignment handled by the hardware).
* If dest <= src, call memcpy, otherwise copy in reverse order.
*
* Parameters:
* x0 - dest
* x1 - src
* x2 - n
* Returns:
* x0 - dest
*/
dstin .req x0
src .req x1
count .req x2
tmp1 .req x3
tmp1w .req w3
tmp2 .req x4
tmp2w .req w4
tmp3 .req x5
tmp3w .req w5
dst .req x6
A_l .req x7
A_h .req x8
B_l .req x9
B_h .req x10
C_l .req x11
C_h .req x12
D_l .req x13
D_h .req x14
SYM_FUNC_START_ALIAS(__memmove)
SYM_FUNC_START_WEAK_PI(memmove)
cmp dstin, src
b.lo __memcpy
add tmp1, src, count
cmp dstin, tmp1
b.hs __memcpy /* No overlap. */
add dst, dstin, count
add src, src, count
cmp count, #16
b.lo .Ltail15 /*probably non-alignment accesses.*/
ands tmp2, src, #15 /* Bytes to reach alignment. */
b.eq .LSrcAligned
sub count, count, tmp2
/*
* process the aligned offset length to make the src aligned firstly.
* those extra instructions' cost is acceptable. It also make the
* coming accesses are based on aligned address.
*/
tbz tmp2, #0, 1f
ldrb tmp1w, [src, #-1]!
strb tmp1w, [dst, #-1]!
1:
tbz tmp2, #1, 2f
ldrh tmp1w, [src, #-2]!
strh tmp1w, [dst, #-2]!
2:
tbz tmp2, #2, 3f
ldr tmp1w, [src, #-4]!
str tmp1w, [dst, #-4]!
3:
tbz tmp2, #3, .LSrcAligned
ldr tmp1, [src, #-8]!
str tmp1, [dst, #-8]!
.LSrcAligned:
cmp count, #64
b.ge .Lcpy_over64
/*
* Deal with small copies quickly by dropping straight into the
* exit block.
*/
.Ltail63:
/*
* Copy up to 48 bytes of data. At this point we only need the
* bottom 6 bits of count to be accurate.
*/
ands tmp1, count, #0x30
b.eq .Ltail15
cmp tmp1w, #0x20
b.eq 1f
b.lt 2f
ldp A_l, A_h, [src, #-16]!
stp A_l, A_h, [dst, #-16]!
1:
ldp A_l, A_h, [src, #-16]!
stp A_l, A_h, [dst, #-16]!
2:
ldp A_l, A_h, [src, #-16]!
stp A_l, A_h, [dst, #-16]!
.Ltail15:
tbz count, #3, 1f
ldr tmp1, [src, #-8]!
str tmp1, [dst, #-8]!
1:
tbz count, #2, 2f
ldr tmp1w, [src, #-4]!
str tmp1w, [dst, #-4]!
2:
tbz count, #1, 3f
ldrh tmp1w, [src, #-2]!
strh tmp1w, [dst, #-2]!
3:
tbz count, #0, .Lexitfunc
ldrb tmp1w, [src, #-1]
strb tmp1w, [dst, #-1]
.Lexitfunc:
ret
.Lcpy_over64:
subs count, count, #128
b.ge .Lcpy_body_large
/*
* Less than 128 bytes to copy, so handle 64 bytes here and then jump
* to the tail.
*/
ldp A_l, A_h, [src, #-16]
stp A_l, A_h, [dst, #-16]
ldp B_l, B_h, [src, #-32]
ldp C_l, C_h, [src, #-48]
stp B_l, B_h, [dst, #-32]
stp C_l, C_h, [dst, #-48]
ldp D_l, D_h, [src, #-64]!
stp D_l, D_h, [dst, #-64]!
tst count, #0x3f
b.ne .Ltail63
ret
/*
* Critical loop. Start at a new cache line boundary. Assuming
* 64 bytes per line this ensures the entire loop is in one line.
*/
.p2align L1_CACHE_SHIFT
.Lcpy_body_large:
/* pre-load 64 bytes data. */
ldp A_l, A_h, [src, #-16]
ldp B_l, B_h, [src, #-32]
ldp C_l, C_h, [src, #-48]
ldp D_l, D_h, [src, #-64]!
1:
/*
* interlace the load of next 64 bytes data block with store of the last
* loaded 64 bytes data.
*/
stp A_l, A_h, [dst, #-16]
ldp A_l, A_h, [src, #-16]
stp B_l, B_h, [dst, #-32]
ldp B_l, B_h, [src, #-32]
stp C_l, C_h, [dst, #-48]
ldp C_l, C_h, [src, #-48]
stp D_l, D_h, [dst, #-64]!
ldp D_l, D_h, [src, #-64]!
subs count, count, #64
b.ge 1b
stp A_l, A_h, [dst, #-16]
stp B_l, B_h, [dst, #-32]
stp C_l, C_h, [dst, #-48]
stp D_l, D_h, [dst, #-64]!
tst count, #0x3f
b.ne .Ltail63
ret
SYM_FUNC_END_PI(memmove)
EXPORT_SYMBOL(memmove)
SYM_FUNC_END_ALIAS(__memmove)
EXPORT_SYMBOL(__memmove)


@@ -36,6 +36,26 @@ SYM_FUNC_START(mte_clear_page_tags)
	ret
SYM_FUNC_END(mte_clear_page_tags)

+/*
+ * Zero the page and tags at the same time
+ *
+ * Parameters:
+ *	x0 - address to the beginning of the page
+ */
+SYM_FUNC_START(mte_zero_clear_page_tags)
+	mrs	x1, dczid_el0
+	and	w1, w1, #0xf
+	mov	x2, #4
+	lsl	x1, x2, x1
+	and	x0, x0, #(1 << MTE_TAG_SHIFT) - 1	// clear the tag
+
+1:	dc	gzva, x0
+	add	x0, x0, x1
+	tst	x0, #(PAGE_SIZE - 1)
+	b.ne	1b
+	ret
+SYM_FUNC_END(mte_zero_clear_page_tags)
+
/*
 * Copy the tags from the source page to the destination one
 * x0 - address of the destination page
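mte_zero_clear_page_tags() derives its step size from DCZID_EL0: the low four bits (BS) give log2 of the block size in 4-byte words that DC GZVA zeroes, tags included. A rough C rendering of the loop structure (memset() merely stands in for the instruction, and 4K pages are assumed for the sketch):

	#include <stdint.h>
	#include <string.h>

	#define PAGE_SIZE	4096UL	/* assumption for this sketch */

	static void zero_page_by_gzva_blocks(void *page, uint64_t dczid_el0)
	{
		/* Block size in bytes = 4 << DCZID_EL0.BS (bits [3:0]). */
		unsigned long step = 4UL << (dczid_el0 & 0xf);
		unsigned long addr = (unsigned long)page;

		do {
			memset((void *)addr, 0, step);	/* models "dc gzva" */
			addr += step;
		} while (addr & (PAGE_SIZE - 1));
	}
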

Some files were not shown because too many files have changed in this diff.