Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:

 - Support for 32-bit tasks on asymmetric AArch32 systems (on top of
   the scheduler changes merged via the tip tree).

 - More entry.S clean-ups and conversion to C.

 - MTE updates: allow a preferred tag checking mode to be set per CPU
   (the overhead of synchronous mode is smaller for some CPUs than
   others); optimisations for kernel entry/exit path; optionally
   disable MTE on the kernel command line.

 - Kselftest improvements for SVE and signal handling, PtrAuth.

 - Fix unlikely race where a TLBI could use stale ASID on an ASID
   roll-over (found by inspection).

 - Miscellaneous fixes: disable trapping of PMSNEVFR_EL1 to higher
   exception levels; drop unnecessary sigdelsetmask() call in the
   signal32 handling; remove BUG_ON when failing to allocate SVE state
   (just signal the process); SYM_CODE annotations.

 - Other trivial clean-ups: use macros instead of magic numbers, remove
   redundant returns, typos.

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (56 commits)
  arm64: Do not trap PMSNEVFR_EL1
  arm64: mm: fix comment typo of pud_offset_phys()
  arm64: signal32: Drop pointless call to sigdelsetmask()
  arm64/sve: Better handle failure to allocate SVE register storage
  arm64: Document the requirement for SCR_EL3.HCE
  arm64: head: avoid over-mapping in map_memory
  arm64/sve: Add a comment documenting the binutils needed for SVE asm
  arm64/sve: Add some comments for sve_save/load_state()
  kselftest/arm64: signal: Add a TODO list for signal handling tests
  kselftest/arm64: signal: Add test case for SVE register state in signals
  kselftest/arm64: signal: Verify that signals can't change the SVE vector length
  kselftest/arm64: signal: Check SVE signal frame shows expected vector length
  kselftest/arm64: signal: Support signal frames with SVE register data
  kselftest/arm64: signal: Add SVE to the set of features we can check for
  arm64: replace in_irq() with in_hardirq()
  kselftest/arm64: pac: Fix skipping of tests on systems without PAC
  Documentation: arm64: describe asymmetric 32-bit support
  arm64: Remove logic to kill 32-bit tasks on 64-bit-only cores
  arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0
  arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  ...
commit 57c78a234e
Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -494,6 +494,15 @@ Description:	AArch64 CPU registers
		'identification' directory exposes the CPU ID registers for
		identifying model and revision of the CPU.

What:		/sys/devices/system/cpu/aarch32_el0
Date:		May 2021
Contact:	Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>
Description:	Identifies the subset of CPUs in the system that can execute
		AArch32 (32-bit ARM) applications. If present, the same format as
		/sys/devices/system/cpu/{offline,online,possible,present} is used.
		If absent, then all or none of the CPUs can execute AArch32
		applications and execve() will behave accordingly.

What:		/sys/devices/system/cpu/cpu#/cpu_capacity
Date:		December 2016
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
@@ -640,3 +649,20 @@ Description:	SPURR ticks for cpuX when it was idle.

		This sysfs interface exposes the number of SPURR ticks
		for cpuX when it was idle.

What:		/sys/devices/system/cpu/cpuX/mte_tcf_preferred
Date:		July 2021
Contact:	Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>
Description:	Preferred MTE tag checking mode

		When a user program specifies more than one MTE tag checking
		mode, this sysfs node is used to specify which mode should
		be preferred when scheduling a task on that CPU. Possible
		values:

		================  ==============================================
		"sync"            Prefer synchronous mode
		"async"           Prefer asynchronous mode
		================  ==============================================

		See also: Documentation/arm64/memory-tagging-extension.rst
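For illustration only (this sketch is not part of the ABI file above), a privileged userspace program could select the preferred mode for a CPU by writing to the node; the CPU number and the ``sync`` value are arbitrary examples, and the node is only present on MTE-capable kernels::

    /*
     * Illustrative sketch: set the preferred MTE tag checking mode for
     * CPU 0 from userspace. Assumes the node exists and the caller is
     * privileged; the path and values follow the ABI entry above.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            const char *node = "/sys/devices/system/cpu/cpu0/mte_tcf_preferred";
            FILE *f = fopen(node, "w");

            if (!f) {
                    perror(node);
                    return EXIT_FAILURE;
            }
            /* "sync" or "async", as documented above */
            if (fprintf(f, "sync\n") < 0 || fclose(f) == EOF) {
                    perror(node);
                    return EXIT_FAILURE;
            }
            return EXIT_SUCCESS;
    }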
Documentation/admin-guide/kernel-parameters.txt
@@ -287,6 +287,17 @@
			do not want to use tracing_snapshot_alloc() as it needs
			to be done where GFP_KERNEL allocations are allowed.

	allow_mismatched_32bit_el0 [ARM64]
			Allow execve() of 32-bit applications and setting of the
			PER_LINUX32 personality on systems where only a strict
			subset of the CPUs support 32-bit EL0. When this
			parameter is present, the set of CPUs supporting 32-bit
			EL0 is indicated by /sys/devices/system/cpu/aarch32_el0
			and hot-unplug operations may be restricted.

			See Documentation/arm64/asymmetric-32bit.rst for more
			information.

	amd_iommu=	[HW,X86-64]
			Pass parameters to the AMD IOMMU driver in the system.
			Possible values are:
@@ -380,6 +391,9 @@
	arm64.nopauth	[ARM64] Unconditionally disable Pointer Authentication
			support

	arm64.nomte	[ARM64] Unconditionally disable Memory Tagging Extension
			support

	ataflop=	[HW,M68k]

	atarimouse=	[HW,MOUSE] Atari Mouse
Documentation/arm64/asymmetric-32bit.rst (new file, 155 lines)
@@ -0,0 +1,155 @@
======================
Asymmetric 32-bit SoCs
======================

Author: Will Deacon <will@kernel.org>

This document describes the impact of asymmetric 32-bit SoCs on the
execution of 32-bit (``AArch32``) applications.

Date: 2021-05-17

Introduction
============

Some Armv9 SoCs suffer from a big.LITTLE misfeature where only a subset
of the CPUs are capable of executing 32-bit user applications. On such
a system, Linux by default treats the asymmetry as a "mismatch" and
disables support for both the ``PER_LINUX32`` personality and
``execve(2)`` of 32-bit ELF binaries, with the latter returning
``-ENOEXEC``. If the mismatch is detected during late onlining of a
64-bit-only CPU, then the onlining operation fails and the new CPU is
unavailable for scheduling.

Surprisingly, these SoCs have been produced with the intention of
running legacy 32-bit binaries. Unsurprisingly, that doesn't work very
well with the default behaviour of Linux.

It seems inevitable that future SoCs will drop 32-bit support
altogether, so if you're stuck in the unenviable position of needing to
run 32-bit code on one of these transitional platforms then you would
be wise to consider alternatives such as recompilation, emulation or
retirement. If none of those options is practical, then read on.
Enabling kernel support
=======================

Since the kernel support is not completely transparent to userspace,
allowing 32-bit tasks to run on an asymmetric 32-bit system requires an
explicit "opt-in" and can be enabled by passing the
``allow_mismatched_32bit_el0`` parameter on the kernel command-line.

For the remainder of this document we will refer to an *asymmetric
system* to mean an asymmetric 32-bit SoC running Linux with this kernel
command-line option enabled.

Userspace impact
================

32-bit tasks running on an asymmetric system behave in mostly the same
way as on a homogeneous system, with a few key differences relating to
CPU affinity.

sysfs
-----

The subset of CPUs capable of running 32-bit tasks is described in
``/sys/devices/system/cpu/aarch32_el0`` and is documented further in
``Documentation/ABI/testing/sysfs-devices-system-cpu``.

**Note:** CPUs are advertised by this file as they are detected and so
late-onlining of 32-bit-capable CPUs can result in the file contents
being modified by the kernel at runtime. Once advertised, CPUs are never
removed from the file.
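A rough userspace sketch of consuming this file (illustrative only, not part of the document above); it simply reports the advertised 32-bit-capable CPU list, or its absence::

    /*
     * Minimal sketch: read /sys/devices/system/cpu/aarch32_el0 and report
     * whether any CPUs can run AArch32 tasks. The file uses the same list
     * format as cpu/{online,possible,...}, e.g. "4-7"; absence of the file
     * means the system is not running with allow_mismatched_32bit_el0.
     */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char buf[256];
            FILE *f = fopen("/sys/devices/system/cpu/aarch32_el0", "r");

            if (!f) {
                    printf("aarch32_el0 not present (%s): all or no CPUs are 32-bit capable\n",
                           strerror(errno));
                    return 0;
            }
            if (fgets(buf, sizeof(buf), f))
                    printf("32-bit-capable CPUs: %s", buf);
            fclose(f);
            return 0;
    }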
``execve(2)``
-------------

On a homogeneous system, the CPU affinity of a task is preserved across
``execve(2)``. This is not always possible on an asymmetric system,
specifically when the new program being executed is 32-bit yet the
affinity mask contains 64-bit-only CPUs. In this situation, the kernel
determines the new affinity mask as follows:

  1. If the 32-bit-capable subset of the affinity mask is not empty,
     then the affinity is restricted to that subset and the old affinity
     mask is saved. This saved mask is inherited over ``fork(2)`` and
     preserved across ``execve(2)`` of 32-bit programs.

     **Note:** This step does not apply to ``SCHED_DEADLINE`` tasks.
     See `SCHED_DEADLINE`_.

  2. Otherwise, the cpuset hierarchy of the task is walked until an
     ancestor is found containing at least one 32-bit-capable CPU. The
     affinity of the task is then changed to match the 32-bit-capable
     subset of the cpuset determined by the walk.

  3. On failure (i.e. out of memory), the affinity is changed to the set
     of all 32-bit-capable CPUs of which the kernel is aware.

A subsequent ``execve(2)`` of a 64-bit program by the 32-bit task will
invalidate the affinity mask saved in (1) and attempt to restore the CPU
affinity of the task using the saved mask if it was previously valid.
This restoration may fail due to intervening changes to the deadline
policy or cpuset hierarchy, in which case the ``execve(2)`` continues
with the affinity unchanged.

Calls to ``sched_setaffinity(2)`` for a 32-bit task will consider only
the 32-bit-capable CPUs of the requested affinity mask. On success, the
affinity for the task is updated and any saved mask from a prior
``execve(2)`` is invalidated.
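As an illustrative sketch of the behaviour above (not part of the upstream document), a 64-bit launcher with a full affinity mask can exec a 32-bit helper and rely on the kernel performing the fallback in step (1); the helper's path is hypothetical and it would typically call ``sched_getaffinity(2)`` itself to observe the narrowed mask::

    /*
     * Minimal sketch of the interaction described above. A 64-bit task
     * with an affinity mask covering all CPUs execs a 32-bit program; on
     * an asymmetric system the kernel narrows the affinity to the
     * 32-bit-capable subset (step 1) and saves the old mask.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            cpu_set_t all;
            int cpu, nr = sysconf(_SC_NPROCESSORS_CONF);

            CPU_ZERO(&all);
            for (cpu = 0; cpu < nr; cpu++)
                    CPU_SET(cpu, &all);

            /* Affinity may include 64-bit-only CPUs at this point. */
            if (sched_setaffinity(0, sizeof(all), &all))
                    perror("sched_setaffinity");

            /* Hypothetical 32-bit binary; execve() triggers the fallback above. */
            execl("./hello32", "hello32", (char *)NULL);
            perror("execl");
            return 1;
    }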
``SCHED_DEADLINE``
------------------

Explicit admission of a 32-bit deadline task to the default root domain
(e.g. by calling ``sched_setattr(2)``) is rejected on an asymmetric
32-bit system unless admission control is disabled by writing -1 to
``/proc/sys/kernel/sched_rt_runtime_us``.

``execve(2)`` of a 32-bit program from a 64-bit deadline task will
return ``-ENOEXEC`` if the root domain for the task contains any
64-bit-only CPUs and admission control is enabled. Concurrent offlining
of 32-bit-capable CPUs may still necessitate the procedure described in
`execve(2)`_, in which case step (1) is skipped and a warning is
emitted on the console.

**Note:** It is recommended that a set of 32-bit-capable CPUs be placed
into a separate root domain if ``SCHED_DEADLINE`` is to be used with
32-bit tasks on an asymmetric system. Failure to do so is likely to
result in missed deadlines.
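A minimal sketch of the admission attempt described above (illustrative only, with arbitrary deadline parameters); on an asymmetric system's default root domain it is expected to fail unless admission control has been disabled::

    #define _GNU_SOURCE
    #include <linux/sched.h>        /* SCHED_DEADLINE */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Minimal local definition; glibc provides no sched_setattr() wrapper. */
    struct sched_attr {
            uint32_t size;
            uint32_t sched_policy;
            uint64_t sched_flags;
            int32_t  sched_nice;
            uint32_t sched_priority;
            uint64_t sched_runtime;
            uint64_t sched_deadline;
            uint64_t sched_period;
    };

    int main(void)
    {
            struct sched_attr attr = {
                    .size           = sizeof(attr),
                    .sched_policy   = SCHED_DEADLINE,
                    .sched_runtime  = 10 * 1000 * 1000,     /* 10ms */
                    .sched_deadline = 30 * 1000 * 1000,     /* 30ms */
                    .sched_period   = 30 * 1000 * 1000,     /* 30ms */
            };

            if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
                    perror("sched_setattr");        /* e.g. rejected by admission control */
                    return 1;
            }
            puts("admitted to SCHED_DEADLINE");
            return 0;
    }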
Cpusets
-------

The affinity of a 32-bit task on an asymmetric system may include CPUs
that are not explicitly allowed by the cpuset to which it is attached.
This can occur as a result of the following two situations:

  - A 64-bit task attached to a cpuset which allows only 64-bit CPUs
    executes a 32-bit program.

  - All of the 32-bit-capable CPUs allowed by a cpuset containing a
    32-bit task are offlined.

In both of these cases, the new affinity is calculated according to step
(2) of the process described in `execve(2)`_ and the cpuset hierarchy is
unchanged irrespective of the cgroup version.

CPU hotplug
-----------

On an asymmetric system, the first detected 32-bit-capable CPU is
prevented from being offlined by userspace and any such attempt will
return ``-EPERM``. Note that suspend is still permitted even if the
primary CPU (i.e. CPU 0) is 64-bit-only.

KVM
---

Although KVM will not advertise 32-bit EL0 support to any vCPUs on an
asymmetric system, a broken guest at EL1 could still attempt to execute
32-bit code at EL0. In this case, an exit from a vCPU thread in 32-bit
mode will return to host userspace with an ``exit_reason`` of
``KVM_EXIT_FAIL_ENTRY``, and the vCPU will remain non-runnable until
successfully re-initialised by a subsequent ``KVM_ARM_VCPU_INIT``
operation.
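An illustrative VMM fragment (not from the kernel sources) showing how host userspace might react to such an exit; ``vcpu_fd``, ``run`` and ``vcpu_init`` are assumed to have been set up by the usual KVM ioctls::

    /*
     * Illustrative fragment, not a complete program: after KVM_RUN, a
     * vCPU that attempted to enter 32-bit EL0 on an asymmetric system
     * reports KVM_EXIT_FAIL_ENTRY, and userspace must re-initialise it
     * with KVM_ARM_VCPU_INIT before it can run again.
     */
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    int handle_run(int vcpu_fd, struct kvm_run *run,
                   struct kvm_vcpu_init *vcpu_init)
    {
            if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
                    return -1;

            if (run->exit_reason == KVM_EXIT_FAIL_ENTRY) {
                    fprintf(stderr, "vCPU failed entry (broken 32-bit guest?)\n");
                    /* Make the vCPU runnable again, as described above. */
                    return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, vcpu_init);
            }
            return 0;
    }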
Documentation/arm64/booting.rst
@@ -207,10 +207,17 @@ Before jumping into the kernel, the following conditions must be met:
    software at a higher exception level to prevent execution in an UNKNOWN
    state.

  - SCR_EL3.FIQ must have the same value across all CPUs the kernel is
    executing on.
  - The value of SCR_EL3.FIQ must be the same as the one present at boot
    time whenever the kernel is executing.
  For all systems:
  - If EL3 is present:

    - SCR_EL3.FIQ must have the same value across all CPUs the kernel is
      executing on.
    - The value of SCR_EL3.FIQ must be the same as the one present at boot
      time whenever the kernel is executing.

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HCE (bit 8) must be initialised to 0b1.

  For systems with a GICv3 interrupt controller to be used in v3 mode:
  - If EL3 is present:
@@ -311,6 +318,28 @@ Before jumping into the kernel, the following conditions must be met:
    - ZCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  For CPUs with the Scalable Matrix Extension (FEAT_SME):

  - If EL3 is present:

    - CPTR_EL3.ESM (bit 12) must be initialised to 0b1.

    - SCR_EL3.EnTP2 (bit 41) must be initialised to 0b1.

    - SMCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TSM (bit 12) must be initialised to 0b0.

    - CPTR_EL2.SMEN (bits 25:24) must be initialised to 0b11.

    - SCTLR_EL2.EnTP2 (bit 60) must be initialised to 0b1.

    - SMCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  The requirements described above for CPU mode, caches, MMUs, architected
  timers, coherency and system registers apply to all CPUs. All CPUs must
  enter the kernel in the same exception level. Where the values documented
Documentation/arm64/index.rst
@@ -10,6 +10,7 @@ ARM64 Architecture
   acpi_object_usage
   amu
   arm-acpi
   asymmetric-32bit
   booting
   cpu-feature-registers
   elf_hwcaps
Documentation/arm64/memory-tagging-extension.rst
@@ -77,14 +77,20 @@ configurable behaviours:
  address is unknown).

The user can select the above modes, per thread, using the
``prctl(PR_SET_TAGGED_ADDR_CTRL, flags, 0, 0, 0)`` system call where
``flags`` contain one of the following values in the ``PR_MTE_TCF_MASK``
``prctl(PR_SET_TAGGED_ADDR_CTRL, flags, 0, 0, 0)`` system call where ``flags``
contains any number of the following values in the ``PR_MTE_TCF_MASK``
bit-field:

- ``PR_MTE_TCF_NONE``  - *Ignore* tag check faults
- ``PR_MTE_TCF_NONE``  - *Ignore* tag check faults
                         (ignored if combined with other options)
- ``PR_MTE_TCF_SYNC``  - *Synchronous* tag check fault mode
- ``PR_MTE_TCF_ASYNC`` - *Asynchronous* tag check fault mode

If no modes are specified, tag check faults are ignored. If a single
mode is specified, the program will run in that mode. If multiple
modes are specified, the mode is selected as described in the "Per-CPU
preferred tag checking modes" section below.

The current tag check fault mode can be read using the
``prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0)`` system call.
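For example, a task can query its current configuration and decode the ``PR_MTE_TCF_*`` bits as follows (a minimal sketch, assuming the prctl constants are exposed by ``<linux/prctl.h>``)::

    #include <linux/prctl.h>
    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            int ctrl = prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0);

            if (ctrl < 0) {
                    perror("prctl(PR_GET_TAGGED_ADDR_CTRL)");
                    return 1;
            }
            /* PR_MTE_TCF_NONE is zero, so only the set bits need testing. */
            printf("sync: %s, async: %s\n",
                   (ctrl & PR_MTE_TCF_SYNC) ? "yes" : "no",
                   (ctrl & PR_MTE_TCF_ASYNC) ? "yes" : "no");
            return 0;
    }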
@@ -120,13 +126,39 @@ in the ``PR_MTE_TAG_MASK`` bit-field.
interface provides an include mask. An include mask of ``0`` (exclusion
mask ``0xffff``) results in the CPU always generating tag ``0``.

Per-CPU preferred tag checking mode
-----------------------------------

On some CPUs the performance of MTE in stricter tag checking modes
is similar to that of less strict tag checking modes. This makes it
worthwhile to enable stricter checks on those CPUs when a less strict
checking mode is requested, in order to gain the error detection
benefits of the stricter checks without the performance downsides. To
support this scenario, a privileged user may configure a stricter
tag checking mode as the CPU's preferred tag checking mode.

The preferred tag checking mode for each CPU is controlled by
``/sys/devices/system/cpu/cpu<N>/mte_tcf_preferred``, to which a
privileged user may write the value ``async`` or ``sync``. The default
preferred mode for each CPU is ``async``.

To allow a program to potentially run in the CPU's preferred tag
checking mode, the user program may set multiple tag check fault mode
bits in the ``flags`` argument to the ``prctl(PR_SET_TAGGED_ADDR_CTRL,
flags, 0, 0, 0)`` system call. If the CPU's preferred tag checking
mode is in the task's set of provided tag checking modes (this will
always be the case at present because the kernel only supports two
tag checking modes, but future kernels may support more modes), that
mode will be selected. Otherwise, one of the modes in the task's mode
set will be selected in a currently unspecified manner.
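The selection rule can be summarised by the following sketch, which uses hypothetical helper and constant names rather than the kernel's internal ones; the task's requested modes and the CPU preference are both expressed as simple bitmasks in the spirit of ``MTE_CTRL_TCF_*``::

    #define TCF_SYNC        (1u << 0)
    #define TCF_ASYNC       (1u << 1)

    static unsigned int resolve_tcf_mode(unsigned int task_modes,
                                         unsigned int cpu_preferred)
    {
            /* Nothing requested: tag check faults are ignored. */
            if (!task_modes)
                    return 0;

            /* The CPU's preferred mode wins if the task allows it... */
            if (task_modes & cpu_preferred)
                    return cpu_preferred;

            /* ...otherwise fall back to any mode the task asked for. */
            return (task_modes & TCF_SYNC) ? TCF_SYNC : TCF_ASYNC;
    }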
Initial process state
---------------------

On ``execve()``, the new process has the following configuration:

- ``PR_TAGGED_ADDR_ENABLE`` set to 0 (disabled)
- Tag checking mode set to ``PR_MTE_TCF_NONE``
- No tag checking modes are selected (tag check faults ignored)
- ``PR_MTE_TAG_MASK`` set to 0 (all tags excluded)
- ``PSTATE.TCO`` set to 0
- ``PROT_MTE`` not set on any of the initial memory maps

@@ -251,11 +283,13 @@ Example of correct usage
                return EXIT_FAILURE;

        /*
         * Enable the tagged address ABI, synchronous MTE tag check faults and
         * allow all non-zero tags in the randomly generated set.
         * Enable the tagged address ABI, synchronous or asynchronous MTE
         * tag check faults (based on per-CPU preference) and allow all
         * non-zero tags in the randomly generated set.
         */
        if (prctl(PR_SET_TAGGED_ADDR_CTRL,
                  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC | (0xfffe << PR_MTE_TAG_SHIFT),
                  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC |
                  (0xfffe << PR_MTE_TAG_SHIFT),
                  0, 0, 0)) {
                perror("prctl() failed");
                return EXIT_FAILURE;
@ -157,8 +157,11 @@ Image: vmlinux
|
||||
Image.%: Image
|
||||
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
|
||||
|
||||
zinstall install:
|
||||
$(Q)$(MAKE) $(build)=$(boot) $@
|
||||
install: install-image := Image
|
||||
zinstall: install-image := Image.gz
|
||||
install zinstall:
|
||||
$(CONFIG_SHELL) $(srctree)/$(boot)/install.sh $(KERNELRELEASE) \
|
||||
$(boot)/$(install-image) System.map "$(INSTALL_PATH)"
|
||||
|
||||
PHONY += vdso_install
|
||||
vdso_install:
|
||||
|
@ -35,11 +35,3 @@ $(obj)/Image.lzma: $(obj)/Image FORCE
|
||||
|
||||
$(obj)/Image.lzo: $(obj)/Image FORCE
|
||||
$(call if_changed,lzo)
|
||||
|
||||
install:
|
||||
$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
|
||||
$(obj)/Image System.map "$(INSTALL_PATH)"
|
||||
|
||||
zinstall:
|
||||
$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
|
||||
$(obj)/Image.gz System.map "$(INSTALL_PATH)"
|
||||
|
@ -552,7 +552,7 @@ cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
|
||||
u64 mask = GENMASK_ULL(field + 3, field);
|
||||
|
||||
/* Treat IMPLEMENTATION DEFINED functionality as unimplemented */
|
||||
if (val == 0xf)
|
||||
if (val == ID_AA64DFR0_PMUVER_IMP_DEF)
|
||||
val = 0;
|
||||
|
||||
if (val > cap) {
|
||||
@ -657,7 +657,8 @@ static inline bool system_supports_4kb_granule(void)
|
||||
val = cpuid_feature_extract_unsigned_field(mmfr0,
|
||||
ID_AA64MMFR0_TGRAN4_SHIFT);
|
||||
|
||||
return val == ID_AA64MMFR0_TGRAN4_SUPPORTED;
|
||||
return (val >= ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN) &&
|
||||
(val <= ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX);
|
||||
}
|
||||
|
||||
static inline bool system_supports_64kb_granule(void)
|
||||
@ -669,7 +670,8 @@ static inline bool system_supports_64kb_granule(void)
|
||||
val = cpuid_feature_extract_unsigned_field(mmfr0,
|
||||
ID_AA64MMFR0_TGRAN64_SHIFT);
|
||||
|
||||
return val == ID_AA64MMFR0_TGRAN64_SUPPORTED;
|
||||
return (val >= ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN) &&
|
||||
(val <= ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX);
|
||||
}
|
||||
|
||||
static inline bool system_supports_16kb_granule(void)
|
||||
@ -681,7 +683,8 @@ static inline bool system_supports_16kb_granule(void)
|
||||
val = cpuid_feature_extract_unsigned_field(mmfr0,
|
||||
ID_AA64MMFR0_TGRAN16_SHIFT);
|
||||
|
||||
return val == ID_AA64MMFR0_TGRAN16_SUPPORTED;
|
||||
return (val >= ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN) &&
|
||||
(val <= ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX);
|
||||
}
|
||||
|
||||
static inline bool system_supports_mixed_endian_el0(void)
|
||||
|
@ -149,8 +149,17 @@
|
||||
ubfx x1, x1, #ID_AA64MMFR0_FGT_SHIFT, #4
|
||||
cbz x1, .Lskip_fgt_\@
|
||||
|
||||
msr_s SYS_HDFGRTR_EL2, xzr
|
||||
msr_s SYS_HDFGWTR_EL2, xzr
|
||||
mov x0, xzr
|
||||
mrs x1, id_aa64dfr0_el1
|
||||
ubfx x1, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4
|
||||
cmp x1, #3
|
||||
b.lt .Lset_fgt_\@
|
||||
/* Disable PMSNEVFR_EL1 read and write traps */
|
||||
orr x0, x0, #(1 << 62)
|
||||
|
||||
.Lset_fgt_\@:
|
||||
msr_s SYS_HDFGRTR_EL2, x0
|
||||
msr_s SYS_HDFGWTR_EL2, x0
|
||||
msr_s SYS_HFGRTR_EL2, xzr
|
||||
msr_s SYS_HFGWTR_EL2, xzr
|
||||
msr_s SYS_HFGITR_EL2, xzr
|
||||
|
@ -213,10 +213,8 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
|
||||
|
||||
/* AArch32 EABI. */
|
||||
#define EF_ARM_EABI_MASK 0xff000000
|
||||
#define compat_elf_check_arch(x) (system_supports_32bit_el0() && \
|
||||
((x)->e_machine == EM_ARM) && \
|
||||
((x)->e_flags & EF_ARM_EABI_MASK))
|
||||
|
||||
int compat_elf_check_arch(const struct elf32_hdr *);
|
||||
#define compat_elf_check_arch compat_elf_check_arch
|
||||
#define compat_start_thread compat_start_thread
|
||||
/*
|
||||
* Unlike the native SET_PERSONALITY macro, the compat version maintains
|
||||
|
@ -55,8 +55,8 @@ asmlinkage void el0t_32_error_handler(struct pt_regs *regs);
|
||||
|
||||
asmlinkage void call_on_irq_stack(struct pt_regs *regs,
|
||||
void (*func)(struct pt_regs *));
|
||||
asmlinkage void enter_from_user_mode(void);
|
||||
asmlinkage void exit_to_user_mode(void);
|
||||
asmlinkage void asm_exit_to_user_mode(struct pt_regs *regs);
|
||||
|
||||
void do_mem_abort(unsigned long far, unsigned int esr, struct pt_regs *regs);
|
||||
void do_undefinstr(struct pt_regs *regs);
|
||||
void do_bti(struct pt_regs *regs);
|
||||
@ -73,6 +73,7 @@ void do_el0_svc(struct pt_regs *regs);
|
||||
void do_el0_svc_compat(struct pt_regs *regs);
|
||||
void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
|
||||
void do_serror(struct pt_regs *regs, unsigned int esr);
|
||||
void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags);
|
||||
|
||||
void panic_bad_stack(struct pt_regs *regs, unsigned int esr, unsigned long far);
|
||||
#endif /* __ASM_EXCEPTION_H */
|
||||
|
@ -45,7 +45,6 @@ extern void fpsimd_preserve_current_state(void);
|
||||
extern void fpsimd_restore_current_state(void);
|
||||
extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
|
||||
|
||||
extern void fpsimd_bind_task_to_cpu(void);
|
||||
extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
|
||||
void *sve_state, unsigned int sve_vl);
|
||||
|
||||
|
@ -94,6 +94,7 @@
|
||||
.endm
|
||||
|
||||
/* SVE instruction encodings for non-SVE-capable assemblers */
|
||||
/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
|
||||
|
||||
/* STR (vector): STR Z\nz, [X\nxbase, #\offset, MUL VL] */
|
||||
.macro _sve_str_v nz, nxbase, offset=0
|
||||
|
@ -65,8 +65,8 @@
|
||||
#define EARLY_KASLR (0)
|
||||
#endif
|
||||
|
||||
#define EARLY_ENTRIES(vstart, vend, shift) (((vend) >> (shift)) \
|
||||
- ((vstart) >> (shift)) + 1 + EARLY_KASLR)
|
||||
#define EARLY_ENTRIES(vstart, vend, shift) \
|
||||
((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)
|
||||
|
||||
#define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT))
|
||||
|
||||
|
@ -243,9 +243,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
|
||||
#ifdef CONFIG_KASAN_HW_TAGS
|
||||
#define arch_enable_tagging_sync() mte_enable_kernel_sync()
|
||||
#define arch_enable_tagging_async() mte_enable_kernel_async()
|
||||
#define arch_set_tagging_report_once(state) mte_set_report_once(state)
|
||||
#define arch_force_async_tag_fault() mte_check_tfsr_exit()
|
||||
#define arch_init_tags(max_tag) mte_init_tags(max_tag)
|
||||
#define arch_get_random_tag() mte_get_random_tag()
|
||||
#define arch_get_mem_tag(addr) mte_get_mem_tag(addr)
|
||||
#define arch_set_mem_tag_range(addr, size, tag, init) \
|
||||
|
@ -27,11 +27,32 @@ typedef struct {
|
||||
} mm_context_t;
|
||||
|
||||
/*
|
||||
* This macro is only used by the TLBI and low-level switch_mm() code,
|
||||
* neither of which can race with an ASID change. We therefore don't
|
||||
* need to reload the counter using atomic64_read().
|
||||
* We use atomic64_read() here because the ASID for an 'mm_struct' can
|
||||
* be reallocated when scheduling one of its threads following a
|
||||
* rollover event (see new_context() and flush_context()). In this case,
|
||||
* a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
|
||||
* may use a stale ASID. This is fine in principle as the new ASID is
|
||||
* guaranteed to be clean in the TLB, but the TLBI routines have to take
|
||||
* care to handle the following race:
|
||||
*
|
||||
* CPU 0 CPU 1 CPU 2
|
||||
*
|
||||
* // ptep_clear_flush(mm)
|
||||
* xchg_relaxed(pte, 0)
|
||||
* DSB ISHST
|
||||
* old = ASID(mm)
|
||||
* | <rollover>
|
||||
* | new = new_context(mm)
|
||||
* \-----------------> atomic_set(mm->context.id, new)
|
||||
* cpu_switch_mm(mm)
|
||||
* // Hardware walk of pte using new ASID
|
||||
* TLBI(old)
|
||||
*
|
||||
* In this scenario, the barrier on CPU 0 and the dependency on CPU 1
|
||||
* ensure that the page-table walker on CPU 1 *must* see the invalid PTE
|
||||
* written by CPU 0.
|
||||
*/
|
||||
#define ASID(mm) ((mm)->context.id.counter & 0xffff)
|
||||
#define ASID(mm) (atomic64_read(&(mm)->context.id) & 0xffff)
|
||||
|
||||
static inline bool arm64_kernel_unmapped_at_el0(void)
|
||||
{
|
||||
|
@ -231,6 +231,19 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
|
||||
update_saved_ttbr0(tsk, next);
|
||||
}
|
||||
|
||||
static inline const struct cpumask *
|
||||
task_cpu_possible_mask(struct task_struct *p)
|
||||
{
|
||||
if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
|
||||
return cpu_possible_mask;
|
||||
|
||||
if (!is_compat_thread(task_thread_info(p)))
|
||||
return cpu_possible_mask;
|
||||
|
||||
return system_32bit_el0_cpumask();
|
||||
}
|
||||
#define task_cpu_possible_mask task_cpu_possible_mask
|
||||
|
||||
void verify_cpu_asid_bits(void);
|
||||
void post_ttbr_update_workaround(void);
|
||||
|
||||
|
@ -130,10 +130,6 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
|
||||
|
||||
void mte_enable_kernel_sync(void);
|
||||
void mte_enable_kernel_async(void);
|
||||
void mte_init_tags(u64 max_tag);
|
||||
|
||||
void mte_set_report_once(bool state);
|
||||
bool mte_report_once(void);
|
||||
|
||||
#else /* CONFIG_ARM64_MTE */
|
||||
|
||||
@ -165,19 +161,6 @@ static inline void mte_enable_kernel_async(void)
|
||||
{
|
||||
}
|
||||
|
||||
static inline void mte_init_tags(u64 max_tag)
|
||||
{
|
||||
}
|
||||
|
||||
static inline void mte_set_report_once(bool state)
|
||||
{
|
||||
}
|
||||
|
||||
static inline bool mte_report_once(void)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
#endif /* CONFIG_ARM64_MTE */
|
||||
|
||||
#endif /* __ASSEMBLY__ */
|
||||
|
@ -16,8 +16,6 @@
|
||||
|
||||
#include <asm/pgtable-types.h>
|
||||
|
||||
extern u64 gcr_kernel_excl;
|
||||
|
||||
void mte_clear_page_tags(void *addr);
|
||||
unsigned long mte_copy_tags_from_user(void *to, const void __user *from,
|
||||
unsigned long n);
|
||||
@ -43,7 +41,6 @@ void mte_copy_page_tags(void *kto, const void *kfrom);
|
||||
void mte_thread_init_user(void);
|
||||
void mte_thread_switch(struct task_struct *next);
|
||||
void mte_suspend_enter(void);
|
||||
void mte_suspend_exit(void);
|
||||
long set_mte_ctrl(struct task_struct *task, unsigned long arg);
|
||||
long get_mte_ctrl(struct task_struct *task);
|
||||
int mte_ptrace_copy_tags(struct task_struct *child, long request,
|
||||
@ -72,9 +69,6 @@ static inline void mte_thread_switch(struct task_struct *next)
|
||||
static inline void mte_suspend_enter(void)
|
||||
{
|
||||
}
|
||||
static inline void mte_suspend_exit(void)
|
||||
{
|
||||
}
|
||||
static inline long set_mte_ctrl(struct task_struct *task, unsigned long arg)
|
||||
{
|
||||
return 0;
|
||||
|
@ -715,7 +715,7 @@ static inline pud_t *p4d_pgtable(p4d_t p4d)
|
||||
return (pud_t *)__va(p4d_page_paddr(p4d));
|
||||
}
|
||||
|
||||
/* Find an entry in the frst-level page table. */
|
||||
/* Find an entry in the first-level page table. */
|
||||
#define pud_offset_phys(dir, addr) (p4d_page_paddr(READ_ONCE(*(dir))) + pud_index(addr) * sizeof(pud_t))
|
||||
|
||||
#define pud_set_fixmap(addr) ((pud_t *)set_fixmap_offset(FIX_PUD, addr))
|
||||
|
@ -10,6 +10,9 @@
|
||||
#include <asm/memory.h>
|
||||
#include <asm/sysreg.h>
|
||||
|
||||
#define PR_PAC_ENABLED_KEYS_MASK \
|
||||
(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
|
||||
|
||||
#ifdef CONFIG_ARM64_PTR_AUTH
|
||||
/*
|
||||
* Each key is a 128-bit quantity which is split across a pair of 64-bit
|
||||
@ -117,9 +120,9 @@ static __always_inline void ptrauth_enable(void)
|
||||
\
|
||||
/* enable all keys */ \
|
||||
if (system_supports_address_auth()) \
|
||||
set_task_sctlr_el1(current->thread.sctlr_user | \
|
||||
SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
|
||||
SCTLR_ELx_ENDA | SCTLR_ELx_ENDB); \
|
||||
ptrauth_set_enabled_keys(current, \
|
||||
PR_PAC_ENABLED_KEYS_MASK, \
|
||||
PR_PAC_ENABLED_KEYS_MASK); \
|
||||
} while (0)
|
||||
|
||||
#define ptrauth_thread_switch_user(tsk) \
|
||||
@ -146,7 +149,4 @@ static __always_inline void ptrauth_enable(void)
|
||||
#define ptrauth_thread_switch_kernel(tsk)
|
||||
#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
|
||||
|
||||
#define PR_PAC_ENABLED_KEYS_MASK \
|
||||
(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
|
||||
|
||||
#endif /* __ASM_POINTER_AUTH_H */
|
||||
|
@ -16,6 +16,12 @@
|
||||
*/
|
||||
#define NET_IP_ALIGN 0
|
||||
|
||||
#define MTE_CTRL_GCR_USER_EXCL_SHIFT 0
|
||||
#define MTE_CTRL_GCR_USER_EXCL_MASK 0xffff
|
||||
|
||||
#define MTE_CTRL_TCF_SYNC (1UL << 16)
|
||||
#define MTE_CTRL_TCF_ASYNC (1UL << 17)
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
#include <linux/build_bug.h>
|
||||
@ -153,7 +159,7 @@ struct thread_struct {
|
||||
#endif
|
||||
#endif
|
||||
#ifdef CONFIG_ARM64_MTE
|
||||
u64 gcr_user_excl;
|
||||
u64 mte_ctrl;
|
||||
#endif
|
||||
u64 sctlr_user;
|
||||
};
|
||||
@ -253,7 +259,7 @@ extern void release_thread(struct task_struct *);
|
||||
|
||||
unsigned long get_wchan(struct task_struct *p);
|
||||
|
||||
void set_task_sctlr_el1(u64 sctlr);
|
||||
void update_sctlr_el1(u64 sctlr);
|
||||
|
||||
/* Thread switching */
|
||||
extern struct task_struct *cpu_switch_to(struct task_struct *prev,
|
||||
|
@ -37,7 +37,7 @@ static __must_check inline bool may_use_simd(void)
|
||||
*/
|
||||
return !WARN_ON(!system_capabilities_finalized()) &&
|
||||
system_supports_fpsimd() &&
|
||||
!in_irq() && !irqs_disabled() && !in_nmi() &&
|
||||
!in_hardirq() && !irqs_disabled() && !in_nmi() &&
|
||||
!this_cpu_read(fpsimd_context_busy);
|
||||
}
|
||||
|
||||
|
@ -11,6 +11,7 @@
|
||||
|
||||
#include <linux/bits.h>
|
||||
#include <linux/stringify.h>
|
||||
#include <linux/kasan-tags.h>
|
||||
|
||||
/*
|
||||
* ARMv8 ARM reserves the following encoding for system registers:
|
||||
@ -698,8 +699,7 @@
|
||||
(SCTLR_ELx_M | SCTLR_ELx_C | SCTLR_ELx_SA | SCTLR_EL1_SA0 | \
|
||||
SCTLR_EL1_SED | SCTLR_ELx_I | SCTLR_EL1_DZE | SCTLR_EL1_UCT | \
|
||||
SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \
|
||||
SCTLR_ELx_ATA | SCTLR_EL1_ATA0 | ENDIAN_SET_EL1 | SCTLR_EL1_UCI | \
|
||||
SCTLR_EL1_EPAN | SCTLR_EL1_RES1)
|
||||
ENDIAN_SET_EL1 | SCTLR_EL1_UCI | SCTLR_EL1_EPAN | SCTLR_EL1_RES1)
|
||||
|
||||
/* MAIR_ELx memory attributes (used by Linux) */
|
||||
#define MAIR_ATTR_DEVICE_nGnRnE UL(0x00)
|
||||
@ -847,12 +847,16 @@
|
||||
#define ID_AA64MMFR0_ASID_SHIFT 4
|
||||
#define ID_AA64MMFR0_PARANGE_SHIFT 0
|
||||
|
||||
#define ID_AA64MMFR0_TGRAN4_NI 0xf
|
||||
#define ID_AA64MMFR0_TGRAN4_SUPPORTED 0x0
|
||||
#define ID_AA64MMFR0_TGRAN64_NI 0xf
|
||||
#define ID_AA64MMFR0_TGRAN64_SUPPORTED 0x0
|
||||
#define ID_AA64MMFR0_TGRAN16_NI 0x0
|
||||
#define ID_AA64MMFR0_TGRAN16_SUPPORTED 0x1
|
||||
#define ID_AA64MMFR0_TGRAN4_NI 0xf
|
||||
#define ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN 0x0
|
||||
#define ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX 0x7
|
||||
#define ID_AA64MMFR0_TGRAN64_NI 0xf
|
||||
#define ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 0x0
|
||||
#define ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 0x7
|
||||
#define ID_AA64MMFR0_TGRAN16_NI 0x0
|
||||
#define ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 0x1
|
||||
#define ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 0xf
|
||||
|
||||
#define ID_AA64MMFR0_PARANGE_48 0x5
|
||||
#define ID_AA64MMFR0_PARANGE_52 0x6
|
||||
|
||||
@ -1028,16 +1032,16 @@
|
||||
|
||||
#if defined(CONFIG_ARM64_4K_PAGES)
|
||||
#define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN4_SHIFT
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN4_SUPPORTED
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX 0x7
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX
|
||||
#elif defined(CONFIG_ARM64_16K_PAGES)
|
||||
#define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN16_SHIFT
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN16_SUPPORTED
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX 0xF
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX
|
||||
#elif defined(CONFIG_ARM64_64K_PAGES)
|
||||
#define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN64_SHIFT
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN64_SUPPORTED
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX 0x7
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN
|
||||
#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX
|
||||
#endif
|
||||
|
||||
#define MVFR2_FPMISC_SHIFT 4
|
||||
@ -1067,6 +1071,21 @@
|
||||
#define SYS_GCR_EL1_RRND (BIT(16))
|
||||
#define SYS_GCR_EL1_EXCL_MASK 0xffffUL
|
||||
|
||||
#ifdef CONFIG_KASAN_HW_TAGS
|
||||
/*
|
||||
* KASAN always uses a whole byte for its tags. With CONFIG_KASAN_HW_TAGS it
|
||||
* only uses tags in the range 0xF0-0xFF, which we map to MTE tags 0x0-0xF.
|
||||
*/
|
||||
#define __MTE_TAG_MIN (KASAN_TAG_MIN & 0xf)
|
||||
#define __MTE_TAG_MAX (KASAN_TAG_MAX & 0xf)
|
||||
#define __MTE_TAG_INCL GENMASK(__MTE_TAG_MAX, __MTE_TAG_MIN)
|
||||
#define KERNEL_GCR_EL1_EXCL (SYS_GCR_EL1_EXCL_MASK & ~__MTE_TAG_INCL)
|
||||
#else
|
||||
#define KERNEL_GCR_EL1_EXCL SYS_GCR_EL1_EXCL_MASK
|
||||
#endif
|
||||
|
||||
#define KERNEL_GCR_EL1 (SYS_GCR_EL1_RRND | KERNEL_GCR_EL1_EXCL)
|
||||
|
||||
/* RGSR_EL1 Definitions */
|
||||
#define SYS_RGSR_EL1_TAG_MASK 0xfUL
|
||||
#define SYS_RGSR_EL1_SEED_SHIFT 8
|
||||
|
@ -245,9 +245,10 @@ static inline void flush_tlb_all(void)
|
||||
|
||||
static inline void flush_tlb_mm(struct mm_struct *mm)
|
||||
{
|
||||
unsigned long asid = __TLBI_VADDR(0, ASID(mm));
|
||||
unsigned long asid;
|
||||
|
||||
dsb(ishst);
|
||||
asid = __TLBI_VADDR(0, ASID(mm));
|
||||
__tlbi(aside1is, asid);
|
||||
__tlbi_user(aside1is, asid);
|
||||
dsb(ish);
|
||||
@ -256,9 +257,10 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
|
||||
static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
|
||||
unsigned long uaddr)
|
||||
{
|
||||
unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
|
||||
unsigned long addr;
|
||||
|
||||
dsb(ishst);
|
||||
addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
|
||||
__tlbi(vale1is, addr);
|
||||
__tlbi_user(vale1is, addr);
|
||||
}
|
||||
@ -283,9 +285,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
|
||||
{
|
||||
int num = 0;
|
||||
int scale = 0;
|
||||
unsigned long asid = ASID(vma->vm_mm);
|
||||
unsigned long addr;
|
||||
unsigned long pages;
|
||||
unsigned long asid, addr, pages;
|
||||
|
||||
start = round_down(start, stride);
|
||||
end = round_up(end, stride);
|
||||
@ -305,10 +305,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
|
||||
}
|
||||
|
||||
dsb(ishst);
|
||||
asid = ASID(vma->vm_mm);
|
||||
|
||||
/*
|
||||
* When the CPU does not support TLB range operations, flush the TLB
|
||||
* entries one by one at the granularity of 'stride'. If the the TLB
|
||||
* entries one by one at the granularity of 'stride'. If the TLB
|
||||
* range ops are supported, then:
|
||||
*
|
||||
* 1. If 'pages' is odd, flush the first page through non-range
|
||||
|
@ -52,7 +52,7 @@ int main(void)
|
||||
DEFINE(THREAD_KEYS_KERNEL, offsetof(struct task_struct, thread.keys_kernel));
|
||||
#endif
|
||||
#ifdef CONFIG_ARM64_MTE
|
||||
DEFINE(THREAD_GCR_EL1_USER, offsetof(struct task_struct, thread.gcr_user_excl));
|
||||
DEFINE(THREAD_MTE_CTRL, offsetof(struct task_struct, thread.mte_ctrl));
|
||||
#endif
|
||||
BLANK();
|
||||
DEFINE(S_X0, offsetof(struct pt_regs, regs[0]));
|
||||
|
@ -67,6 +67,7 @@
|
||||
#include <linux/crash_dump.h>
|
||||
#include <linux/sort.h>
|
||||
#include <linux/stop_machine.h>
|
||||
#include <linux/sysfs.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/minmax.h>
|
||||
#include <linux/mm.h>
|
||||
@ -1321,6 +1322,31 @@ const struct cpumask *system_32bit_el0_cpumask(void)
|
||||
return cpu_possible_mask;
|
||||
}
|
||||
|
||||
static int __init parse_32bit_el0_param(char *str)
|
||||
{
|
||||
allow_mismatched_32bit_el0 = true;
|
||||
return 0;
|
||||
}
|
||||
early_param("allow_mismatched_32bit_el0", parse_32bit_el0_param);
|
||||
|
||||
static ssize_t aarch32_el0_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
const struct cpumask *mask = system_32bit_el0_cpumask();
|
||||
|
||||
return sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(mask));
|
||||
}
|
||||
static const DEVICE_ATTR_RO(aarch32_el0);
|
||||
|
||||
static int __init aarch32_el0_sysfs_init(void)
|
||||
{
|
||||
if (!allow_mismatched_32bit_el0)
|
||||
return 0;
|
||||
|
||||
return device_create_file(cpu_subsys.dev_root, &dev_attr_aarch32_el0);
|
||||
}
|
||||
device_initcall(aarch32_el0_sysfs_init);
|
||||
|
||||
static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
|
||||
{
|
||||
if (!has_cpuid_feature(entry, scope))
|
||||
@ -1561,8 +1587,6 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
|
||||
|
||||
if (!cpu)
|
||||
arm64_use_ng_mappings = true;
|
||||
|
||||
return;
|
||||
}
|
||||
#else
|
||||
static void
|
||||
@ -1734,7 +1758,7 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
|
||||
u64 val = read_sysreg_s(SYS_CLIDR_EL1);
|
||||
|
||||
/* Check that CLIDR_EL1.LOU{U,IS} are both 0 */
|
||||
WARN_ON(val & (7 << 27 | 7 << 21));
|
||||
WARN_ON(CLIDR_LOUU(val) || CLIDR_LOUIS(val));
|
||||
}
|
||||
|
||||
#ifdef CONFIG_ARM64_PAN
|
||||
@ -1843,6 +1867,9 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
|
||||
#ifdef CONFIG_ARM64_MTE
|
||||
static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
|
||||
{
|
||||
sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ATA | SCTLR_EL1_ATA0);
|
||||
isb();
|
||||
|
||||
/*
|
||||
* Clear the tags in the zero page. This needs to be done via the
|
||||
* linear map which has the Tagged attribute.
|
||||
@ -2901,15 +2928,38 @@ void __init setup_cpu_features(void)
|
||||
|
||||
static int enable_mismatched_32bit_el0(unsigned int cpu)
|
||||
{
|
||||
/*
|
||||
* The first 32-bit-capable CPU we detected and so can no longer
|
||||
* be offlined by userspace. -1 indicates we haven't yet onlined
|
||||
* a 32-bit-capable CPU.
|
||||
*/
|
||||
static int lucky_winner = -1;
|
||||
|
||||
struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu);
|
||||
bool cpu_32bit = id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0);
|
||||
|
||||
if (cpu_32bit) {
|
||||
cpumask_set_cpu(cpu, cpu_32bit_el0_mask);
|
||||
static_branch_enable_cpuslocked(&arm64_mismatched_32bit_el0);
|
||||
setup_elf_hwcaps(compat_elf_hwcaps);
|
||||
}
|
||||
|
||||
if (cpumask_test_cpu(0, cpu_32bit_el0_mask) == cpu_32bit)
|
||||
return 0;
|
||||
|
||||
if (lucky_winner >= 0)
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* We've detected a mismatch. We need to keep one of our CPUs with
|
||||
* 32-bit EL0 online so that is_cpu_allowed() doesn't end up rejecting
|
||||
* every CPU in the system for a 32-bit task.
|
||||
*/
|
||||
lucky_winner = cpu_32bit ? cpu : cpumask_any_and(cpu_32bit_el0_mask,
|
||||
cpu_active_mask);
|
||||
get_cpu_device(lucky_winner)->offline_disabled = true;
|
||||
setup_elf_hwcaps(compat_elf_hwcaps);
|
||||
pr_info("Asymmetric 32-bit EL0 support detected on CPU %u; CPU hot-unplug disabled on CPU %u\n",
|
||||
cpu, lucky_winner);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -26,10 +26,14 @@
|
||||
#include <asm/system_misc.h>
|
||||
|
||||
/*
|
||||
* Handle IRQ/context state management when entering from kernel mode.
|
||||
* Before this function is called it is not safe to call regular kernel code,
|
||||
* intrumentable code, or any code which may trigger an exception.
|
||||
*
|
||||
* This is intended to match the logic in irqentry_enter(), handling the kernel
|
||||
* mode transitions only.
|
||||
*/
|
||||
static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
|
||||
static __always_inline void __enter_from_kernel_mode(struct pt_regs *regs)
|
||||
{
|
||||
regs->exit_rcu = false;
|
||||
|
||||
@ -45,20 +49,26 @@ static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
|
||||
lockdep_hardirqs_off(CALLER_ADDR0);
|
||||
rcu_irq_enter_check_tick();
|
||||
trace_hardirqs_off_finish();
|
||||
}
|
||||
|
||||
static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
|
||||
{
|
||||
__enter_from_kernel_mode(regs);
|
||||
mte_check_tfsr_entry();
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle IRQ/context state management when exiting to kernel mode.
|
||||
* After this function returns it is not safe to call regular kernel code,
|
||||
* intrumentable code, or any code which may trigger an exception.
|
||||
*
|
||||
* This is intended to match the logic in irqentry_exit(), handling the kernel
|
||||
* mode transitions only, and with preemption handled elsewhere.
|
||||
*/
|
||||
static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
|
||||
static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs)
|
||||
{
|
||||
lockdep_assert_irqs_disabled();
|
||||
|
||||
mte_check_tfsr_exit();
|
||||
|
||||
if (interrupts_enabled(regs)) {
|
||||
if (regs->exit_rcu) {
|
||||
trace_hardirqs_on_prepare();
|
||||
@ -75,6 +85,71 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
|
||||
}
|
||||
}
|
||||
|
||||
static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
|
||||
{
|
||||
mte_check_tfsr_exit();
|
||||
__exit_to_kernel_mode(regs);
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle IRQ/context state management when entering from user mode.
|
||||
* Before this function is called it is not safe to call regular kernel code,
|
||||
* intrumentable code, or any code which may trigger an exception.
|
||||
*/
|
||||
static __always_inline void __enter_from_user_mode(void)
|
||||
{
|
||||
lockdep_hardirqs_off(CALLER_ADDR0);
|
||||
CT_WARN_ON(ct_state() != CONTEXT_USER);
|
||||
user_exit_irqoff();
|
||||
trace_hardirqs_off_finish();
|
||||
}
|
||||
|
||||
static __always_inline void enter_from_user_mode(struct pt_regs *regs)
|
||||
{
|
||||
__enter_from_user_mode();
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle IRQ/context state management when exiting to user mode.
|
||||
* After this function returns it is not safe to call regular kernel code,
|
||||
* intrumentable code, or any code which may trigger an exception.
|
||||
*/
|
||||
static __always_inline void __exit_to_user_mode(void)
|
||||
{
|
||||
trace_hardirqs_on_prepare();
|
||||
lockdep_hardirqs_on_prepare(CALLER_ADDR0);
|
||||
user_enter_irqoff();
|
||||
lockdep_hardirqs_on(CALLER_ADDR0);
|
||||
}
|
||||
|
||||
static __always_inline void prepare_exit_to_user_mode(struct pt_regs *regs)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
local_daif_mask();
|
||||
|
||||
flags = READ_ONCE(current_thread_info()->flags);
|
||||
if (unlikely(flags & _TIF_WORK_MASK))
|
||||
do_notify_resume(regs, flags);
|
||||
}
|
||||
|
||||
static __always_inline void exit_to_user_mode(struct pt_regs *regs)
|
||||
{
|
||||
prepare_exit_to_user_mode(regs);
|
||||
mte_check_tfsr_exit();
|
||||
__exit_to_user_mode();
|
||||
}
|
||||
|
||||
asmlinkage void noinstr asm_exit_to_user_mode(struct pt_regs *regs)
|
||||
{
|
||||
exit_to_user_mode(regs);
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle IRQ/context state management when entering an NMI from user/kernel
|
||||
* mode. Before this function is called it is not safe to call regular kernel
|
||||
* code, intrumentable code, or any code which may trigger an exception.
|
||||
*/
|
||||
static void noinstr arm64_enter_nmi(struct pt_regs *regs)
|
||||
{
|
||||
regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
|
||||
@ -88,6 +163,11 @@ static void noinstr arm64_enter_nmi(struct pt_regs *regs)
|
||||
ftrace_nmi_enter();
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle IRQ/context state management when exiting an NMI from user/kernel
|
||||
* mode. After this function returns it is not safe to call regular kernel
|
||||
* code, intrumentable code, or any code which may trigger an exception.
|
||||
*/
|
||||
static void noinstr arm64_exit_nmi(struct pt_regs *regs)
|
||||
{
|
||||
bool restore = regs->lockdep_hardirqs;
|
||||
@ -105,6 +185,40 @@ static void noinstr arm64_exit_nmi(struct pt_regs *regs)
|
||||
__nmi_exit();
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle IRQ/context state management when entering a debug exception from
|
||||
* kernel mode. Before this function is called it is not safe to call regular
|
||||
* kernel code, intrumentable code, or any code which may trigger an exception.
|
||||
*/
|
||||
static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs)
|
||||
{
|
||||
regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
|
||||
|
||||
lockdep_hardirqs_off(CALLER_ADDR0);
|
||||
rcu_nmi_enter();
|
||||
|
||||
trace_hardirqs_off_finish();
|
||||
}
|
||||
|
||||
/*
|
||||
* Handle IRQ/context state management when exiting a debug exception from
|
||||
* kernel mode. After this function returns it is not safe to call regular
|
||||
* kernel code, intrumentable code, or any code which may trigger an exception.
|
||||
*/
|
||||
static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs)
|
||||
{
|
||||
bool restore = regs->lockdep_hardirqs;
|
||||
|
||||
if (restore) {
|
||||
trace_hardirqs_on_prepare();
|
||||
lockdep_hardirqs_on_prepare(CALLER_ADDR0);
|
||||
}
|
||||
|
||||
rcu_nmi_exit();
|
||||
if (restore)
|
||||
lockdep_hardirqs_on(CALLER_ADDR0);
|
||||
}
|
||||
|
||||
static void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
|
||||
{
|
||||
if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
|
||||
@ -265,30 +379,6 @@ static void noinstr el1_undef(struct pt_regs *regs)
|
||||
exit_to_kernel_mode(regs);
|
||||
}
|
||||
|
||||
static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs)
|
||||
{
|
||||
regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
|
||||
|
||||
lockdep_hardirqs_off(CALLER_ADDR0);
|
||||
rcu_nmi_enter();
|
||||
|
||||
trace_hardirqs_off_finish();
|
||||
}
|
||||
|
||||
static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs)
|
||||
{
|
||||
bool restore = regs->lockdep_hardirqs;
|
||||
|
||||
if (restore) {
|
||||
trace_hardirqs_on_prepare();
|
||||
lockdep_hardirqs_on_prepare(CALLER_ADDR0);
|
||||
}
|
||||
|
||||
rcu_nmi_exit();
|
||||
if (restore)
|
||||
lockdep_hardirqs_on(CALLER_ADDR0);
|
||||
}
|
||||
|
||||
static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
|
||||
{
|
||||
unsigned long far = read_sysreg(far_el1);
|
||||
@ -382,31 +472,14 @@ asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
|
||||
arm64_exit_nmi(regs);
|
||||
}
|
||||
|
||||
asmlinkage void noinstr enter_from_user_mode(void)
|
||||
{
|
||||
lockdep_hardirqs_off(CALLER_ADDR0);
|
||||
CT_WARN_ON(ct_state() != CONTEXT_USER);
|
||||
user_exit_irqoff();
|
||||
trace_hardirqs_off_finish();
|
||||
}
|
||||
|
||||
asmlinkage void noinstr exit_to_user_mode(void)
|
||||
{
|
||||
mte_check_tfsr_exit();
|
||||
|
||||
trace_hardirqs_on_prepare();
|
||||
lockdep_hardirqs_on_prepare(CALLER_ADDR0);
|
||||
user_enter_irqoff();
|
||||
lockdep_hardirqs_on(CALLER_ADDR0);
|
||||
}
|
||||
|
||||
static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
|
||||
{
|
||||
unsigned long far = read_sysreg(far_el1);
|
||||
|
||||
enter_from_user_mode();
|
||||
enter_from_user_mode(regs);
|
||||
local_daif_restore(DAIF_PROCCTX);
|
||||
do_mem_abort(far, esr, regs);
|
||||
exit_to_user_mode(regs);
|
||||
}
|
||||
|
||||
static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
|
||||
@ -421,37 +494,42 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
|
||||
if (!is_ttbr0_addr(far))
|
||||
arm64_apply_bp_hardening();
|
||||
|
||||
enter_from_user_mode();
|
||||
enter_from_user_mode(regs);
|
||||
local_daif_restore(DAIF_PROCCTX);
|
||||
do_mem_abort(far, esr, regs);
|
||||
exit_to_user_mode(regs);
|
||||
}
|
||||
|
||||
static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
|
||||
{
|
||||
enter_from_user_mode();
|
||||
enter_from_user_mode(regs);
|
||||
local_daif_restore(DAIF_PROCCTX);
|
||||
do_fpsimd_acc(esr, regs);
|
||||
exit_to_user_mode(regs);
|
||||
}
|
||||
|
||||
static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
|
||||
{
|
||||
enter_from_user_mode();
|
||||
enter_from_user_mode(regs);
|
||||
	local_daif_restore(DAIF_PROCCTX);
	do_sve_acc(esr, regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	do_fpsimd_exc(esr, regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	do_sysinstr(esr, regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
@@ -461,37 +539,42 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
	if (!is_ttbr0_addr(instruction_pointer(regs)))
		arm64_apply_bp_hardening();

	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	do_sp_pc_abort(far, esr, regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	do_sp_pc_abort(regs->sp, esr, regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_undef(struct pt_regs *regs)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	do_undefinstr(regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_bti(struct pt_regs *regs)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	do_bti(regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	bad_el0_sync(regs, 0, esr);
	exit_to_user_mode(regs);
}

static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
@@ -499,23 +582,26 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
	/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
	unsigned long far = read_sysreg(far_el1);

	enter_from_user_mode();
	enter_from_user_mode(regs);
	do_debug_exception(far, esr, regs);
	local_daif_restore(DAIF_PROCCTX);
	exit_to_user_mode(regs);
}

static void noinstr el0_svc(struct pt_regs *regs)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	cortex_a76_erratum_1463225_svc_handler();
	do_el0_svc(regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	do_ptrauth_fault(regs, esr);
	exit_to_user_mode(regs);
}

asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
@@ -574,7 +660,7 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
static void noinstr el0_interrupt(struct pt_regs *regs,
				  void (*handler)(struct pt_regs *))
{
	enter_from_user_mode();
	enter_from_user_mode(regs);

	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);

@@ -582,6 +668,8 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
		arm64_apply_bp_hardening();

	do_interrupt_handler(regs, handler);

	exit_to_user_mode(regs);
}

static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
@@ -608,12 +696,13 @@ static void noinstr __el0_error_handler_common(struct pt_regs *regs)
{
	unsigned long esr = read_sysreg(esr_el1);

	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_ERRCTX);
	arm64_enter_nmi(regs);
	do_serror(regs, esr);
	arm64_exit_nmi(regs);
	local_daif_restore(DAIF_PROCCTX);
	exit_to_user_mode(regs);
}

asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
@@ -624,16 +713,18 @@ asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
#ifdef CONFIG_COMPAT
static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	local_daif_restore(DAIF_PROCCTX);
	do_cp15instr(esr, regs);
	exit_to_user_mode(regs);
}

static void noinstr el0_svc_compat(struct pt_regs *regs)
{
	enter_from_user_mode();
	enter_from_user_mode(regs);
	cortex_a76_erratum_1463225_svc_handler();
	do_el0_svc_compat(regs);
	exit_to_user_mode(regs);
}

asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)

@@ -33,11 +33,24 @@ SYM_FUNC_END(fpsimd_load_state)

#ifdef CONFIG_ARM64_SVE

/*
 * Save the SVE state
 *
 * x0 - pointer to buffer for state
 * x1 - pointer to storage for FPSR
 */
SYM_FUNC_START(sve_save_state)
	sve_save 0, x1, 2
	ret
SYM_FUNC_END(sve_save_state)

/*
 * Load the SVE state
 *
 * x0 - pointer to buffer for state
 * x1 - pointer to storage for FPSR
 * x2 - VQ-1
 */
SYM_FUNC_START(sve_load_state)
	sve_load 0, x1, x2, 3, x4
	ret
@@ -29,16 +29,6 @@
#include <asm/asm-uaccess.h>
#include <asm/unistd.h>

/*
 * Context tracking and irqflag tracing need to instrument transitions between
 * user and kernel mode.
 */
	.macro user_enter_irqoff
#if defined(CONFIG_CONTEXT_TRACKING) || defined(CONFIG_TRACE_IRQFLAGS)
	bl	exit_to_user_mode
#endif
	.endm

	.macro clear_gp_regs
	.irp	n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
	mov	x\n, xzr
@@ -133,42 +123,46 @@ alternative_cb_end
	.endm

	/* Check for MTE asynchronous tag check faults */
	.macro check_mte_async_tcf, tmp, ti_flags
	.macro check_mte_async_tcf, tmp, ti_flags, thread_sctlr
#ifdef CONFIG_ARM64_MTE
	.arch_extension lse
alternative_if_not ARM64_MTE
	b	1f
alternative_else_nop_endif
	/*
	 * Asynchronous tag check faults are only possible in ASYNC (2) or
	 * ASYM (3) modes. In each of these modes bit 1 of SCTLR_EL1.TCF0 is
	 * set, so skip the check if it is unset.
	 */
	tbz	\thread_sctlr, #(SCTLR_EL1_TCF0_SHIFT + 1), 1f
	mrs_s	\tmp, SYS_TFSRE0_EL1
	tbz	\tmp, #SYS_TFSR_EL1_TF0_SHIFT, 1f
	/* Asynchronous TCF occurred for TTBR0 access, set the TI flag */
	mov	\tmp, #_TIF_MTE_ASYNC_FAULT
	add	\ti_flags, tsk, #TSK_TI_FLAGS
	stset	\tmp, [\ti_flags]
	msr_s	SYS_TFSRE0_EL1, xzr
1:
#endif
	.endm

	/* Clear the MTE asynchronous tag check faults */
	.macro clear_mte_async_tcf
	.macro clear_mte_async_tcf thread_sctlr
#ifdef CONFIG_ARM64_MTE
alternative_if ARM64_MTE
	/* See comment in check_mte_async_tcf above. */
	tbz	\thread_sctlr, #(SCTLR_EL1_TCF0_SHIFT + 1), 1f
	dsb	ish
	msr_s	SYS_TFSRE0_EL1, xzr
1:
alternative_else_nop_endif
#endif
	.endm

	.macro mte_set_gcr, tmp, tmp2
	.macro mte_set_gcr, mte_ctrl, tmp
#ifdef CONFIG_ARM64_MTE
	/*
	 * Calculate and set the exclude mask preserving
	 * the RRND (bit[16]) setting.
	 */
	mrs_s	\tmp2, SYS_GCR_EL1
	bfi	\tmp2, \tmp, #0, #16
	msr_s	SYS_GCR_EL1, \tmp2
	ubfx	\tmp, \mte_ctrl, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
	orr	\tmp, \tmp, #SYS_GCR_EL1_RRND
	msr_s	SYS_GCR_EL1, \tmp
#endif
	.endm

@@ -177,10 +171,8 @@ alternative_else_nop_endif
alternative_if_not ARM64_MTE
	b	1f
alternative_else_nop_endif
	ldr_l	\tmp, gcr_kernel_excl

	mte_set_gcr \tmp, \tmp2
	isb
	mov	\tmp, KERNEL_GCR_EL1
	msr_s	SYS_GCR_EL1, \tmp
1:
#endif
	.endm
@ -190,7 +182,7 @@ alternative_else_nop_endif
|
||||
alternative_if_not ARM64_MTE
|
||||
b 1f
|
||||
alternative_else_nop_endif
|
||||
ldr \tmp, [\tsk, #THREAD_GCR_EL1_USER]
|
||||
ldr \tmp, [\tsk, #THREAD_MTE_CTRL]
|
||||
|
||||
mte_set_gcr \tmp, \tmp2
|
||||
1:
|
||||
@ -231,8 +223,8 @@ alternative_else_nop_endif
|
||||
disable_step_tsk x19, x20
|
||||
|
||||
/* Check for asynchronous tag check faults in user space */
|
||||
check_mte_async_tcf x22, x23
|
||||
apply_ssbd 1, x22, x23
|
||||
ldr x0, [tsk, THREAD_SCTLR_USER]
|
||||
check_mte_async_tcf x22, x23, x0
|
||||
|
||||
#ifdef CONFIG_ARM64_PTR_AUTH
|
||||
alternative_if ARM64_HAS_ADDRESS_AUTH
|
||||
@ -245,7 +237,6 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
|
||||
* was disabled on kernel exit then we would have left the kernel IA
|
||||
* installed so there is no need to install it again.
|
||||
*/
|
||||
ldr x0, [tsk, THREAD_SCTLR_USER]
|
||||
tbz x0, SCTLR_ELx_ENIA_SHIFT, 1f
|
||||
__ptrauth_keys_install_kernel_nosync tsk, x20, x22, x23
|
||||
b 2f
|
||||
@ -254,12 +245,26 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
|
||||
orr x0, x0, SCTLR_ELx_ENIA
|
||||
msr sctlr_el1, x0
|
||||
2:
|
||||
isb
|
||||
alternative_else_nop_endif
|
||||
#endif
|
||||
|
||||
apply_ssbd 1, x22, x23
|
||||
|
||||
mte_set_kernel_gcr x22, x23
|
||||
|
||||
/*
|
||||
* Any non-self-synchronizing system register updates required for
|
||||
* kernel entry should be placed before this point.
|
||||
*/
|
||||
alternative_if ARM64_MTE
|
||||
isb
|
||||
b 1f
|
||||
alternative_else_nop_endif
|
||||
alternative_if ARM64_HAS_ADDRESS_AUTH
|
||||
isb
|
||||
alternative_else_nop_endif
|
||||
1:
|
||||
|
||||
scs_load tsk
|
||||
.else
|
||||
add x21, sp, #PT_REGS_SIZE
|
||||
@ -362,6 +367,10 @@ alternative_else_nop_endif
|
||||
3:
|
||||
scs_save tsk
|
||||
|
||||
/* Ignore asynchronous tag check faults in the uaccess routines */
|
||||
ldr x0, [tsk, THREAD_SCTLR_USER]
|
||||
clear_mte_async_tcf x0
|
||||
|
||||
#ifdef CONFIG_ARM64_PTR_AUTH
|
||||
alternative_if ARM64_HAS_ADDRESS_AUTH
|
||||
/*
|
||||
@ -371,7 +380,6 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
|
||||
*
|
||||
* No kernel C function calls after this.
|
||||
*/
|
||||
ldr x0, [tsk, THREAD_SCTLR_USER]
|
||||
tbz x0, SCTLR_ELx_ENIA_SHIFT, 1f
|
||||
__ptrauth_keys_install_user tsk, x0, x1, x2
|
||||
b 2f
|
||||
@ -474,18 +482,6 @@ SYM_CODE_END(__swpan_exit_el0)
|
||||
/* GPRs used by entry code */
|
||||
tsk .req x28 // current thread_info
|
||||
|
||||
/*
|
||||
* Interrupt handling.
|
||||
*/
|
||||
.macro gic_prio_kentry_setup, tmp:req
|
||||
#ifdef CONFIG_ARM64_PSEUDO_NMI
|
||||
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
|
||||
mov \tmp, #(GIC_PRIO_PSR_I_SET | GIC_PRIO_IRQON)
|
||||
msr_s SYS_ICC_PMR_EL1, \tmp
|
||||
alternative_else_nop_endif
|
||||
#endif
|
||||
.endm
|
||||
|
||||
.text
|
||||
|
||||
/*
|
||||
@ -517,12 +513,13 @@ SYM_CODE_START(vectors)
|
||||
SYM_CODE_END(vectors)
|
||||
|
||||
#ifdef CONFIG_VMAP_STACK
|
||||
SYM_CODE_START_LOCAL(__bad_stack)
|
||||
/*
|
||||
* We detected an overflow in kernel_ventry, which switched to the
|
||||
* overflow stack. Stash the exception regs, and head to our overflow
|
||||
* handler.
|
||||
*/
|
||||
__bad_stack:
|
||||
|
||||
/* Restore the original x0 value */
|
||||
mrs x0, tpidrro_el0
|
||||
|
||||
@ -542,6 +539,7 @@ __bad_stack:
|
||||
/* Time to die */
|
||||
bl handle_bad_stack
|
||||
ASM_BUG()
|
||||
SYM_CODE_END(__bad_stack)
|
||||
#endif /* CONFIG_VMAP_STACK */
|
||||
|
||||
|
||||
@ -585,37 +583,13 @@ SYM_CODE_START_LOCAL(ret_to_kernel)
|
||||
kernel_exit 1
|
||||
SYM_CODE_END(ret_to_kernel)
|
||||
|
||||
/*
|
||||
* "slow" syscall return path.
|
||||
*/
|
||||
SYM_CODE_START_LOCAL(ret_to_user)
|
||||
disable_daif
|
||||
gic_prio_kentry_setup tmp=x3
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
bl trace_hardirqs_off
|
||||
#endif
|
||||
ldr x19, [tsk, #TSK_TI_FLAGS]
|
||||
and x2, x19, #_TIF_WORK_MASK
|
||||
cbnz x2, work_pending
|
||||
finish_ret_to_user:
|
||||
user_enter_irqoff
|
||||
/* Ignore asynchronous tag check faults in the uaccess routines */
|
||||
clear_mte_async_tcf
|
||||
ldr x19, [tsk, #TSK_TI_FLAGS] // re-check for single-step
|
||||
enable_step_tsk x19, x2
|
||||
#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
|
||||
bl stackleak_erase
|
||||
#endif
|
||||
kernel_exit 0
|
||||
|
||||
/*
|
||||
* Ok, we need to do extra processing, enter the slow path.
|
||||
*/
|
||||
work_pending:
|
||||
mov x0, sp // 'regs'
|
||||
mov x1, x19
|
||||
bl do_notify_resume
|
||||
ldr x19, [tsk, #TSK_TI_FLAGS] // re-check for single-step
|
||||
b finish_ret_to_user
|
||||
SYM_CODE_END(ret_to_user)
|
||||
|
||||
.popsection // .entry.text
|
||||
@ -781,6 +755,8 @@ SYM_CODE_START(ret_from_fork)
|
||||
mov x0, x20
|
||||
blr x19
|
||||
1: get_current_task tsk
|
||||
mov x0, sp
|
||||
bl asm_exit_to_user_mode
|
||||
b ret_to_user
|
||||
SYM_CODE_END(ret_from_fork)
|
||||
NOKPROBE(ret_from_fork)
|
||||
|
@@ -162,6 +162,8 @@ extern void __percpu *efi_sve_state;
DEFINE_PER_CPU(bool, fpsimd_context_busy);
EXPORT_PER_CPU_SYMBOL(fpsimd_context_busy);

static void fpsimd_bind_task_to_cpu(void);

static void __get_cpu_fpsimd_context(void)
{
	bool busy = __this_cpu_xchg(fpsimd_context_busy, true);
@@ -518,12 +520,6 @@ void sve_alloc(struct task_struct *task)
	/* This is a small allocation (maximum ~8KB) and Should Not Fail. */
	task->thread.sve_state =
		kzalloc(sve_state_size(task), GFP_KERNEL);

	/*
	 * If future SVE revisions can have larger vectors though,
	 * this may cease to be true:
	 */
	BUG_ON(!task->thread.sve_state);
}

@@ -943,6 +939,10 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs)
	}

	sve_alloc(current);
	if (!current->thread.sve_state) {
		force_sig(SIGKILL);
		return;
	}

	get_cpu_fpsimd_context();

@@ -1112,7 +1112,7 @@ void fpsimd_signal_preserve_current_state(void)
 * The caller must have ownership of the cpu FPSIMD context before calling
 * this function.
 */
void fpsimd_bind_task_to_cpu(void)
static void fpsimd_bind_task_to_cpu(void)
{
	struct fpsimd_last_state_struct *last =
		this_cpu_ptr(&fpsimd_last_state);
@@ -177,7 +177,7 @@ SYM_CODE_END(preserve_boot_args)
 * to be composed of multiple pages. (This effectively scales the end index).
 *
 * vstart:	virtual address of start of range
 * vend:	virtual address of end of range
 * vend:	virtual address of end of range - we map [vstart, vend]
 * shift:	shift used to transform virtual address into index
 * ptrs:	number of entries in page table
 * istart:	index in table corresponding to vstart
@@ -214,17 +214,18 @@ SYM_CODE_END(preserve_boot_args)
 *
 * tbl:	location of page table
 * rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
 * vstart:	start address to map
 * vend:	end address to map - we map [vstart, vend]
 * vstart:	virtual address of start of range
 * vend:	virtual address of end of range - we map [vstart, vend - 1]
 * flags:	flags to use to map last level entries
 * phys:	physical address corresponding to vstart - physical memory is contiguous
 * pgds:	the number of pgd entries
 *
 * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
 * Preserves:	vstart, vend, flags
 * Corrupts:	tbl, rtbl, istart, iend, tmp, count, sv
 * Preserves:	vstart, flags
 * Corrupts:	tbl, rtbl, vend, istart, iend, tmp, count, sv
 */
	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
	sub \vend, \vend, #1
	add \rtbl, \tbl, #PAGE_SIZE
	mov \sv, \rtbl
	mov \count, #0
@ -54,6 +54,7 @@ static const struct ftr_set_desc pfr1 __initconst = {
|
||||
.override = &id_aa64pfr1_override,
|
||||
.fields = {
|
||||
{ "bt", ID_AA64PFR1_BT_SHIFT },
|
||||
{ "mte", ID_AA64PFR1_MTE_SHIFT},
|
||||
{}
|
||||
},
|
||||
};
|
||||
@ -100,6 +101,7 @@ static const struct {
|
||||
{ "arm64.nopauth",
|
||||
"id_aa64isar1.gpi=0 id_aa64isar1.gpa=0 "
|
||||
"id_aa64isar1.api=0 id_aa64isar1.apa=0" },
|
||||
{ "arm64.nomte", "id_aa64pfr1.mte=0" },
|
||||
{ "nokaslr", "kaslr.disabled=1" },
|
||||
};
|
||||
|
||||
|
@ -4,6 +4,7 @@
|
||||
*/
|
||||
|
||||
#include <linux/bitops.h>
|
||||
#include <linux/cpu.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/prctl.h>
|
||||
@ -22,9 +23,7 @@
|
||||
#include <asm/ptrace.h>
|
||||
#include <asm/sysreg.h>
|
||||
|
||||
u64 gcr_kernel_excl __ro_after_init;
|
||||
|
||||
static bool report_fault_once = true;
|
||||
static DEFINE_PER_CPU_READ_MOSTLY(u64, mte_tcf_preferred);
|
||||
|
||||
#ifdef CONFIG_KASAN_HW_TAGS
|
||||
/* Whether the MTE asynchronous mode is enabled. */
|
||||
@ -101,26 +100,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
|
||||
return ret;
|
||||
}
|
||||
|
||||
void mte_init_tags(u64 max_tag)
|
||||
{
|
||||
static bool gcr_kernel_excl_initialized;
|
||||
|
||||
if (!gcr_kernel_excl_initialized) {
|
||||
/*
|
||||
* The format of the tags in KASAN is 0xFF and in MTE is 0xF.
|
||||
* This conversion extracts an MTE tag from a KASAN tag.
|
||||
*/
|
||||
u64 incl = GENMASK(FIELD_GET(MTE_TAG_MASK >> MTE_TAG_SHIFT,
|
||||
max_tag), 0);
|
||||
|
||||
gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
|
||||
gcr_kernel_excl_initialized = true;
|
||||
}
|
||||
|
||||
/* Enable the kernel exclude mask for random tags generation. */
|
||||
write_sysreg_s(SYS_GCR_EL1_RRND | gcr_kernel_excl, SYS_GCR_EL1);
|
||||
}
|
||||
|
||||
static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)
|
||||
{
|
||||
/* Enable MTE Sync Mode for EL1. */
|
||||
@ -160,16 +139,6 @@ void mte_enable_kernel_async(void)
|
||||
}
|
||||
#endif
|
||||
|
||||
void mte_set_report_once(bool state)
|
||||
{
|
||||
WRITE_ONCE(report_fault_once, state);
|
||||
}
|
||||
|
||||
bool mte_report_once(void)
|
||||
{
|
||||
return READ_ONCE(report_fault_once);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_KASAN_HW_TAGS
|
||||
void mte_check_tfsr_el1(void)
|
||||
{
|
||||
@@ -193,14 +162,26 @@ void mte_check_tfsr_el1(void)
}
#endif

static void set_gcr_el1_excl(u64 excl)
static void mte_update_sctlr_user(struct task_struct *task)
{
	current->thread.gcr_user_excl = excl;

	/*
	 * SYS_GCR_EL1 will be set to current->thread.gcr_user_excl value
	 * by mte_set_user_gcr() in kernel_exit,
	 * This must be called with preemption disabled and can only be called
	 * on the current or next task since the CPU must match where the thread
	 * is going to run. The caller is responsible for calling
	 * update_sctlr_el1() later in the same preemption disabled block.
	 */
	unsigned long sctlr = task->thread.sctlr_user;
	unsigned long mte_ctrl = task->thread.mte_ctrl;
	unsigned long pref, resolved_mte_tcf;

	pref = __this_cpu_read(mte_tcf_preferred);
	resolved_mte_tcf = (mte_ctrl & pref) ? pref : mte_ctrl;
	sctlr &= ~SCTLR_EL1_TCF0_MASK;
	if (resolved_mte_tcf & MTE_CTRL_TCF_ASYNC)
		sctlr |= SCTLR_EL1_TCF0_ASYNC;
	else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC)
		sctlr |= SCTLR_EL1_TCF0_SYNC;
	task->thread.sctlr_user = sctlr;
}
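The resolution line above (resolved_mte_tcf = (mte_ctrl & pref) ? pref : mte_ctrl;) is the core of the per-CPU preferred-mode handling. As a hedged, stand-alone restatement (illustrative names only, not part of the patch):

/*
 * Illustrative sketch, not kernel code: pick the effective tag check
 * fault mode.  'requested' stands in for the task's MTE_CTRL_TCF_* bits,
 * 'preferred' for this CPU's mte_tcf_preferred value.
 */
static unsigned long resolve_tcf(unsigned long requested, unsigned long preferred)
{
	/* Honour the CPU's preferred mode if the task allows it... */
	if (requested & preferred)
		return preferred;
	/* ...otherwise fall back to whatever the task asked for. */
	return requested;
}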
void mte_thread_init_user(void)
|
||||
@ -212,15 +193,14 @@ void mte_thread_init_user(void)
|
||||
dsb(ish);
|
||||
write_sysreg_s(0, SYS_TFSRE0_EL1);
|
||||
clear_thread_flag(TIF_MTE_ASYNC_FAULT);
|
||||
/* disable tag checking */
|
||||
set_task_sctlr_el1((current->thread.sctlr_user & ~SCTLR_EL1_TCF0_MASK) |
|
||||
SCTLR_EL1_TCF0_NONE);
|
||||
/* reset tag generation mask */
|
||||
set_gcr_el1_excl(SYS_GCR_EL1_EXCL_MASK);
|
||||
/* disable tag checking and reset tag generation mask */
|
||||
set_mte_ctrl(current, 0);
|
||||
}
|
||||
|
||||
void mte_thread_switch(struct task_struct *next)
|
||||
{
|
||||
mte_update_sctlr_user(next);
|
||||
|
||||
/*
|
||||
* Check if an async tag exception occurred at EL1.
|
||||
*
|
||||
@ -248,44 +228,25 @@ void mte_suspend_enter(void)
|
||||
mte_check_tfsr_el1();
|
||||
}
|
||||
|
||||
void mte_suspend_exit(void)
|
||||
{
|
||||
if (!system_supports_mte())
|
||||
return;
|
||||
|
||||
sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, gcr_kernel_excl);
|
||||
isb();
|
||||
}
|
||||
|
||||
long set_mte_ctrl(struct task_struct *task, unsigned long arg)
|
||||
{
|
||||
u64 sctlr = task->thread.sctlr_user & ~SCTLR_EL1_TCF0_MASK;
|
||||
u64 gcr_excl = ~((arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT) &
|
||||
SYS_GCR_EL1_EXCL_MASK;
|
||||
u64 mte_ctrl = (~((arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT) &
|
||||
SYS_GCR_EL1_EXCL_MASK) << MTE_CTRL_GCR_USER_EXCL_SHIFT;
|
||||
|
||||
if (!system_supports_mte())
|
||||
return 0;
|
||||
|
||||
switch (arg & PR_MTE_TCF_MASK) {
|
||||
case PR_MTE_TCF_NONE:
|
||||
sctlr |= SCTLR_EL1_TCF0_NONE;
|
||||
break;
|
||||
case PR_MTE_TCF_SYNC:
|
||||
sctlr |= SCTLR_EL1_TCF0_SYNC;
|
||||
break;
|
||||
case PR_MTE_TCF_ASYNC:
|
||||
sctlr |= SCTLR_EL1_TCF0_ASYNC;
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
if (arg & PR_MTE_TCF_ASYNC)
|
||||
mte_ctrl |= MTE_CTRL_TCF_ASYNC;
|
||||
if (arg & PR_MTE_TCF_SYNC)
|
||||
mte_ctrl |= MTE_CTRL_TCF_SYNC;
|
||||
|
||||
if (task != current) {
|
||||
task->thread.sctlr_user = sctlr;
|
||||
task->thread.gcr_user_excl = gcr_excl;
|
||||
} else {
|
||||
set_task_sctlr_el1(sctlr);
|
||||
set_gcr_el1_excl(gcr_excl);
|
||||
task->thread.mte_ctrl = mte_ctrl;
|
||||
if (task == current) {
|
||||
preempt_disable();
|
||||
mte_update_sctlr_user(task);
|
||||
update_sctlr_el1(task->thread.sctlr_user);
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
return 0;
|
||||
@ -294,24 +255,18 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
|
||||
long get_mte_ctrl(struct task_struct *task)
|
||||
{
|
||||
unsigned long ret;
|
||||
u64 incl = ~task->thread.gcr_user_excl & SYS_GCR_EL1_EXCL_MASK;
|
||||
u64 mte_ctrl = task->thread.mte_ctrl;
|
||||
u64 incl = (~mte_ctrl >> MTE_CTRL_GCR_USER_EXCL_SHIFT) &
|
||||
SYS_GCR_EL1_EXCL_MASK;
|
||||
|
||||
if (!system_supports_mte())
|
||||
return 0;
|
||||
|
||||
ret = incl << PR_MTE_TAG_SHIFT;
|
||||
|
||||
switch (task->thread.sctlr_user & SCTLR_EL1_TCF0_MASK) {
|
||||
case SCTLR_EL1_TCF0_NONE:
|
||||
ret |= PR_MTE_TCF_NONE;
|
||||
break;
|
||||
case SCTLR_EL1_TCF0_SYNC:
|
||||
ret |= PR_MTE_TCF_SYNC;
|
||||
break;
|
||||
case SCTLR_EL1_TCF0_ASYNC:
|
||||
if (mte_ctrl & MTE_CTRL_TCF_ASYNC)
|
||||
ret |= PR_MTE_TCF_ASYNC;
|
||||
break;
|
||||
}
|
||||
if (mte_ctrl & MTE_CTRL_TCF_SYNC)
|
||||
ret |= PR_MTE_TCF_SYNC;
|
||||
|
||||
return ret;
|
||||
}
|
||||
@@ -450,3 +405,54 @@ int mte_ptrace_copy_tags(struct task_struct *child, long request,

	return ret;
}

static ssize_t mte_tcf_preferred_show(struct device *dev,
				      struct device_attribute *attr, char *buf)
{
	switch (per_cpu(mte_tcf_preferred, dev->id)) {
	case MTE_CTRL_TCF_ASYNC:
		return sysfs_emit(buf, "async\n");
	case MTE_CTRL_TCF_SYNC:
		return sysfs_emit(buf, "sync\n");
	default:
		return sysfs_emit(buf, "???\n");
	}
}

static ssize_t mte_tcf_preferred_store(struct device *dev,
				       struct device_attribute *attr,
				       const char *buf, size_t count)
{
	u64 tcf;

	if (sysfs_streq(buf, "async"))
		tcf = MTE_CTRL_TCF_ASYNC;
	else if (sysfs_streq(buf, "sync"))
		tcf = MTE_CTRL_TCF_SYNC;
	else
		return -EINVAL;

	device_lock(dev);
	per_cpu(mte_tcf_preferred, dev->id) = tcf;
	device_unlock(dev);

	return count;
}
static DEVICE_ATTR_RW(mte_tcf_preferred);

static int register_mte_tcf_preferred_sysctl(void)
{
	unsigned int cpu;

	if (!system_supports_mte())
		return 0;

	for_each_possible_cpu(cpu) {
		per_cpu(mte_tcf_preferred, cpu) = MTE_CTRL_TCF_ASYNC;
		device_create_file(get_cpu_device(cpu),
				   &dev_attr_mte_tcf_preferred);
	}

	return 0;
}
subsys_initcall(register_mte_tcf_preferred_sysctl);
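A hedged usage sketch (not part of the patch): the attribute registered above shows up under the CPU devices in sysfs, e.g. /sys/devices/system/cpu/cpu0/mte_tcf_preferred, and a privileged user can write "sync" or "async" to it. Reading it from a small program might look like:

#include <stdio.h>

int main(void)
{
	/* Path assumed from the CPU device sysfs layout; adjust the CPU number as needed. */
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/mte_tcf_preferred", "r");
	char mode[16] = "";

	if (!f)
		return 1;
	if (fscanf(f, "%15s", mode) == 1)
		printf("cpu0 preferred tag check mode: %s\n", mode);
	fclose(f);
	return 0;
}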
@ -1055,7 +1055,7 @@ static void __armv8pmu_probe_pmu(void *info)
|
||||
dfr0 = read_sysreg(id_aa64dfr0_el1);
|
||||
pmuver = cpuid_feature_extract_unsigned_field(dfr0,
|
||||
ID_AA64DFR0_PMUVER_SHIFT);
|
||||
if (pmuver == 0xf || pmuver == 0)
|
||||
if (pmuver == ID_AA64DFR0_PMUVER_IMP_DEF || pmuver == 0)
|
||||
return;
|
||||
|
||||
cpu_pmu->pmuver = pmuver;
|
||||
|
@ -67,7 +67,7 @@ static u64 arg_to_enxx_mask(unsigned long arg)
|
||||
int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
|
||||
unsigned long enabled)
|
||||
{
|
||||
u64 sctlr = tsk->thread.sctlr_user;
|
||||
u64 sctlr;
|
||||
|
||||
if (!system_supports_address_auth())
|
||||
return -EINVAL;
|
||||
@ -78,12 +78,14 @@ int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
|
||||
if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
|
||||
return -EINVAL;
|
||||
|
||||
preempt_disable();
|
||||
sctlr = tsk->thread.sctlr_user;
|
||||
sctlr &= ~arg_to_enxx_mask(keys);
|
||||
sctlr |= arg_to_enxx_mask(enabled);
|
||||
tsk->thread.sctlr_user = sctlr;
|
||||
if (tsk == current)
|
||||
set_task_sctlr_el1(sctlr);
|
||||
else
|
||||
tsk->thread.sctlr_user = sctlr;
|
||||
update_sctlr_el1(sctlr);
|
||||
preempt_enable();
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -21,6 +21,7 @@
|
||||
#include <linux/mman.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/nospec.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/stddef.h>
|
||||
#include <linux/sysctl.h>
|
||||
#include <linux/unistd.h>
|
||||
@ -163,7 +164,7 @@ static void print_pstate(struct pt_regs *regs)
|
||||
u64 pstate = regs->pstate;
|
||||
|
||||
if (compat_user_mode(regs)) {
|
||||
printk("pstate: %08llx (%c%c%c%c %c %s %s %c%c%c)\n",
|
||||
printk("pstate: %08llx (%c%c%c%c %c %s %s %c%c%c %cDIT %cSSBS)\n",
|
||||
pstate,
|
||||
pstate & PSR_AA32_N_BIT ? 'N' : 'n',
|
||||
pstate & PSR_AA32_Z_BIT ? 'Z' : 'z',
|
||||
@ -174,12 +175,14 @@ static void print_pstate(struct pt_regs *regs)
|
||||
pstate & PSR_AA32_E_BIT ? "BE" : "LE",
|
||||
pstate & PSR_AA32_A_BIT ? 'A' : 'a',
|
||||
pstate & PSR_AA32_I_BIT ? 'I' : 'i',
|
||||
pstate & PSR_AA32_F_BIT ? 'F' : 'f');
|
||||
pstate & PSR_AA32_F_BIT ? 'F' : 'f',
|
||||
pstate & PSR_AA32_DIT_BIT ? '+' : '-',
|
||||
pstate & PSR_AA32_SSBS_BIT ? '+' : '-');
|
||||
} else {
|
||||
const char *btype_str = btypes[(pstate & PSR_BTYPE_MASK) >>
|
||||
PSR_BTYPE_SHIFT];
|
||||
|
||||
printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO %cTCO BTYPE=%s)\n",
|
||||
printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO %cTCO %cDIT %cSSBS BTYPE=%s)\n",
|
||||
pstate,
|
||||
pstate & PSR_N_BIT ? 'N' : 'n',
|
||||
pstate & PSR_Z_BIT ? 'Z' : 'z',
|
||||
@ -192,6 +195,8 @@ static void print_pstate(struct pt_regs *regs)
|
||||
pstate & PSR_PAN_BIT ? '+' : '-',
|
||||
pstate & PSR_UAO_BIT ? '+' : '-',
|
||||
pstate & PSR_TCO_BIT ? '+' : '-',
|
||||
pstate & PSR_DIT_BIT ? '+' : '-',
|
||||
pstate & PSR_SSBS_BIT ? '+' : '-',
|
||||
btype_str);
|
||||
}
|
||||
}
|
||||
@ -468,16 +473,13 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
|
||||
write_sysreg(val, cntkctl_el1);
|
||||
}
|
||||
|
||||
static void compat_thread_switch(struct task_struct *next)
|
||||
{
|
||||
if (!is_compat_thread(task_thread_info(next)))
|
||||
return;
|
||||
|
||||
if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
|
||||
set_tsk_thread_flag(next, TIF_NOTIFY_RESUME);
|
||||
}
|
||||
|
||||
static void update_sctlr_el1(u64 sctlr)
|
||||
/*
|
||||
* __switch_to() checks current->thread.sctlr_user as an optimisation. Therefore
|
||||
* this function must be called with preemption disabled and the update to
|
||||
* sctlr_user must be made in the same preemption disabled block so that
|
||||
* __switch_to() does not see the variable update before the SCTLR_EL1 one.
|
||||
*/
|
||||
void update_sctlr_el1(u64 sctlr)
|
||||
{
|
||||
/*
|
||||
* EnIA must not be cleared while in the kernel as this is necessary for
|
||||
@ -489,19 +491,6 @@ static void update_sctlr_el1(u64 sctlr)
|
||||
isb();
|
||||
}
|
||||
|
||||
void set_task_sctlr_el1(u64 sctlr)
|
||||
{
|
||||
/*
|
||||
* __switch_to() checks current->thread.sctlr as an
|
||||
* optimisation. Disable preemption so that it does not see
|
||||
* the variable update before the SCTLR_EL1 one.
|
||||
*/
|
||||
preempt_disable();
|
||||
current->thread.sctlr_user = sctlr;
|
||||
update_sctlr_el1(sctlr);
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
/*
|
||||
* Thread switching.
|
||||
*/
|
||||
@ -518,7 +507,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
|
||||
ssbs_thread_switch(next);
|
||||
erratum_1418040_thread_switch(prev, next);
|
||||
ptrauth_thread_switch_user(next);
|
||||
compat_thread_switch(next);
|
||||
|
||||
/*
|
||||
* Complete any pending TLB or cache maintenance on this CPU in case
|
||||
@ -579,6 +567,28 @@ unsigned long arch_align_stack(unsigned long sp)
|
||||
return sp & ~0xf;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
int compat_elf_check_arch(const struct elf32_hdr *hdr)
|
||||
{
|
||||
if (!system_supports_32bit_el0())
|
||||
return false;
|
||||
|
||||
if ((hdr)->e_machine != EM_ARM)
|
||||
return false;
|
||||
|
||||
if (!((hdr)->e_flags & EF_ARM_EABI_MASK))
|
||||
return false;
|
||||
|
||||
/*
|
||||
* Prevent execve() of a 32-bit program from a deadline task
|
||||
* if the restricted affinity mask would be inadmissible on an
|
||||
* asymmetric system.
|
||||
*/
|
||||
return !static_branch_unlikely(&arm64_mismatched_32bit_el0) ||
|
||||
!dl_task_check_affinity(current, system_32bit_el0_cpumask());
|
||||
}
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Called from setup_new_exec() after (COMPAT_)SET_PERSONALITY.
|
||||
*/
|
||||
@ -588,8 +598,20 @@ void arch_setup_new_exec(void)
|
||||
|
||||
if (is_compat_task()) {
|
||||
mmflags = MMCF_AARCH32;
|
||||
|
||||
/*
|
||||
* Restrict the CPU affinity mask for a 32-bit task so that
|
||||
* it contains only 32-bit-capable CPUs.
|
||||
*
|
||||
* From the perspective of the task, this looks similar to
|
||||
* what would happen if the 64-bit-only CPUs were hot-unplugged
|
||||
* at the point of execve(), although we try a bit harder to
|
||||
* honour the cpuset hierarchy.
|
||||
*/
|
||||
if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
|
||||
set_tsk_thread_flag(current, TIF_NOTIFY_RESUME);
|
||||
force_compatible_cpus_allowed_ptr(current);
|
||||
} else if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) {
|
||||
relax_compatible_cpus_allowed_ptr(current);
|
||||
}
|
||||
|
||||
current->mm->context.flags = mmflags;
|
||||
|
@ -845,6 +845,11 @@ static int sve_set(struct task_struct *target,
|
||||
}
|
||||
|
||||
sve_alloc(target);
|
||||
if (!target->thread.sve_state) {
|
||||
ret = -ENOMEM;
|
||||
clear_tsk_thread_flag(target, TIF_SVE);
|
||||
goto out;
|
||||
}
|
||||
|
||||
/*
|
||||
* Ensure target->thread.sve_state is up to date with target's
|
||||
|
@ -290,6 +290,11 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
|
||||
/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
|
||||
|
||||
sve_alloc(current);
|
||||
if (!current->thread.sve_state) {
|
||||
clear_thread_flag(TIF_SVE);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
err = __copy_from_user(current->thread.sve_state,
|
||||
(char __user const *)user->sve +
|
||||
SVE_SIG_REGS_OFFSET,
|
||||
@ -912,21 +917,7 @@ static void do_signal(struct pt_regs *regs)
|
||||
restore_saved_sigmask();
|
||||
}
|
||||
|
||||
static bool cpu_affinity_invalid(struct pt_regs *regs)
|
||||
{
|
||||
if (!compat_user_mode(regs))
|
||||
return false;
|
||||
|
||||
/*
|
||||
* We're preemptible, but a reschedule will cause us to check the
|
||||
* affinity again.
|
||||
*/
|
||||
return !cpumask_test_cpu(raw_smp_processor_id(),
|
||||
system_32bit_el0_cpumask());
|
||||
}
|
||||
|
||||
asmlinkage void do_notify_resume(struct pt_regs *regs,
|
||||
unsigned long thread_flags)
|
||||
void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
|
||||
{
|
||||
do {
|
||||
if (thread_flags & _TIF_NEED_RESCHED) {
|
||||
@ -952,19 +943,6 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
|
||||
if (thread_flags & _TIF_NOTIFY_RESUME) {
|
||||
tracehook_notify_resume(regs);
|
||||
rseq_handle_notify_resume(NULL, regs);
|
||||
|
||||
/*
|
||||
* If we reschedule after checking the affinity
|
||||
* then we must ensure that TIF_NOTIFY_RESUME
|
||||
* is set so that we check the affinity again.
|
||||
* Since tracehook_notify_resume() clears the
|
||||
* flag, ensure that the compiler doesn't move
|
||||
* it after the affinity check.
|
||||
*/
|
||||
barrier();
|
||||
|
||||
if (cpu_affinity_invalid(regs))
|
||||
force_sig(SIGKILL);
|
||||
}
|
||||
|
||||
if (thread_flags & _TIF_FOREIGN_FPSTATE)
|
||||
|
@ -46,8 +46,6 @@ struct compat_aux_sigframe {
|
||||
unsigned long end_magic;
|
||||
} __attribute__((__aligned__(8)));
|
||||
|
||||
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
|
||||
|
||||
static inline int put_sigset_t(compat_sigset_t __user *uset, sigset_t *set)
|
||||
{
|
||||
compat_sigset_t cset;
|
||||
@ -190,10 +188,8 @@ static int compat_restore_sigframe(struct pt_regs *regs,
|
||||
unsigned long psr;
|
||||
|
||||
err = get_sigset_t(&set, &sf->uc.uc_sigmask);
|
||||
if (err == 0) {
|
||||
sigdelsetmask(&set, ~_BLOCKABLE);
|
||||
if (err == 0)
|
||||
set_current_blocked(&set);
|
||||
}
|
||||
|
||||
__get_user_error(regs->regs[0], &sf->uc.uc_mcontext.arm_r0, err);
|
||||
__get_user_error(regs->regs[1], &sf->uc.uc_mcontext.arm_r1, err);
|
||||
|
@ -76,7 +76,6 @@ void notrace __cpu_suspend_exit(void)
|
||||
spectre_v4_enable_mitigation(NULL);
|
||||
|
||||
/* Restore additional feature-specific configuration */
|
||||
mte_suspend_exit();
|
||||
ptrauth_suspend_exit();
|
||||
}
|
||||
|
||||
|
@ -185,7 +185,7 @@ u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn)
|
||||
break;
|
||||
default:
|
||||
if (aarch64_get_imm_shift_mask(type, &mask, &shift) < 0) {
|
||||
pr_err("aarch64_insn_decode_immediate: unknown immediate encoding %d\n",
|
||||
pr_err("%s: unknown immediate encoding %d\n", __func__,
|
||||
type);
|
||||
return 0;
|
||||
}
|
||||
@ -215,7 +215,7 @@ u32 __kprobes aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
|
||||
break;
|
||||
default:
|
||||
if (aarch64_get_imm_shift_mask(type, &mask, &shift) < 0) {
|
||||
pr_err("aarch64_insn_encode_immediate: unknown immediate encoding %d\n",
|
||||
pr_err("%s: unknown immediate encoding %d\n", __func__,
|
||||
type);
|
||||
return AARCH64_BREAK_FAULT;
|
||||
}
|
||||
|
@ -309,24 +309,11 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
|
||||
static void report_tag_fault(unsigned long addr, unsigned int esr,
|
||||
struct pt_regs *regs)
|
||||
{
|
||||
static bool reported;
|
||||
bool is_write;
|
||||
|
||||
if (READ_ONCE(reported))
|
||||
return;
|
||||
|
||||
/*
|
||||
* This is used for KASAN tests and assumes that no MTE faults
|
||||
* happened before running the tests.
|
||||
*/
|
||||
if (mte_report_once())
|
||||
WRITE_ONCE(reported, true);
|
||||
|
||||
/*
|
||||
* SAS bits aren't set for all faults reported in EL1, so we can't
|
||||
* find out access size.
|
||||
*/
|
||||
is_write = !!(esr & ESR_ELx_WNR);
|
||||
bool is_write = !!(esr & ESR_ELx_WNR);
|
||||
kasan_report(addr, 0, is_write, regs->pc);
|
||||
}
|
||||
#else
|
||||
|
@ -437,8 +437,7 @@ SYM_FUNC_START(__cpu_setup)
|
||||
mov x10, #MAIR_ATTR_NORMAL_TAGGED
|
||||
bfi mair, x10, #(8 * MT_NORMAL_TAGGED), #8
|
||||
|
||||
/* initialize GCR_EL1: all non-zero tags excluded by default */
|
||||
mov x10, #(SYS_GCR_EL1_RRND | SYS_GCR_EL1_EXCL_MASK)
|
||||
mov x10, #KERNEL_GCR_EL1
|
||||
msr_s SYS_GCR_EL1, x10
|
||||
|
||||
/*
|
||||
|
include/linux/kasan-tags.h (new file, 15 lines)
@@ -0,0 +1,15 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef _LINUX_KASAN_TAGS_H
|
||||
#define _LINUX_KASAN_TAGS_H
|
||||
|
||||
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
|
||||
#define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */
|
||||
#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */
|
||||
|
||||
#ifdef CONFIG_KASAN_HW_TAGS
|
||||
#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
|
||||
#else
|
||||
#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
|
||||
#endif
|
||||
|
||||
#endif /* LINUX_KASAN_TAGS_H */
|
@@ -235,14 +235,15 @@ struct prctl_mm_map {
#define PR_GET_TAGGED_ADDR_CTRL		56
# define PR_TAGGED_ADDR_ENABLE		(1UL << 0)
/* MTE tag check fault modes */
# define PR_MTE_TCF_SHIFT		1
# define PR_MTE_TCF_NONE		(0UL << PR_MTE_TCF_SHIFT)
# define PR_MTE_TCF_SYNC		(1UL << PR_MTE_TCF_SHIFT)
# define PR_MTE_TCF_ASYNC		(2UL << PR_MTE_TCF_SHIFT)
# define PR_MTE_TCF_MASK		(3UL << PR_MTE_TCF_SHIFT)
# define PR_MTE_TCF_NONE		0
# define PR_MTE_TCF_SYNC		(1UL << 1)
# define PR_MTE_TCF_ASYNC		(1UL << 2)
# define PR_MTE_TCF_MASK		(PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC)
/* MTE tag inclusion mask */
# define PR_MTE_TAG_SHIFT		3
# define PR_MTE_TAG_MASK		(0xffffUL << PR_MTE_TAG_SHIFT)
/* Unused; kept only for source compatibility */
# define PR_MTE_TCF_SHIFT		1

/* Control reclaim behavior when allocating memory */
#define PR_SET_IO_FLUSHER		57

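With SYNC and ASYNC now independent bits, a task can request both and let the kernel resolve the effective mode per CPU. A hedged userspace sketch (illustrative only; it assumes a uapi prctl.h new enough to carry the MTE definitions, and the tag inclusion mask value is an arbitrary example):

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	/* Allow both tag check modes; the kernel picks the per-CPU preferred one. */
	unsigned long ctrl = PR_TAGGED_ADDR_ENABLE |
			     PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC |
			     (0xfffeUL << PR_MTE_TAG_SHIFT);	/* example inclusion mask */

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0))
		perror("PR_SET_TAGGED_ADDR_CTRL");

	/* Both TCF bits may be reported back by PR_GET_TAGGED_ADDR_CTRL. */
	printf("tagged addr ctrl: 0x%x\n",
	       (unsigned int)prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0));
	return 0;
}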
@ -53,7 +53,6 @@ static int kasan_test_init(struct kunit *test)
|
||||
}
|
||||
|
||||
multishot = kasan_save_enable_multi_shot();
|
||||
kasan_set_tagging_report_once(false);
|
||||
fail_data.report_found = false;
|
||||
kunit_add_named_resource(test, NULL, NULL, &resource,
|
||||
"kasan_data", &fail_data);
|
||||
@ -62,7 +61,6 @@ static int kasan_test_init(struct kunit *test)
|
||||
|
||||
static void kasan_test_exit(struct kunit *test)
|
||||
{
|
||||
kasan_set_tagging_report_once(true);
|
||||
kasan_restore_multi_shot(multishot);
|
||||
KUNIT_EXPECT_FALSE(test, fail_data.report_found);
|
||||
}
|
||||
|
@ -142,8 +142,6 @@ void kasan_init_hw_tags_cpu(void)
|
||||
if (kasan_arg == KASAN_ARG_OFF)
|
||||
return;
|
||||
|
||||
hw_init_tags(KASAN_TAG_MAX);
|
||||
|
||||
/*
|
||||
* Enable async mode only when explicitly requested through
|
||||
* the command line.
|
||||
@ -250,12 +248,6 @@ void kasan_free_pages(struct page *page, unsigned int order)
|
||||
|
||||
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
|
||||
|
||||
void kasan_set_tagging_report_once(bool state)
|
||||
{
|
||||
hw_set_tagging_report_once(state);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(kasan_set_tagging_report_once);
|
||||
|
||||
void kasan_enable_tagging_sync(void)
|
||||
{
|
||||
hw_enable_tagging_sync();
|
||||
|
@ -3,6 +3,7 @@
|
||||
#define __MM_KASAN_KASAN_H
|
||||
|
||||
#include <linux/kasan.h>
|
||||
#include <linux/kasan-tags.h>
|
||||
#include <linux/kfence.h>
|
||||
#include <linux/stackdepot.h>
|
||||
|
||||
@ -51,16 +52,6 @@ extern bool kasan_flag_async __ro_after_init;
|
||||
|
||||
#define KASAN_MEMORY_PER_SHADOW_PAGE (KASAN_GRANULE_SIZE << PAGE_SHIFT)
|
||||
|
||||
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
|
||||
#define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */
|
||||
#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */
|
||||
|
||||
#ifdef CONFIG_KASAN_HW_TAGS
|
||||
#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
|
||||
#else
|
||||
#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_KASAN_GENERIC
|
||||
#define KASAN_FREE_PAGE 0xFF /* page was freed */
|
||||
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
|
||||
@ -299,12 +290,6 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
|
||||
#ifndef arch_enable_tagging_async
|
||||
#define arch_enable_tagging_async()
|
||||
#endif
|
||||
#ifndef arch_init_tags
|
||||
#define arch_init_tags(max_tag)
|
||||
#endif
|
||||
#ifndef arch_set_tagging_report_once
|
||||
#define arch_set_tagging_report_once(state)
|
||||
#endif
|
||||
#ifndef arch_force_async_tag_fault
|
||||
#define arch_force_async_tag_fault()
|
||||
#endif
|
||||
@ -320,8 +305,6 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
|
||||
|
||||
#define hw_enable_tagging_sync() arch_enable_tagging_sync()
|
||||
#define hw_enable_tagging_async() arch_enable_tagging_async()
|
||||
#define hw_init_tags(max_tag) arch_init_tags(max_tag)
|
||||
#define hw_set_tagging_report_once(state) arch_set_tagging_report_once(state)
|
||||
#define hw_force_async_tag_fault() arch_force_async_tag_fault()
|
||||
#define hw_get_random_tag() arch_get_random_tag()
|
||||
#define hw_get_mem_tag(addr) arch_get_mem_tag(addr)
|
||||
@ -332,19 +315,16 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
|
||||
|
||||
#define hw_enable_tagging_sync()
|
||||
#define hw_enable_tagging_async()
|
||||
#define hw_set_tagging_report_once(state)
|
||||
|
||||
#endif /* CONFIG_KASAN_HW_TAGS */
|
||||
|
||||
#if defined(CONFIG_KASAN_HW_TAGS) && IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
|
||||
|
||||
void kasan_set_tagging_report_once(bool state);
|
||||
void kasan_enable_tagging_sync(void);
|
||||
void kasan_force_async_fault(void);
|
||||
|
||||
#else /* CONFIG_KASAN_HW_TAGS || CONFIG_KASAN_KUNIT_TEST */
|
||||
|
||||
static inline void kasan_set_tagging_report_once(bool state) { }
|
||||
static inline void kasan_enable_tagging_sync(void) { }
|
||||
static inline void kasan_force_async_fault(void) { }
|
||||
|
||||
|
tools/testing/selftests/arm64/fp/.gitignore
@@ -1,5 +1,7 @@
|
||||
fpsimd-test
|
||||
rdvl-sve
|
||||
sve-probe-vls
|
||||
sve-ptrace
|
||||
sve-test
|
||||
vec-syscfg
|
||||
vlset
|
||||
|
@ -1,17 +1,22 @@
|
||||
# SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
CFLAGS += -I../../../../../usr/include/
|
||||
TEST_GEN_PROGS := sve-ptrace sve-probe-vls
|
||||
TEST_PROGS_EXTENDED := fpsimd-test fpsimd-stress sve-test sve-stress vlset
|
||||
TEST_GEN_PROGS := sve-ptrace sve-probe-vls vec-syscfg
|
||||
TEST_PROGS_EXTENDED := fpsimd-test fpsimd-stress \
|
||||
rdvl-sve \
|
||||
sve-test sve-stress \
|
||||
vlset
|
||||
|
||||
all: $(TEST_GEN_PROGS) $(TEST_PROGS_EXTENDED)
|
||||
|
||||
fpsimd-test: fpsimd-test.o
|
||||
$(CC) -nostdlib $^ -o $@
|
||||
rdvl-sve: rdvl-sve.o rdvl.o
|
||||
sve-ptrace: sve-ptrace.o sve-ptrace-asm.o
|
||||
sve-probe-vls: sve-probe-vls.o
|
||||
sve-probe-vls: sve-probe-vls.o rdvl.o
|
||||
sve-test: sve-test.o
|
||||
$(CC) -nostdlib $^ -o $@
|
||||
vec-syscfg: vec-syscfg.o rdvl.o
|
||||
vlset: vlset.o
|
||||
|
||||
include ../../lib.mk
|
||||
|
tools/testing/selftests/arm64/fp/TODO (new file, 4 lines)
@@ -0,0 +1,4 @@
|
||||
- Test unsupported values in the ABIs.
|
||||
- More coverage for ptrace (eg, vector length conversions).
|
||||
- Coverage for signals.
|
||||
- Test PR_SVE_VL_INHERIT after a double fork.
|
tools/testing/selftests/arm64/fp/rdvl-sve.c (new file, 14 lines)
@@ -0,0 +1,14 @@
|
||||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
#include "rdvl.h"
|
||||
|
||||
int main(void)
|
||||
{
|
||||
int vl = rdvl_sve();
|
||||
|
||||
printf("%d\n", vl);
|
||||
|
||||
return 0;
|
||||
}
|
tools/testing/selftests/arm64/fp/rdvl.S (new file, 10 lines)
@@ -0,0 +1,10 @@
|
||||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
// Copyright (C) 2021 ARM Limited.
|
||||
|
||||
.arch_extension sve
|
||||
|
||||
.globl rdvl_sve
|
||||
rdvl_sve:
|
||||
hint 34 // BTI C
|
||||
rdvl x0, #1
|
||||
ret
|
tools/testing/selftests/arm64/fp/rdvl.h (new file, 8 lines)
@@ -0,0 +1,8 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0-only */
|
||||
|
||||
#ifndef RDVL_H
|
||||
#define RDVL_H
|
||||
|
||||
int rdvl_sve(void);
|
||||
|
||||
#endif
|
@ -13,6 +13,7 @@
|
||||
#include <asm/sigcontext.h>
|
||||
|
||||
#include "../../kselftest.h"
|
||||
#include "rdvl.h"
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
@ -38,6 +39,10 @@ int main(int argc, char **argv)
|
||||
|
||||
vl &= PR_SVE_VL_LEN_MASK;
|
||||
|
||||
if (rdvl_sve() != vl)
|
||||
ksft_exit_fail_msg("PR_SVE_SET_VL reports %d, RDVL %d\n",
|
||||
vl, rdvl_sve());
|
||||
|
||||
if (!sve_vl_valid(vl))
|
||||
ksft_exit_fail_msg("VL %d invalid\n", vl);
|
||||
vq = sve_vq_from_vl(vl);
|
||||
|
tools/testing/selftests/arm64/fp/vec-syscfg.c (new file, 593 lines)
@@ -0,0 +1,593 @@
|
||||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* Copyright (C) 2021 ARM Limited.
|
||||
* Original author: Mark Brown <broonie@kernel.org>
|
||||
*/
|
||||
#include <assert.h>
|
||||
#include <errno.h>
|
||||
#include <fcntl.h>
|
||||
#include <stddef.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include <unistd.h>
|
||||
#include <sys/auxv.h>
|
||||
#include <sys/prctl.h>
|
||||
#include <sys/types.h>
|
||||
#include <sys/wait.h>
|
||||
#include <asm/sigcontext.h>
|
||||
#include <asm/hwcap.h>
|
||||
|
||||
#include "../../kselftest.h"
|
||||
#include "rdvl.h"
|
||||
|
||||
#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
|
||||
|
||||
#define ARCH_MIN_VL SVE_VL_MIN
|
||||
|
||||
struct vec_data {
|
||||
const char *name;
|
||||
unsigned long hwcap_type;
|
||||
unsigned long hwcap;
|
||||
const char *rdvl_binary;
|
||||
int (*rdvl)(void);
|
||||
|
||||
int prctl_get;
|
||||
int prctl_set;
|
||||
const char *default_vl_file;
|
||||
|
||||
int default_vl;
|
||||
int min_vl;
|
||||
int max_vl;
|
||||
};
|
||||
|
||||
|
||||
static struct vec_data vec_data[] = {
|
||||
{
|
||||
.name = "SVE",
|
||||
.hwcap_type = AT_HWCAP,
|
||||
.hwcap = HWCAP_SVE,
|
||||
.rdvl = rdvl_sve,
|
||||
.rdvl_binary = "./rdvl-sve",
|
||||
.prctl_get = PR_SVE_GET_VL,
|
||||
.prctl_set = PR_SVE_SET_VL,
|
||||
.default_vl_file = "/proc/sys/abi/sve_default_vector_length",
|
||||
},
|
||||
};
|
||||
|
||||
static int stdio_read_integer(FILE *f, const char *what, int *val)
|
||||
{
|
||||
int n = 0;
|
||||
int ret;
|
||||
|
||||
ret = fscanf(f, "%d%*1[\n]%n", val, &n);
|
||||
if (ret < 1 || n < 1) {
|
||||
ksft_print_msg("failed to parse integer from %s\n", what);
|
||||
return -1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Start a new process and return the vector length it sees */
|
||||
static int get_child_rdvl(struct vec_data *data)
|
||||
{
|
||||
FILE *out;
|
||||
int pipefd[2];
|
||||
pid_t pid, child;
|
||||
int read_vl, ret;
|
||||
|
||||
ret = pipe(pipefd);
|
||||
if (ret == -1) {
|
||||
ksft_print_msg("pipe() failed: %d (%s)\n",
|
||||
errno, strerror(errno));
|
||||
return -1;
|
||||
}
|
||||
|
||||
fflush(stdout);
|
||||
|
||||
child = fork();
|
||||
if (child == -1) {
|
||||
ksft_print_msg("fork() failed: %d (%s)\n",
|
||||
errno, strerror(errno));
|
||||
close(pipefd[0]);
|
||||
close(pipefd[1]);
|
||||
return -1;
|
||||
}
|
||||
|
||||
/* Child: put vector length on the pipe */
|
||||
if (child == 0) {
|
||||
/*
|
||||
* Replace stdout with the pipe, errors to stderr from
|
||||
* here as kselftest prints to stdout.
|
||||
*/
|
||||
ret = dup2(pipefd[1], 1);
|
||||
if (ret == -1) {
|
||||
fprintf(stderr, "dup2() %d\n", errno);
|
||||
exit(EXIT_FAILURE);
|
||||
}
|
||||
|
||||
/* exec() a new binary which puts the VL on stdout */
|
||||
ret = execl(data->rdvl_binary, data->rdvl_binary, NULL);
|
||||
		fprintf(stderr, "execl(%s) failed: %d (%s)\n",
			data->rdvl_binary, errno, strerror(errno));
|
||||
|
||||
exit(EXIT_FAILURE);
|
||||
}
|
||||
|
||||
close(pipefd[1]);
|
||||
|
||||
/* Parent; wait for the exit status from the child & verify it */
|
||||
do {
|
||||
pid = wait(&ret);
|
||||
if (pid == -1) {
|
||||
ksft_print_msg("wait() failed: %d (%s)\n",
|
||||
errno, strerror(errno));
|
||||
close(pipefd[0]);
|
||||
return -1;
|
||||
}
|
||||
} while (pid != child);
|
||||
|
||||
assert(pid == child);
|
||||
|
||||
if (!WIFEXITED(ret)) {
|
||||
ksft_print_msg("child exited abnormally\n");
|
||||
close(pipefd[0]);
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (WEXITSTATUS(ret) != 0) {
|
||||
ksft_print_msg("child returned error %d\n",
|
||||
WEXITSTATUS(ret));
|
||||
close(pipefd[0]);
|
||||
return -1;
|
||||
}
|
||||
|
||||
out = fdopen(pipefd[0], "r");
|
||||
if (!out) {
|
||||
ksft_print_msg("failed to open child stdout\n");
|
||||
close(pipefd[0]);
|
||||
return -1;
|
||||
}
|
||||
|
||||
ret = stdio_read_integer(out, "child", &read_vl);
|
||||
fclose(out);
|
||||
if (ret != 0)
|
||||
return ret;
|
||||
|
||||
return read_vl;
|
||||
}
|
||||
|
||||
static int file_read_integer(const char *name, int *val)
|
||||
{
|
||||
FILE *f;
|
||||
int ret;
|
||||
|
||||
f = fopen(name, "r");
|
||||
if (!f) {
|
||||
ksft_test_result_fail("Unable to open %s: %d (%s)\n",
|
||||
name, errno,
|
||||
strerror(errno));
|
||||
return -1;
|
||||
}
|
||||
|
||||
ret = stdio_read_integer(f, name, val);
|
||||
fclose(f);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int file_write_integer(const char *name, int val)
|
||||
{
|
||||
FILE *f;
|
||||
int ret;
|
||||
|
||||
f = fopen(name, "w");
|
||||
if (!f) {
|
||||
ksft_test_result_fail("Unable to open %s: %d (%s)\n",
|
||||
name, errno,
|
||||
strerror(errno));
|
||||
return -1;
|
||||
}
|
||||
|
||||
	ret = fprintf(f, "%d", val);
	fclose(f);
	if (ret < 0) {
|
||||
ksft_test_result_fail("Error writing %d to %s\n",
|
||||
val, name);
|
||||
return -1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Verify that we can read the default VL via proc, checking that it
|
||||
* is set in a freshly spawned child.
|
||||
*/
|
||||
static void proc_read_default(struct vec_data *data)
|
||||
{
|
||||
int default_vl, child_vl, ret;
|
||||
|
||||
ret = file_read_integer(data->default_vl_file, &default_vl);
|
||||
if (ret != 0)
|
||||
return;
|
||||
|
||||
/* Is this the actual default seen by new processes? */
|
||||
child_vl = get_child_rdvl(data);
|
||||
if (child_vl != default_vl) {
|
||||
ksft_test_result_fail("%s is %d but child VL is %d\n",
|
||||
data->default_vl_file,
|
||||
default_vl, child_vl);
|
||||
return;
|
||||
}
|
||||
|
||||
ksft_test_result_pass("%s default vector length %d\n", data->name,
|
||||
default_vl);
|
||||
data->default_vl = default_vl;
|
||||
}
|
||||
|
||||
/* Verify that we can write a minimum value and have it take effect */
|
||||
static void proc_write_min(struct vec_data *data)
|
||||
{
|
||||
int ret, new_default, child_vl;
|
||||
|
||||
if (geteuid() != 0) {
|
||||
ksft_test_result_skip("Need to be root to write to /proc\n");
|
||||
return;
|
||||
}
|
||||
|
||||
ret = file_write_integer(data->default_vl_file, ARCH_MIN_VL);
|
||||
if (ret != 0)
|
||||
return;
|
||||
|
||||
/* What was the new value? */
|
||||
ret = file_read_integer(data->default_vl_file, &new_default);
|
||||
if (ret != 0)
|
||||
return;
|
||||
|
||||
/* Did it take effect in a new process? */
|
||||
child_vl = get_child_rdvl(data);
|
||||
if (child_vl != new_default) {
|
||||
ksft_test_result_fail("%s is %d but child VL is %d\n",
|
||||
data->default_vl_file,
|
||||
new_default, child_vl);
|
||||
return;
|
||||
}
|
||||
|
||||
ksft_test_result_pass("%s minimum vector length %d\n", data->name,
|
||||
new_default);
|
||||
data->min_vl = new_default;
|
||||
|
||||
file_write_integer(data->default_vl_file, data->default_vl);
|
||||
}
|
||||
|
||||
/* Verify that we can write a maximum value and have it take effect */
|
||||
static void proc_write_max(struct vec_data *data)
|
||||
{
|
||||
int ret, new_default, child_vl;
|
||||
|
||||
if (geteuid() != 0) {
|
||||
ksft_test_result_skip("Need to be root to write to /proc\n");
|
||||
return;
|
||||
}
|
||||
|
||||
/* -1 is accepted by the /proc interface as the maximum VL */
|
||||
ret = file_write_integer(data->default_vl_file, -1);
|
||||
if (ret != 0)
|
||||
return;
|
||||
|
||||
/* What was the new value? */
|
||||
ret = file_read_integer(data->default_vl_file, &new_default);
|
||||
if (ret != 0)
|
||||
return;
|
||||
|
||||
/* Did it take effect in a new process? */
|
||||
child_vl = get_child_rdvl(data);
|
||||
if (child_vl != new_default) {
|
||||
ksft_test_result_fail("%s is %d but child VL is %d\n",
|
||||
data->default_vl_file,
|
||||
new_default, child_vl);
|
||||
return;
|
||||
}
|
||||
|
||||
ksft_test_result_pass("%s maximum vector length %d\n", data->name,
|
||||
new_default);
|
||||
data->max_vl = new_default;
|
||||
|
||||
file_write_integer(data->default_vl_file, data->default_vl);
|
||||
}
|
||||
|
||||
/* Can we read back a VL from prctl? */
|
||||
static void prctl_get(struct vec_data *data)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = prctl(data->prctl_get);
|
||||
if (ret == -1) {
|
||||
ksft_test_result_fail("%s prctl() read failed: %d (%s)\n",
|
||||
data->name, errno, strerror(errno));
|
||||
return;
|
||||
}
|
||||
|
||||
/* Mask out any flags */
|
||||
ret &= PR_SVE_VL_LEN_MASK;
|
||||
|
||||
/* Is that what we can read back directly? */
|
||||
if (ret == data->rdvl())
|
||||
ksft_test_result_pass("%s current VL is %d\n",
|
||||
data->name, ret);
|
||||
else
|
||||
ksft_test_result_fail("%s prctl() VL %d but RDVL is %d\n",
|
||||
data->name, ret, data->rdvl());
|
||||
}
|
||||
|
||||
/* Does the prctl let us set the VL we already have? */
|
||||
static void prctl_set_same(struct vec_data *data)
|
||||
{
|
||||
int cur_vl = data->rdvl();
|
||||
int ret;
|
||||
|
||||
ret = prctl(data->prctl_set, cur_vl);
|
||||
if (ret < 0) {
|
||||
ksft_test_result_fail("%s prctl set failed: %d (%s)\n",
|
||||
data->name, errno, strerror(errno));
|
||||
return;
|
||||
}
|
||||
|
||||
	if (cur_vl == data->rdvl())
|
||||
ksft_test_result_pass("%s current VL is %d\n",
|
||||
data->name, ret);
|
||||
else
|
||||
ksft_test_result_fail("%s prctl() VL %d but RDVL is %d\n",
|
||||
data->name, ret, data->rdvl());
|
||||
}
|
||||
|
||||
/* Can we set a new VL for this process? */
|
||||
static void prctl_set(struct vec_data *data)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (data->min_vl == data->max_vl) {
|
||||
ksft_test_result_skip("%s only one VL supported\n",
|
||||
data->name);
|
||||
return;
|
||||
}
|
||||
|
||||
/* Try to set the minimum VL */
|
||||
ret = prctl(data->prctl_set, data->min_vl);
|
||||
if (ret < 0) {
|
||||
ksft_test_result_fail("%s prctl set failed for %d: %d (%s)\n",
|
||||
data->name, data->min_vl,
|
||||
errno, strerror(errno));
|
||||
return;
|
||||
}
|
||||
|
||||
if ((ret & PR_SVE_VL_LEN_MASK) != data->min_vl) {
|
||||
ksft_test_result_fail("%s prctl set %d but return value is %d\n",
|
||||
data->name, data->min_vl, data->rdvl());
|
||||
return;
|
||||
}
|
||||
|
||||
if (data->rdvl() != data->min_vl) {
|
||||
ksft_test_result_fail("%s set %d but RDVL is %d\n",
|
||||
data->name, data->min_vl, data->rdvl());
|
||||
return;
|
||||
}
|
||||
|
||||
/* Try to set the maximum VL */
|
||||
ret = prctl(data->prctl_set, data->max_vl);
|
||||
if (ret < 0) {
|
||||
ksft_test_result_fail("%s prctl set failed for %d: %d (%s)\n",
|
||||
data->name, data->max_vl,
|
||||
errno, strerror(errno));
|
||||
return;
|
||||
}
|
||||
|
||||
if ((ret & PR_SVE_VL_LEN_MASK) != data->max_vl) {
|
||||
ksft_test_result_fail("%s prctl() set %d but return value is %d\n",
|
||||
data->name, data->max_vl, data->rdvl());
|
||||
return;
|
||||
}
|
||||
|
||||
/* The _INHERIT flag should not be present when we read the VL */
|
||||
ret = prctl(data->prctl_get);
|
||||
if (ret == -1) {
|
||||
ksft_test_result_fail("%s prctl() read failed: %d (%s)\n",
|
||||
data->name, errno, strerror(errno));
|
||||
return;
|
||||
}
|
||||
|
||||
if (ret & PR_SVE_VL_INHERIT) {
|
||||
ksft_test_result_fail("%s prctl() reports _INHERIT\n",
|
||||
data->name);
|
||||
return;
|
||||
}
|
||||
|
||||
ksft_test_result_pass("%s prctl() set min/max\n", data->name);
|
||||
}
|
||||
|
||||
/* If we didn't request it a new VL shouldn't affect the child */
|
||||
static void prctl_set_no_child(struct vec_data *data)
|
||||
{
|
||||
int ret, child_vl;
|
||||
|
||||
if (data->min_vl == data->max_vl) {
|
||||
ksft_test_result_skip("%s only one VL supported\n",
|
||||
data->name);
|
||||
return;
|
||||
}
|
||||
|
||||
ret = prctl(data->prctl_set, data->min_vl);
|
||||
if (ret < 0) {
|
||||
ksft_test_result_fail("%s prctl set failed for %d: %d (%s)\n",
|
||||
data->name, data->min_vl,
|
||||
errno, strerror(errno));
|
||||
return;
|
||||
}
|
||||
|
||||
/* Ensure the default VL is different */
|
||||
ret = file_write_integer(data->default_vl_file, data->max_vl);
|
||||
if (ret != 0)
|
||||
return;
|
||||
|
||||
/* Check that the child has the default we just set */
|
||||
child_vl = get_child_rdvl(data);
|
||||
if (child_vl != data->max_vl) {
|
||||
ksft_test_result_fail("%s is %d but child VL is %d\n",
|
||||
data->default_vl_file,
|
||||
data->max_vl, child_vl);
|
||||
return;
|
||||
}
|
||||
|
||||
ksft_test_result_pass("%s vector length used default\n", data->name);
|
||||
|
||||
file_write_integer(data->default_vl_file, data->default_vl);
|
||||
}
|
||||
|
||||
/* If we didn't request it a new VL shouldn't affect the child */
|
||||
static void prctl_set_for_child(struct vec_data *data)
|
||||
{
|
||||
int ret, child_vl;
|
||||
|
||||
if (data->min_vl == data->max_vl) {
|
||||
ksft_test_result_skip("%s only one VL supported\n",
|
||||
data->name);
|
||||
return;
|
||||
}
|
||||
|
||||
ret = prctl(data->prctl_set, data->min_vl | PR_SVE_VL_INHERIT);
|
||||
if (ret < 0) {
|
||||
ksft_test_result_fail("%s prctl set failed for %d: %d (%s)\n",
|
||||
data->name, data->min_vl,
|
||||
errno, strerror(errno));
|
||||
return;
|
||||
}
|
||||
|
||||
/* The _INHERIT flag should be present when we read the VL */
|
||||
ret = prctl(data->prctl_get);
|
||||
if (ret == -1) {
|
||||
ksft_test_result_fail("%s prctl() read failed: %d (%s)\n",
|
||||
data->name, errno, strerror(errno));
|
||||
return;
|
||||
}
|
||||
if (!(ret & PR_SVE_VL_INHERIT)) {
|
||||
ksft_test_result_fail("%s prctl() does not report _INHERIT\n",
|
||||
data->name);
|
||||
return;
|
||||
}
|
||||
|
||||
/* Ensure the default VL is different */
|
||||
ret = file_write_integer(data->default_vl_file, data->max_vl);
|
||||
if (ret != 0)
|
||||
return;
|
||||
|
||||
/* Check that the child inherited our VL */
|
||||
child_vl = get_child_rdvl(data);
|
||||
if (child_vl != data->min_vl) {
|
||||
ksft_test_result_fail("%s is %d but child VL is %d\n",
|
||||
data->default_vl_file,
|
||||
data->min_vl, child_vl);
|
||||
return;
|
||||
}
|
||||
|
||||
ksft_test_result_pass("%s vector length was inherited\n", data->name);
|
||||
|
||||
file_write_integer(data->default_vl_file, data->default_vl);
|
||||
}
|
||||
|
/* _ONEXEC takes effect only in the child process */
static void prctl_set_onexec(struct vec_data *data)
{
        int ret, child_vl;

        if (data->min_vl == data->max_vl) {
                ksft_test_result_skip("%s only one VL supported\n",
                                      data->name);
                return;
        }

        /* Set a known value for the default and our current VL */
        ret = file_write_integer(data->default_vl_file, data->max_vl);
        if (ret != 0)
                return;

        ret = prctl(data->prctl_set, data->max_vl);
        if (ret < 0) {
                ksft_test_result_fail("%s prctl set failed for %d: %d (%s)\n",
                                      data->name, data->max_vl,
                                      errno, strerror(errno));
                return;
        }

        /* Set a different value for the child to have on exec */
        ret = prctl(data->prctl_set, data->min_vl | PR_SVE_SET_VL_ONEXEC);
        if (ret < 0) {
                ksft_test_result_fail("%s prctl set failed for %d: %d (%s)\n",
                                      data->name, data->min_vl,
                                      errno, strerror(errno));
                return;
        }

        /* Our current VL should stay the same */
        if (data->rdvl() != data->max_vl) {
                ksft_test_result_fail("%s VL changed by _ONEXEC prctl()\n",
                                      data->name);
                return;
        }

        /* Check that the child used the _ONEXEC VL */
        child_vl = get_child_rdvl(data);
        if (child_vl != data->min_vl) {
                ksft_test_result_fail("Set %d _ONEXEC but child VL is %d\n",
                                      data->min_vl, child_vl);
                return;
        }

        ksft_test_result_pass("%s vector length set on exec\n", data->name);

        file_write_integer(data->default_vl_file, data->default_vl);
}

typedef void (*test_type)(struct vec_data *);

static const test_type tests[] = {
        /*
         * The default/min/max tests must be first and in this order
         * to provide data for other tests.
         */
        proc_read_default,
        proc_write_min,
        proc_write_max,

        prctl_get,
        prctl_set,
        prctl_set_no_child,
        prctl_set_for_child,
        prctl_set_onexec,
};

int main(void)
{
        int i, j;

        ksft_print_header();
        ksft_set_plan(ARRAY_SIZE(tests) * ARRAY_SIZE(vec_data));

        for (i = 0; i < ARRAY_SIZE(vec_data); i++) {
                struct vec_data *data = &vec_data[i];
                unsigned long supported;

                supported = getauxval(data->hwcap_type) & data->hwcap;

                for (j = 0; j < ARRAY_SIZE(tests); j++) {
                        if (supported)
                                tests[j](data);
                        else
                                ksft_test_result_skip("%s not supported\n",
                                                      data->name);
                }
        }

        ksft_exit_pass();
}
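[Editorial note] The tests above dereference a struct vec_data descriptor and call a get_child_rdvl() helper that are defined earlier in vec-syscfg.c, outside this excerpt. A minimal sketch of what the tests assume, reconstructed purely from the field and call usage visible above; the names and comments are illustrative, not the authoritative definition:

/*
 * Illustrative sketch only -- inferred from how the tests use the
 * descriptor; the real definition lives earlier in vec-syscfg.c.
 */
struct vec_data {
        const char *name;               /* vector type name used in test output */
        unsigned long hwcap_type;       /* getauxval() key, e.g. AT_HWCAP */
        unsigned long hwcap;            /* HWCAP bit advertising the extension */
        int (*rdvl)(void);              /* read the current VL in this process */
        int prctl_get;                  /* e.g. PR_SVE_GET_VL */
        int prctl_set;                  /* e.g. PR_SVE_SET_VL */
        const char *default_vl_file;    /* procfs file holding the default VL */
        int default_vl, min_vl, max_vl; /* filled in by the first three tests */
};

/*
 * get_child_rdvl(data) is assumed to fork()/exec() a helper binary and
 * return the vector length observed in the child.
 */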
1 tools/testing/selftests/arm64/mte/.gitignore vendored
@@ -1,4 +1,5 @@
 check_buffer_fill
+check_gcr_el1_cswitch
 check_tags_inclusion
 check_child_memory
 check_mmap_options
@@ -298,7 +298,7 @@ int mte_default_setup(void)
         int ret;
 
         if (!(hwcaps2 & HWCAP2_MTE)) {
-                ksft_print_msg("FAIL: MTE features unavailable\n");
+                ksft_print_msg("SKIP: MTE features unavailable\n");
                 return KSFT_SKIP;
         }
         /* Get current mte mode */
@@ -25,13 +25,15 @@
 do { \
         unsigned long hwcaps = getauxval(AT_HWCAP); \
         /* data key instructions are not in NOP space. This prevents a SIGILL */ \
-        ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
+        if (!(hwcaps & HWCAP_PACA)) \
+                SKIP(return, "PAUTH not enabled"); \
 } while (0)
 #define ASSERT_GENERIC_PAUTH_ENABLED() \
 do { \
         unsigned long hwcaps = getauxval(AT_HWCAP); \
         /* generic key instructions are not in NOP space. This prevents a SIGILL */ \
-        ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
+        if (!(hwcaps & HWCAP_PACG)) \
+                SKIP(return, "Generic PAUTH not enabled"); \
 } while (0)
 
 void sign_specific(struct signatures *sign, size_t val)
@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
         unsigned long hwcaps = getauxval(AT_HWCAP);
 
         /* generic and data key instructions are not in NOP space. This prevents a SIGILL */
-        ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+        ASSERT_PAUTH_ENABLED();
         if (!(hwcaps & HWCAP_PACG)) {
                 TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
                 nkeys = NKEYS - 1;
@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
         unsigned long hwcaps = getauxval(AT_HWCAP);
 
         /* generic and data key instructions are not in NOP space. This prevents a SIGILL */
-        ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+        ASSERT_PAUTH_ENABLED();
         if (!(hwcaps & HWCAP_PACG)) {
                 TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
                 nkeys = NKEYS - 1;
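[Editorial note] With the conversion above, tests that require pointer authentication are reported as skipped rather than failed on hardware without the feature. A hedged usage sketch of the updated macro inside a kselftest-harness test; the test body is hypothetical and not part of the patch:

/* Illustration only: on a CPU without HWCAP_PACA this test is skipped. */
TEST(example_pac_dependent)
{
        ASSERT_PAUTH_ENABLED();  /* expands to SKIP(return, ...) when PAC is absent */

        /* ... exercise PAC signing/authentication here ... */
}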
@@ -1,4 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
 mangle_*
 fake_sigreturn_*
+sve_*
 !*.[ch]
@@ -33,10 +33,12 @@
  */
 enum {
         FSSBS_BIT,
+        FSVE_BIT,
         FMAX_END
 };
 
 #define FEAT_SSBS (1UL << FSSBS_BIT)
+#define FEAT_SVE (1UL << FSVE_BIT)
 
 /*
  * A descriptor used to describe and configure a test case.
@@ -26,6 +26,7 @@ static int sig_copyctx = SIGTRAP;
 
 static char const *const feats_names[FMAX_END] = {
         " SSBS ",
+        " SVE ",
 };
 
 #define MAX_FEATS_SZ 128
@@ -263,6 +264,8 @@ int test_init(struct tdescr *td)
          */
         if (getauxval(AT_HWCAP) & HWCAP_SSBS)
                 td->feats_supported |= FEAT_SSBS;
+        if (getauxval(AT_HWCAP) & HWCAP_SVE)
+                td->feats_supported |= FEAT_SVE;
         if (feats_ok(td))
                 fprintf(stderr,
                         "Required Features: [%s] supported\n",
2 tools/testing/selftests/arm64/signal/testcases/TODO Normal file
@@ -0,0 +1,2 @@
- Validate that register contents are saved and restored as expected.
- Support and validate extra_context.
92 tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c Normal file
@@ -0,0 +1,92 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2021 ARM Limited
 *
 * Attempt to change the SVE vector length in a signal handler; this is
 * not supported and is expected to segfault.
 */

#include <signal.h>
#include <ucontext.h>
#include <sys/prctl.h>

#include "test_signals_utils.h"
#include "testcases.h"

struct fake_sigframe sf;
static unsigned int vls[SVE_VQ_MAX];
unsigned int nvls = 0;

static bool sve_get_vls(struct tdescr *td)
{
        int vq, vl;

        /*
         * Enumerate up to SVE_VQ_MAX vector lengths
         */
        for (vq = SVE_VQ_MAX; vq > 0; --vq) {
                vl = prctl(PR_SVE_SET_VL, vq * 16);
                if (vl == -1)
                        return false;

                vl &= PR_SVE_VL_LEN_MASK;

                /* Skip missing VLs */
                vq = sve_vq_from_vl(vl);

                vls[nvls++] = vl;
        }

        /* We need at least two VLs */
        if (nvls < 2) {
                fprintf(stderr, "Only %d VL supported\n", nvls);
                return false;
        }

        return true;
}

static int fake_sigreturn_sve_change_vl(struct tdescr *td,
                                        siginfo_t *si, ucontext_t *uc)
{
        size_t resv_sz, offset;
        struct _aarch64_ctx *head = GET_SF_RESV_HEAD(sf);
        struct sve_context *sve;

        /* Get a signal context with a SVE frame in it */
        if (!get_current_context(td, &sf.uc))
                return 1;

        resv_sz = GET_SF_RESV_SIZE(sf);
        head = get_header(head, SVE_MAGIC, resv_sz, &offset);
        if (!head) {
                fprintf(stderr, "No SVE context\n");
                return 1;
        }

        if (head->size != sizeof(struct sve_context)) {
                fprintf(stderr, "SVE register state active, skipping\n");
                return 1;
        }

        sve = (struct sve_context *)head;

        /* No changes are supported; init left us at minimum VL so go to max */
        fprintf(stderr, "Attempting to change VL from %d to %d\n",
                sve->vl, vls[0]);
        sve->vl = vls[0];

        fake_sigreturn(&sf, sizeof(sf), 0);

        return 1;
}

struct tdescr tde = {
        .name = "FAKE_SIGRETURN_SVE_CHANGE",
        .descr = "Attempt to change SVE VL",
        .feats_required = FEAT_SVE,
        .sig_ok = SIGSEGV,
        .timeout = 3,
        .init = sve_get_vls,
        .run = fake_sigreturn_sve_change_vl,
};
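[Editorial note] The enumeration in sve_get_vls() above works because PR_SVE_SET_VL returns the vector length actually granted, i.e. the largest supported VL not exceeding the request, so resetting vq from the returned value lets the countdown skip unsupported sizes; and because the loop counts down, vls[0] holds the maximum VL while the final prctl() leaves the task at the minimum, which the "go to max" comment relies on. A small illustrative sketch of the VL/VQ arithmetic; the constant and helper below are assumptions meant to mirror the uapi sigcontext.h definitions, not part of the patch:

/* Illustration only: a VL is a byte count, a VQ counts 128-bit quadwords. */
#define EXAMPLE_SVE_VQ_BYTES    16      /* assumed to match SVE_VQ_BYTES */

static inline unsigned int example_vq_from_vl(unsigned int vl)
{
        return vl / EXAMPLE_SVE_VQ_BYTES;       /* mirrors sve_vq_from_vl() */
}

/*
 * Example: requesting vq = 8 (VL 128) on hardware whose largest supported
 * VL is 64 yields vl = 64, so the loop continues from vq = 4 rather than
 * probing the unsupported sizes in between.
 */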
126 tools/testing/selftests/arm64/signal/testcases/sve_regs.c Normal file
@@ -0,0 +1,126 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2021 ARM Limited
 *
 * Verify that the SVE register context in signal frames is set up as
 * expected.
 */

#include <signal.h>
#include <ucontext.h>
#include <sys/prctl.h>

#include "test_signals_utils.h"
#include "testcases.h"

struct fake_sigframe sf;
static unsigned int vls[SVE_VQ_MAX];
unsigned int nvls = 0;

static bool sve_get_vls(struct tdescr *td)
{
        int vq, vl;

        /*
         * Enumerate up to SVE_VQ_MAX vector lengths
         */
        for (vq = SVE_VQ_MAX; vq > 0; --vq) {
                vl = prctl(PR_SVE_SET_VL, vq * 16);
                if (vl == -1)
                        return false;

                vl &= PR_SVE_VL_LEN_MASK;

                /* Skip missing VLs */
                vq = sve_vq_from_vl(vl);

                vls[nvls++] = vl;
        }

        /* We need at least one VL */
        if (nvls < 1) {
                fprintf(stderr, "Only %d VL supported\n", nvls);
                return false;
        }

        return true;
}

static void setup_sve_regs(void)
{
        /* RDVL x16, #1 so we should have SVE regs; real data is TODO */
        asm volatile(".inst 0x04bf5030" : : : "x16");
}

static int do_one_sve_vl(struct tdescr *td, siginfo_t *si, ucontext_t *uc,
                         unsigned int vl)
{
        size_t resv_sz, offset;
        struct _aarch64_ctx *head = GET_SF_RESV_HEAD(sf);
        struct sve_context *sve;

        fprintf(stderr, "Testing VL %d\n", vl);

        if (prctl(PR_SVE_SET_VL, vl) == -1) {
                fprintf(stderr, "Failed to set VL\n");
                return 1;
        }

        /*
         * Get a signal context which should have a SVE frame and registers
         * in it.
         */
        setup_sve_regs();
        if (!get_current_context(td, &sf.uc))
                return 1;

        resv_sz = GET_SF_RESV_SIZE(sf);
        head = get_header(head, SVE_MAGIC, resv_sz, &offset);
        if (!head) {
                fprintf(stderr, "No SVE context\n");
                return 1;
        }

        sve = (struct sve_context *)head;
        if (sve->vl != vl) {
                fprintf(stderr, "Got VL %d, expected %d\n", sve->vl, vl);
                return 1;
        }

        /* The actual size validation is done in get_current_context() */
        fprintf(stderr, "Got expected size %u and VL %d\n",
                head->size, sve->vl);

        return 0;
}

static int sve_regs(struct tdescr *td, siginfo_t *si, ucontext_t *uc)
{
        int i;

        for (i = 0; i < nvls; i++) {
                /*
                 * TODO: the signal test helpers can't currently cope
                 * with signal frames bigger than struct sigcontext,
                 * skip VLs that will trigger that.
                 */
                if (vls[i] > 64)
                        continue;

                if (do_one_sve_vl(td, si, uc, vls[i]))
                        return 1;
        }

        td->pass = 1;

        return 0;
}

struct tdescr tde = {
        .name = "SVE registers",
        .descr = "Check that we get the right SVE registers reported",
        .feats_required = FEAT_SVE,
        .timeout = 3,
        .init = sve_get_vls,
        .run = sve_regs,
};
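[Editorial note] setup_sve_regs() above emits the raw opcode 0x04bf5030 rather than the RDVL mnemonic, presumably so the file assembles even with toolchains that lack SVE support. As an illustration of how that encoding is formed; the helper is hypothetical and not part of the patch:

/* Illustration only: build an RDVL <Xd>, #<imm6> opcode. */
static inline unsigned int rdvl_insn(unsigned int xd, int imm6)
{
        /* base 0x04bf5000, imm6 in bits [10:5], Rd in bits [4:0] */
        return 0x04bf5000u | ((unsigned int)(imm6 & 0x3f) << 5) | (xd & 0x1f);
}

/* rdvl_insn(16, 1) == 0x04bf5030, i.e. RDVL X16, #1 as used above. */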
68 tools/testing/selftests/arm64/signal/testcases/sve_vl.c Normal file
@@ -0,0 +1,68 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2021 ARM Limited
 *
 * Check that the SVE vector length reported in signal contexts is the
 * expected one.
 */

#include <signal.h>
#include <ucontext.h>
#include <sys/prctl.h>

#include "test_signals_utils.h"
#include "testcases.h"

struct fake_sigframe sf;
unsigned int vl;

static bool get_sve_vl(struct tdescr *td)
{
        int ret = prctl(PR_SVE_GET_VL);
        if (ret == -1)
                return false;

        vl = ret;

        return true;
}

static int sve_vl(struct tdescr *td, siginfo_t *si, ucontext_t *uc)
{
        size_t resv_sz, offset;
        struct _aarch64_ctx *head = GET_SF_RESV_HEAD(sf);
        struct sve_context *sve;

        /* Get a signal context which should have a SVE frame in it */
        if (!get_current_context(td, &sf.uc))
                return 1;

        resv_sz = GET_SF_RESV_SIZE(sf);
        head = get_header(head, SVE_MAGIC, resv_sz, &offset);
        if (!head) {
                fprintf(stderr, "No SVE context\n");
                return 1;
        }
        sve = (struct sve_context *)head;

        if (sve->vl != vl) {
                fprintf(stderr, "sigframe VL %u, expected %u\n",
                        sve->vl, vl);
                return 1;
        } else {
                fprintf(stderr, "got expected VL %u\n", vl);
        }

        td->pass = 1;

        return 0;
}

struct tdescr tde = {
        .name = "SVE VL",
        .descr = "Check that we get the right SVE VL reported",
        .feats_required = FEAT_SVE,
        .timeout = 3,
        .init = get_sve_vl,
        .run = sve_vl,
};
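[Editorial note] In get_sve_vl() above, PR_SVE_GET_VL returns the current vector length in its low bits together with any configuration flags (such as PR_SVE_VL_INHERIT) in higher bits. No flags are set when this test runs, so the raw return value is usable directly; a caller that might have flags set would mask the result, roughly as sketched below (illustrative only, not part of the patch):

/* Illustration only: read the current SVE VL, discarding flag bits. */
static int read_current_vl(void)
{
        int ret = prctl(PR_SVE_GET_VL);

        if (ret < 0)
                return -1;

        return ret & PR_SVE_VL_LEN_MASK;        /* flags live above the length bits */
}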
@@ -50,12 +50,38 @@ bool validate_extra_context(struct extra_context *extra, char **err)
         return true;
 }
 
+bool validate_sve_context(struct sve_context *sve, char **err)
+{
+        /* Size will be rounded up to a multiple of 16 bytes */
+        size_t regs_size
+                = ((SVE_SIG_CONTEXT_SIZE(sve_vq_from_vl(sve->vl)) + 15) / 16) * 16;
+
+        if (!sve || !err)
+                return false;
+
+        /* Either a bare sve_context or a sve_context followed by regs data */
+        if ((sve->head.size != sizeof(struct sve_context)) &&
+            (sve->head.size != regs_size)) {
+                *err = "bad size for SVE context";
+                return false;
+        }
+
+        if (!sve_vl_valid(sve->vl)) {
+                *err = "SVE VL invalid";
+
+                return false;
+        }
+
+        return true;
+}
+
 bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err)
 {
         bool terminated = false;
         size_t offs = 0;
         int flags = 0;
         struct extra_context *extra = NULL;
+        struct sve_context *sve = NULL;
         struct _aarch64_ctx *head =
                 (struct _aarch64_ctx *)uc->uc_mcontext.__reserved;
 
@@ -90,9 +116,8 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err)
                 case SVE_MAGIC:
                         if (flags & SVE_CTX)
                                 *err = "Multiple SVE_MAGIC";
-                        else if (head->size !=
-                                 sizeof(struct sve_context))
-                                *err = "Bad size for sve_context";
+                        /* Size is validated in validate_sve_context() */
+                        sve = (struct sve_context *)head;
                         flags |= SVE_CTX;
                         break;
                 case EXTRA_MAGIC:
@@ -137,6 +162,9 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err)
                 if (flags & EXTRA_CTX)
                         if (!validate_extra_context(extra, err))
                                 return false;
+                if (flags & SVE_CTX)
+                        if (!validate_sve_context(sve, err))
+                                return false;
 
                 head = GET_RESV_NEXT_HEAD(head);
         }
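[Editorial note] The size check added in validate_sve_context() accepts either a bare struct sve_context (no register payload) or the full record size rounded up to a multiple of 16 bytes, since the records placed in the signal frame's __reserved area are sized in 16-byte units. The rounding expression it uses is the usual integer idiom; a tiny worked sketch, purely illustrative:

#include <assert.h>
#include <stddef.h>

/* Illustration only: ((x + 15) / 16) * 16 rounds x up to a multiple of 16. */
static size_t round_up_16(size_t x)
{
        return ((x + 15) / 16) * 16;    /* equivalently: (x + 15) & ~(size_t)15 */
}

int main(void)
{
        assert(round_up_16(1) == 16);
        assert(round_up_16(16) == 16);
        assert(round_up_16(17) == 32);
        return 0;
}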