Merge branches 'for-next/acpi', 'for-next/boot', 'for-next/bpf', 'for-next/cpuinfo', 'for-next/fpsimd', 'for-next/misc', 'for-next/mm', 'for-next/pci', 'for-next/perf', 'for-next/ptrauth', 'for-next/sdei', 'for-next/selftests', 'for-next/stacktrace', 'for-next/svm', 'for-next/topology', 'for-next/tpyos' and 'for-next/vdso' into for-next/core
Remove unused functions and parameters from ACPI IORT code.
(Zenghui Yu via Lorenzo Pieralisi)
* for-next/acpi:
ACPI/IORT: Remove the unused inline functions
ACPI/IORT: Drop the unused @ops of iort_add_device_replay()
Remove redundant code and fix documentation of caching behaviour for the
HVC_SOFT_RESTART hypercall.
(Pingfan Liu)
* for-next/boot:
Documentation/kvm/arm: improve description of HVC_SOFT_RESTART
arm64/relocate_kernel: remove redundant code
Improve reporting of unexpected kernel traps due to BPF JIT failure.
(Will Deacon)
* for-next/bpf:
arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM
Improve robustness of user-visible HWCAP strings and their corresponding
numerical constants.
(Anshuman Khandual)
* for-next/cpuinfo:
arm64/cpuinfo: Define HWCAP name arrays per their actual bit definitions
Cleanups to handling of SVE and FPSIMD register state in preparation
for potential future optimisation of handling across syscalls.
(Julien Grall)
* for-next/fpsimd:
arm64/sve: Implement a helper to load SVE registers from FPSIMD state
arm64/sve: Implement a helper to flush SVE registers
arm64/fpsimdmacros: Allow the macro "for" to be used in more cases
arm64/fpsimdmacros: Introduce a macro to update ZCR_EL1.LEN
arm64/signal: Update the comment in preserve_sve_context
arm64/fpsimd: Update documentation of do_sve_acc
Miscellaneous changes.
(Tian Tao and others)
* for-next/misc:
arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
arm64: mm: Fix missing-prototypes in pageattr.c
arm64/fpsimd: Fix missing-prototypes in fpsimd.c
arm64: hibernate: Remove unused including <linux/version.h>
arm64/mm: Refactor {pgd, pud, pmd, pte}_ERROR()
arm64: Remove the unused include statements
arm64: get rid of TEXT_OFFSET
arm64: traps: Add str of description to panic() in die()
Memory management updates and cleanups.
(Anshuman Khandual and others)
* for-next/mm:
arm64: dbm: Invalidate local TLB when setting TCR_EL1.HD
arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op
arm64/mm: Unify CONT_PMD_SHIFT
arm64/mm: Unify CONT_PTE_SHIFT
arm64/mm: Remove CONT_RANGE_OFFSET
arm64/mm: Enable THP migration
arm64/mm: Change THP helpers to comply with generic MM semantics
arm64/mm/ptdump: Add address markers for BPF regions
Allow prefetchable PCI BARs to be exposed to userspace using normal
non-cacheable mappings.
(Clint Sbisa)
* for-next/pci:
arm64: Enable PCI write-combine resources under sysfs
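
As a usage sketch, a prefetchable BAR can now be mapped write-combined
through the resourceN_wc file that appears in the device's sysfs
directory (the device address below is a placeholder, not a real
device; minimal error handling):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* placeholder device/BAR path; substitute a real one */
		int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0_wc",
			      O_RDWR);
		void *p;

		if (fd < 0 || (p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
					MAP_SHARED, fd, 0)) == MAP_FAILED) {
			perror("resource0_wc");
			return 1;
		}
		/* p now maps the BAR with write-combine attributes */
		return 0;
	}
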
Perf/PMU driver updates.
(Julien Thierry and others)
* for-next/perf:
perf: arm-cmn: Fix conversion specifiers for node type
perf: arm-cmn: Fix unsigned comparison to less than zero
arm_pmu: arm64: Use NMIs for PMU
arm_pmu: Introduce pmu_irq_ops
KVM: arm64: pmu: Make overflow handler NMI safe
arm64: perf: Defer irq_work to IPI_IRQ_WORK
arm64: perf: Remove PMU locking
arm64: perf: Avoid PMXEV* indirection
arm64: perf: Add missing ISB in armv8pmu_enable_counter()
perf: Add Arm CMN-600 PMU driver
perf: Add Arm CMN-600 DT binding
arm64: perf: Add support caps under sysfs
drivers/perf: thunderx2_pmu: Fix memory resource error handling
drivers/perf: xgene_pmu: Fix uninitialized resource struct
perf: arm_dsu: Support DSU ACPI devices
arm64: perf: Remove unnecessary event_idx check
drivers/perf: hisi: Add missing include of linux/module.h
arm64: perf: Add general hardware LLC events for PMUv3
Support for the Armv8.3 Pointer Authentication enhancements.
(Amit Daniel Kachhap)
* for-next/ptrauth:
arm64: kprobe: clarify the comment of steppable hint instructions
arm64: kprobe: disable probe of fault prone ptrauth instruction
arm64: cpufeature: Modify address authentication cpufeature to exact
arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements
arm64: traps: Allow force_signal_inject to pass esr error code
arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
Tonnes of cleanup to the SDEI driver.
(Gavin Shan)
* for-next/sdei:
firmware: arm_sdei: Remove _sdei_event_unregister()
firmware: arm_sdei: Remove _sdei_event_register()
firmware: arm_sdei: Introduce sdei_do_local_call()
firmware: arm_sdei: Cleanup on cross call function
firmware: arm_sdei: Remove while loop in sdei_event_unregister()
firmware: arm_sdei: Remove while loop in sdei_event_register()
firmware: arm_sdei: Remove redundant error message in sdei_probe()
firmware: arm_sdei: Remove duplicate check in sdei_get_conduit()
firmware: arm_sdei: Unregister driver on error in sdei_init()
firmware: arm_sdei: Avoid nested statements in sdei_init()
firmware: arm_sdei: Retrieve event number from event instance
firmware: arm_sdei: Common block for failing path in sdei_event_create()
firmware: arm_sdei: Remove sdei_is_err()
Selftests for Pointer Authentication and FPSIMD/SVE context-switching.
(Mark Brown and Boyan Karatotev)
* for-next/selftests:
selftests: arm64: Add build and documentation for FP tests
selftests: arm64: Add wrapper scripts for stress tests
selftests: arm64: Add utility to set SVE vector lengths
selftests: arm64: Add stress tests for FPSIMD and SVE context switching
selftests: arm64: Add test for the SVE ptrace interface
selftests: arm64: Test case for enumeration of SVE vector lengths
kselftests/arm64: add PAuth tests for single threaded consistency and differently initialized keys
kselftests/arm64: add PAuth test for whether exec() changes keys
kselftests/arm64: add nop checks for PAuth tests
kselftests/arm64: add a basic Pointer Authentication test
Implementation of ARCH_STACKWALK for unwinding.
(Mark Brown)
* for-next/stacktrace:
arm64: Move console stack display code to stacktrace.c
arm64: stacktrace: Convert to ARCH_STACKWALK
arm64: stacktrace: Make stack walk callback consistent with generic code
stacktrace: Remove reliable argument from arch_stack_walk() callback
Support for ASID pinning, which is required when sharing page-tables with
the SMMU.
(Jean-Philippe Brucker)
* for-next/svm:
arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
arm64: mm: Pin down ASIDs for sharing mm with devices
Rely on firmware tables for establishing CPU topology.
(Valentin Schneider)
* for-next/topology:
arm64: topology: Stop using MPIDR for topology information
Spelling fixes.
(Xiaoming Ni and Yanfei Xu)
* for-next/tpyos:
arm64/numa: Fix a typo in comment of arm64_numa_init
arm64: fix some spelling mistakes in the comments by codespell
vDSO cleanups.
(Will Deacon)
* for-next/vdso:
arm64: vdso: Fix unusual formatting in *setup_additional_pages()
arm64: vdso32: Remove a bunch of #ifdef CONFIG_COMPAT_VDSO guards
@@ -3,8 +3,6 @@
# Makefile for the linux kernel.
#

CPPFLAGS_vmlinux.lds	:= -DTEXT_OFFSET=$(TEXT_OFFSET)
AFLAGS_head.o		:= -DTEXT_OFFSET=$(TEXT_OFFSET)
CFLAGS_armv8_deprecated.o := -I$(src)

CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
@@ -35,6 +35,10 @@ SYM_CODE_START(__cpu_soft_restart)
	mov_q	x13, SCTLR_ELx_FLAGS
	bic	x12, x12, x13
	pre_disable_mmu_workaround
	/*
	 * either disable EL1&0 translation regime or disable EL2&0 translation
	 * regime if HCR_EL2.E2H == 1
	 */
	msr	sctlr_el1, x12
	isb
@@ -169,8 +169,6 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,
}
#endif	/* CONFIG_KVM_INDIRECT_VECTORS */

#include <linux/arm-smccc.h>

static void __maybe_unused call_smc_arch_workaround_1(void)
{
	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
@@ -197,9 +197,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
	ARM64_FTR_END,
};
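
The FTR_LOWER_SAFE to FTR_EXACT switch above changes how a mismatch
between CPUs is sanitised: instead of taking the lowest value seen,
any mismatch now falls back to the field's safe value. A minimal
standalone sketch of the two policies (the arbitration below is a
simplification for illustration, not the kernel's cpufeature code):

	#include <stdio.h>

	enum ftr_type { FTR_LOWER_SAFE, FTR_EXACT };

	/* Combine a field value seen on a new CPU with the system value. */
	static int sanitise_field(enum ftr_type type, int sys_val, int new_val,
				  int safe_val)
	{
		switch (type) {
		case FTR_LOWER_SAFE:
			return new_val < sys_val ? new_val : sys_val;
		case FTR_EXACT:
			return new_val == sys_val ? sys_val : safe_val;
		}
		return safe_val;
	}

	int main(void)
	{
		/* Mismatched levels: LOWER_SAFE picks 1, EXACT gives up (0). */
		printf("%d\n", sanitise_field(FTR_LOWER_SAFE, 2, 1, 0));
		printf("%d\n", sanitise_field(FTR_EXACT, 2, 1, 0));
		return 0;
	}
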
@@ -1111,6 +1111,7 @@ u64 read_sanitised_ftr_reg(u32 id)
		return 0;
	return regp->sys_val;
}
EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);

#define read_sysreg_case(r)	\
	case r:		return read_sysreg_s(r)
@@ -1443,6 +1444,7 @@ static inline void __cpu_enable_hw_dbm(void)

	write_sysreg(tcr, tcr_el1);
	isb();
	local_flush_tlb_all();
}

static bool cpu_has_broken_dbm(void)
@@ -1648,11 +1650,37 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
#endif /* CONFIG_ARM64_RAS_EXTN */

#ifdef CONFIG_ARM64_PTR_AUTH
static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
			     int __unused)
static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
{
	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
	int boot_val, sec_val;

	/* We don't expect to be called with SCOPE_SYSTEM */
	WARN_ON(scope == SCOPE_SYSTEM);
	/*
	 * The ptr-auth feature levels are not intercompatible with lower
	 * levels. Hence we must match ptr-auth feature level of the secondary
	 * CPUs with that of the boot CPU. The level of boot cpu is fetched
	 * from the sanitised register whereas direct register read is done for
	 * the secondary CPUs.
	 * The sanitised feature state is guaranteed to match that of the
	 * boot CPU as a mismatched secondary CPU is parked before it gets
	 * a chance to update the state, with the capability.
	 */
	boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
					       entry->field_pos, entry->sign);
	if (scope & SCOPE_BOOT_CPU)
		return boot_val >= entry->min_field_value;
	/* Now check for the secondary CPUs with SCOPE_LOCAL_CPU scope */
	sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
					      entry->field_pos, entry->sign);
	return sec_val == boot_val;
}

static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
				     int scope)
{
	return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
	       has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
}

static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
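
The shape of the check above: the boot CPU validates its own sanitised
level against the capability's minimum, while each late CPU must match
the boot CPU's level exactly. A self-contained sketch of that decision
flow (the scope flags and values are illustrative stand-ins):

	#include <stdbool.h>
	#include <stdio.h>

	#define SCOPE_BOOT_CPU  1
	#define SCOPE_LOCAL_CPU 2

	static bool auth_cap_matches(int scope, int boot_val, int this_cpu_val,
				     int min_val)
	{
		if (scope & SCOPE_BOOT_CPU)
			return boot_val >= min_val;	/* boot CPU: minimum level */
		return this_cpu_val == boot_val;	/* secondaries: exact match */
	}

	int main(void)
	{
		printf("%d\n", auth_cap_matches(SCOPE_BOOT_CPU, 2, 2, 1));  /* 1 */
		printf("%d\n", auth_cap_matches(SCOPE_LOCAL_CPU, 2, 1, 1)); /* 0 */
		return 0;
	}
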
@@ -2021,7 +2049,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
		.sign = FTR_UNSIGNED,
		.field_pos = ID_AA64ISAR1_APA_SHIFT,
		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
		.matches = has_cpuid_feature,
		.matches = has_address_auth_cpucap,
	},
	{
		.desc = "Address authentication (IMP DEF algorithm)",
@@ -2031,12 +2059,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
		.sign = FTR_UNSIGNED,
		.field_pos = ID_AA64ISAR1_API_SHIFT,
		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
		.matches = has_cpuid_feature,
		.matches = has_address_auth_cpucap,
	},
	{
		.capability = ARM64_HAS_ADDRESS_AUTH,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_address_auth,
		.matches = has_address_auth_metacap,
	},
	{
		.desc = "Generic authentication (architected algorithm)",
@@ -43,94 +43,92 @@ static const char *icache_policy_str[] = {
unsigned long __icache_flags;

static const char *const hwcap_str[] = {
	"fp",
	"asimd",
	"evtstrm",
	"aes",
	"pmull",
	"sha1",
	"sha2",
	"crc32",
	"atomics",
	"fphp",
	"asimdhp",
	"cpuid",
	"asimdrdm",
	"jscvt",
	"fcma",
	"lrcpc",
	"dcpop",
	"sha3",
	"sm3",
	"sm4",
	"asimddp",
	"sha512",
	"sve",
	"asimdfhm",
	"dit",
	"uscat",
	"ilrcpc",
	"flagm",
	"ssbs",
	"sb",
	"paca",
	"pacg",
	"dcpodp",
	"sve2",
	"sveaes",
	"svepmull",
	"svebitperm",
	"svesha3",
	"svesm4",
	"flagm2",
	"frint",
	"svei8mm",
	"svef32mm",
	"svef64mm",
	"svebf16",
	"i8mm",
	"bf16",
	"dgh",
	"rng",
	"bti",
	/* reserved for "mte" */
	NULL
	[KERNEL_HWCAP_FP]		= "fp",
	[KERNEL_HWCAP_ASIMD]		= "asimd",
	[KERNEL_HWCAP_EVTSTRM]		= "evtstrm",
	[KERNEL_HWCAP_AES]		= "aes",
	[KERNEL_HWCAP_PMULL]		= "pmull",
	[KERNEL_HWCAP_SHA1]		= "sha1",
	[KERNEL_HWCAP_SHA2]		= "sha2",
	[KERNEL_HWCAP_CRC32]		= "crc32",
	[KERNEL_HWCAP_ATOMICS]		= "atomics",
	[KERNEL_HWCAP_FPHP]		= "fphp",
	[KERNEL_HWCAP_ASIMDHP]		= "asimdhp",
	[KERNEL_HWCAP_CPUID]		= "cpuid",
	[KERNEL_HWCAP_ASIMDRDM]		= "asimdrdm",
	[KERNEL_HWCAP_JSCVT]		= "jscvt",
	[KERNEL_HWCAP_FCMA]		= "fcma",
	[KERNEL_HWCAP_LRCPC]		= "lrcpc",
	[KERNEL_HWCAP_DCPOP]		= "dcpop",
	[KERNEL_HWCAP_SHA3]		= "sha3",
	[KERNEL_HWCAP_SM3]		= "sm3",
	[KERNEL_HWCAP_SM4]		= "sm4",
	[KERNEL_HWCAP_ASIMDDP]		= "asimddp",
	[KERNEL_HWCAP_SHA512]		= "sha512",
	[KERNEL_HWCAP_SVE]		= "sve",
	[KERNEL_HWCAP_ASIMDFHM]		= "asimdfhm",
	[KERNEL_HWCAP_DIT]		= "dit",
	[KERNEL_HWCAP_USCAT]		= "uscat",
	[KERNEL_HWCAP_ILRCPC]		= "ilrcpc",
	[KERNEL_HWCAP_FLAGM]		= "flagm",
	[KERNEL_HWCAP_SSBS]		= "ssbs",
	[KERNEL_HWCAP_SB]		= "sb",
	[KERNEL_HWCAP_PACA]		= "paca",
	[KERNEL_HWCAP_PACG]		= "pacg",
	[KERNEL_HWCAP_DCPODP]		= "dcpodp",
	[KERNEL_HWCAP_SVE2]		= "sve2",
	[KERNEL_HWCAP_SVEAES]		= "sveaes",
	[KERNEL_HWCAP_SVEPMULL]		= "svepmull",
	[KERNEL_HWCAP_SVEBITPERM]	= "svebitperm",
	[KERNEL_HWCAP_SVESHA3]		= "svesha3",
	[KERNEL_HWCAP_SVESM4]		= "svesm4",
	[KERNEL_HWCAP_FLAGM2]		= "flagm2",
	[KERNEL_HWCAP_FRINT]		= "frint",
	[KERNEL_HWCAP_SVEI8MM]		= "svei8mm",
	[KERNEL_HWCAP_SVEF32MM]		= "svef32mm",
	[KERNEL_HWCAP_SVEF64MM]		= "svef64mm",
	[KERNEL_HWCAP_SVEBF16]		= "svebf16",
	[KERNEL_HWCAP_I8MM]		= "i8mm",
	[KERNEL_HWCAP_BF16]		= "bf16",
	[KERNEL_HWCAP_DGH]		= "dgh",
	[KERNEL_HWCAP_RNG]		= "rng",
	[KERNEL_HWCAP_BTI]		= "bti",
};

#ifdef CONFIG_COMPAT
#define COMPAT_KERNEL_HWCAP(x)	const_ilog2(COMPAT_HWCAP_ ## x)
static const char *const compat_hwcap_str[] = {
	"swp",
	"half",
	"thumb",
	"26bit",
	"fastmult",
	"fpa",
	"vfp",
	"edsp",
	"java",
	"iwmmxt",
	"crunch",
	"thumbee",
	"neon",
	"vfpv3",
	"vfpv3d16",
	"tls",
	"vfpv4",
	"idiva",
	"idivt",
	"vfpd32",
	"lpae",
	"evtstrm",
	NULL
	[COMPAT_KERNEL_HWCAP(SWP)]	= "swp",
	[COMPAT_KERNEL_HWCAP(HALF)]	= "half",
	[COMPAT_KERNEL_HWCAP(THUMB)]	= "thumb",
	[COMPAT_KERNEL_HWCAP(26BIT)]	= NULL,	/* Not possible on arm64 */
	[COMPAT_KERNEL_HWCAP(FAST_MULT)] = "fastmult",
	[COMPAT_KERNEL_HWCAP(FPA)]	= NULL,	/* Not possible on arm64 */
	[COMPAT_KERNEL_HWCAP(VFP)]	= "vfp",
	[COMPAT_KERNEL_HWCAP(EDSP)]	= "edsp",
	[COMPAT_KERNEL_HWCAP(JAVA)]	= NULL,	/* Not possible on arm64 */
	[COMPAT_KERNEL_HWCAP(IWMMXT)]	= NULL,	/* Not possible on arm64 */
	[COMPAT_KERNEL_HWCAP(CRUNCH)]	= NULL,	/* Not possible on arm64 */
	[COMPAT_KERNEL_HWCAP(THUMBEE)]	= NULL,	/* Not possible on arm64 */
	[COMPAT_KERNEL_HWCAP(NEON)]	= "neon",
	[COMPAT_KERNEL_HWCAP(VFPv3)]	= "vfpv3",
	[COMPAT_KERNEL_HWCAP(VFPV3D16)]	= NULL,	/* Not possible on arm64 */
	[COMPAT_KERNEL_HWCAP(TLS)]	= "tls",
	[COMPAT_KERNEL_HWCAP(VFPv4)]	= "vfpv4",
	[COMPAT_KERNEL_HWCAP(IDIVA)]	= "idiva",
	[COMPAT_KERNEL_HWCAP(IDIVT)]	= "idivt",
	[COMPAT_KERNEL_HWCAP(VFPD32)]	= NULL,	/* Not possible on arm64 */
	[COMPAT_KERNEL_HWCAP(LPAE)]	= "lpae",
	[COMPAT_KERNEL_HWCAP(EVTSTRM)]	= "evtstrm",
};

#define COMPAT_KERNEL_HWCAP2(x)	const_ilog2(COMPAT_HWCAP2_ ## x)
static const char *const compat_hwcap2_str[] = {
	"aes",
	"pmull",
	"sha1",
	"sha2",
	"crc32",
	NULL
	[COMPAT_KERNEL_HWCAP2(AES)]	= "aes",
	[COMPAT_KERNEL_HWCAP2(PMULL)]	= "pmull",
	[COMPAT_KERNEL_HWCAP2(SHA1)]	= "sha1",
	[COMPAT_KERNEL_HWCAP2(SHA2)]	= "sha2",
	[COMPAT_KERNEL_HWCAP2(CRC32)]	= "crc32",
};
#endif /* CONFIG_COMPAT */
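
Keying each entry to its KERNEL_HWCAP_*/COMPAT_HWCAP_* bit means a
renumbered or reserved bit can no longer silently shift every
following name. The pattern is plain C99 designated initializers; a
small self-contained demo (the enum here is invented for illustration):

	#include <stdio.h>

	enum { CAP_FP, CAP_AES, CAP_RESERVED, CAP_RNG, CAP_COUNT };

	static const char *const cap_str[] = {
		[CAP_FP]  = "fp",
		[CAP_AES] = "aes",
		/* CAP_RESERVED intentionally left NULL */
		[CAP_RNG] = "rng",
	};

	int main(void)
	{
		unsigned long mask = (1UL << CAP_FP) | (1UL << CAP_RNG);

		for (int i = 0; i < CAP_COUNT; i++)
			if ((mask & (1UL << i)) && cap_str[i])
				printf(" %s", cap_str[i]);
		putchar('\n');
		return 0;
	}
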
@@ -166,16 +164,25 @@ static int c_show(struct seq_file *m, void *v)
		seq_puts(m, "Features\t:");
		if (compat) {
#ifdef CONFIG_COMPAT
			for (j = 0; compat_hwcap_str[j]; j++)
				if (compat_elf_hwcap & (1 << j))
					seq_printf(m, " %s", compat_hwcap_str[j]);
			for (j = 0; j < ARRAY_SIZE(compat_hwcap_str); j++) {
				if (compat_elf_hwcap & (1 << j)) {
					/*
					 * Warn once if any feature should not
					 * have been present on arm64 platform.
					 */
					if (WARN_ON_ONCE(!compat_hwcap_str[j]))
						continue;

			for (j = 0; compat_hwcap2_str[j]; j++)
					seq_printf(m, " %s", compat_hwcap_str[j]);
				}
			}

			for (j = 0; j < ARRAY_SIZE(compat_hwcap2_str); j++)
				if (compat_elf_hwcap2 & (1 << j))
					seq_printf(m, " %s", compat_hwcap2_str[j]);
#endif /* CONFIG_COMPAT */
		} else {
			for (j = 0; hwcap_str[j]; j++)
			for (j = 0; j < ARRAY_SIZE(hwcap_str); j++)
				if (cpu_have_feature(j))
					seq_printf(m, " %s", hwcap_str[j]);
		}
@@ -384,7 +384,7 @@ void __init debug_traps_init(void)
	hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP,
			      TRAP_TRACE, "single-step handler");
	hook_debug_fault_code(DBG_ESR_EVT_BRK, brk_handler, SIGTRAP,
			      TRAP_BRKPT, "ptrace BRK handler");
			      TRAP_BRKPT, "BRK handler");
}

/* Re-enable single step for syscall restarting. */
@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
}
NOKPROBE_SYMBOL(el1_dbg);

static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
{
	local_daif_inherit(regs);
	do_ptrauth_fault(regs, esr);
}
NOKPROBE_SYMBOL(el1_fpac);

asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
{
	unsigned long esr = read_sysreg(esr_el1);
@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
	case ESR_ELx_EC_BRK64:
		el1_dbg(regs, esr);
		break;
	case ESR_ELx_EC_FPAC:
		el1_fpac(regs, esr);
		break;
	default:
		el1_inv(regs, esr);
	}
@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
}
NOKPROBE_SYMBOL(el0_svc);

static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
{
	user_exit_irqoff();
	local_daif_restore(DAIF_PROCCTX);
	do_ptrauth_fault(regs, esr);
}
NOKPROBE_SYMBOL(el0_fpac);

asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
{
	unsigned long esr = read_sysreg(esr_el1);
@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
	case ESR_ELx_EC_BRK64:
		el0_dbg(regs, esr);
		break;
	case ESR_ELx_EC_FPAC:
		el0_fpac(regs, esr);
		break;
	default:
		el0_inv(regs, esr);
	}
@@ -32,6 +32,7 @@ SYM_FUNC_START(fpsimd_load_state)
SYM_FUNC_END(fpsimd_load_state)

#ifdef CONFIG_ARM64_SVE

SYM_FUNC_START(sve_save_state)
	sve_save 0, x1, 2
	ret
@@ -46,4 +47,28 @@ SYM_FUNC_START(sve_get_vl)
	_sve_rdvl	0, 1
	ret
SYM_FUNC_END(sve_get_vl)

/*
 * Load SVE state from FPSIMD state.
 *
 * x0 = pointer to struct fpsimd_state
 * x1 = VQ - 1
 *
 * Each SVE vector will be loaded with the first 128-bits taken from FPSIMD
 * and the rest zeroed. All the other SVE registers will be zeroed.
 */
SYM_FUNC_START(sve_load_from_fpsimd_state)
	sve_load_vq	x1, x2, x3
	fpsimd_restore	x0, 8
	_for n, 0, 15, _sve_pfalse	\n
	_sve_wrffr	0
	ret
SYM_FUNC_END(sve_load_from_fpsimd_state)

/* Zero all SVE registers but the first 128-bits of each vector */
SYM_FUNC_START(sve_flush_live)
	sve_flush
	ret
SYM_FUNC_END(sve_flush_live)

#endif /* CONFIG_ARM64_SVE */
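
A note on the VQ - 1 argument: SVE defines the vector quantum as 128
bits, so a vector length of VL bytes corresponds to VQ = VL / 16, and
ZCR_ELx.LEN holds VQ - 1. The arithmetic is easy to sanity-check in C
(helper names are ours, mirroring the kernel's sve_vq_from_vl() only
in spirit):

	#include <stdio.h>

	#define SVE_VQ_BYTES	16	/* one vector quantum = 128 bits */

	static unsigned int vq_from_vl(unsigned int vl) { return vl / SVE_VQ_BYTES; }
	static unsigned int vl_from_vq(unsigned int vq) { return vq * SVE_VQ_BYTES; }

	int main(void)
	{
		unsigned int vl = 256;	/* e.g. a 2048-bit implementation */

		printf("VL=%u bytes -> VQ=%u, ZCR LEN field=%u\n",
		       vl, vq_from_vl(vl), vq_from_vl(vl) - 1);
		printf("VQ=1 -> VL=%u bytes (the FPSIMD-compatible minimum)\n",
		       vl_from_vq(1));
		return 0;
	}
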
@@ -32,9 +32,11 @@
#include <linux/swab.h>

#include <asm/esr.h>
#include <asm/exception.h>
#include <asm/fpsimd.h>
#include <asm/cpufeature.h>
#include <asm/cputype.h>
#include <asm/neon.h>
#include <asm/processor.h>
#include <asm/simd.h>
#include <asm/sigcontext.h>
@@ -312,7 +314,7 @@ static void fpsimd_save(void)
		 * re-enter user with corrupt state.
		 * There's no way to recover, so kill it:
		 */
		force_signal_inject(SIGKILL, SI_KERNEL, 0);
		force_signal_inject(SIGKILL, SI_KERNEL, 0, 0);
		return;
	}
@@ -928,7 +930,7 @@ void fpsimd_release_task(struct task_struct *dead_task)
 * the SVE access trap will be disabled the next time this task
 * reaches ret_to_user.
 *
 * TIF_SVE should be clear on entry: otherwise, task_fpsimd_load()
 * TIF_SVE should be clear on entry: otherwise, fpsimd_restore_current_state()
 * would have disabled the SVE access trap for userspace during
 * ret_to_user, making an SVE access trap impossible in that case.
 */
@@ -936,7 +938,7 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs)
{
	/* Even if we chose not to use SVE, the hardware could still trap: */
	if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
		return;
	}
@@ -36,14 +36,10 @@

#include "efi-header.S"

#define __PHYS_OFFSET	(KERNEL_START - TEXT_OFFSET)
#define __PHYS_OFFSET	KERNEL_START

#if (TEXT_OFFSET & 0xfff) != 0
#error TEXT_OFFSET must be at least 4KB aligned
#elif (PAGE_OFFSET & 0x1fffff) != 0
#if (PAGE_OFFSET & 0x1fffff) != 0
#error PAGE_OFFSET must be at least 2MB aligned
#elif TEXT_OFFSET > 0x1fffff
#error TEXT_OFFSET must be less than 2MB
#endif

/*
@@ -55,7 +51,7 @@
 * x0 = physical address to the FDT blob.
 *
 * This code is mostly position independent so you call this at
 * __pa(PAGE_OFFSET + TEXT_OFFSET).
 * __pa(PAGE_OFFSET).
 *
 * Note that the callee-saved registers are used for storing variables
 * that are useful before the MMU is enabled. The allocations are described
@@ -77,7 +73,7 @@ _head:
	b	primary_entry			// branch to kernel start, magic
	.long	0				// reserved
#endif
	le64sym	_kernel_offset_le		// Image load offset from start of RAM, little-endian
	.quad	0				// Image load offset from start of RAM, little-endian
	le64sym	_kernel_size_le			// Effective size of kernel image, little-endian
	le64sym	_kernel_flags_le		// Informative flags, little-endian
	.quad	0				// reserved
@@ -382,7 +378,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
	 * Map the kernel image (starting with PHYS_OFFSET).
	 */
	adrp	x0, init_pg_dir
	mov_q	x5, KIMAGE_VADDR + TEXT_OFFSET	// compile time __va(_text)
	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
	add	x5, x5, x23			// add KASLR displacement
	mov	x4, PTRS_PER_PGD
	adrp	x6, _end			// runtime __pa(_end)
@@ -474,7 +470,7 @@ SYM_FUNC_END(__primary_switched)

	.pushsection ".rodata", "a"
SYM_DATA_START(kimage_vaddr)
	.quad	_text - TEXT_OFFSET
	.quad	_text
SYM_DATA_END(kimage_vaddr)
EXPORT_SYMBOL(kimage_vaddr)
	.popsection
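
The header field itself is unchanged in layout: byte 8 of the arm64
Image is still the little-endian image load offset, it is simply
hard-wired to zero now. A hedged userspace peek at that header (struct
layout follows the arm64 booting documentation; minimal error
handling):

	#include <stdint.h>
	#include <stdio.h>

	struct arm64_image_header {		/* 64 bytes, little-endian */
		uint32_t code0, code1;
		uint64_t text_offset;		/* image load offset */
		uint64_t image_size;		/* effective image size */
		uint64_t flags;
		uint64_t res2, res3, res4;
		uint32_t magic;			/* 0x644d5241, "ARM\x64" */
		uint32_t res5;
	};

	int main(int argc, char **argv)
	{
		struct arm64_image_header h;
		FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;

		if (!f || fread(&h, sizeof(h), 1, f) != 1)
			return 1;
		printf("magic ok: %d, text_offset: %#llx\n",
		       h.magic == 0x644d5241,
		       (unsigned long long)h.text_offset);
		return 0;
	}
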
@@ -21,7 +21,6 @@
#include <linux/sched.h>
#include <linux/suspend.h>
#include <linux/utsname.h>
#include <linux/version.h>

#include <asm/barrier.h>
#include <asm/cacheflush.h>
@@ -62,7 +62,6 @@
 */
#define HEAD_SYMBOLS						\
	DEFINE_IMAGE_LE64(_kernel_size_le, _end - _text);	\
	DEFINE_IMAGE_LE64(_kernel_offset_le, TEXT_OFFSET);	\
	DEFINE_IMAGE_LE64(_kernel_flags_le, __HEAD_FLAGS);

#endif /* __ARM64_KERNEL_IMAGE_H */
@@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
	case AARCH64_INSN_HINT_XPACLRI:
	case AARCH64_INSN_HINT_PACIA_1716:
	case AARCH64_INSN_HINT_PACIB_1716:
	case AARCH64_INSN_HINT_AUTIA_1716:
	case AARCH64_INSN_HINT_AUTIB_1716:
	case AARCH64_INSN_HINT_PACIAZ:
	case AARCH64_INSN_HINT_PACIASP:
	case AARCH64_INSN_HINT_PACIBZ:
	case AARCH64_INSN_HINT_PACIBSP:
	case AARCH64_INSN_HINT_AUTIAZ:
	case AARCH64_INSN_HINT_AUTIASP:
	case AARCH64_INSN_HINT_AUTIBZ:
	case AARCH64_INSN_HINT_AUTIBSP:
	case AARCH64_INSN_HINT_BTI:
	case AARCH64_INSN_HINT_BTIC:
	case AARCH64_INSN_HINT_BTIJ:
@@ -176,7 +170,7 @@ bool __kprobes aarch64_insn_uses_literal(u32 insn)

bool __kprobes aarch64_insn_is_branch(u32 insn)
{
	/* b, bl, cb*, tb*, b.cond, br, blr */
	/* b, bl, cb*, tb*, ret*, b.cond, br*, blr* */

	return aarch64_insn_is_b(insn) ||
	       aarch64_insn_is_bl(insn) ||
@@ -185,8 +179,11 @@ bool __kprobes aarch64_insn_is_branch(u32 insn)
	       aarch64_insn_is_tbz(insn) ||
	       aarch64_insn_is_tbnz(insn) ||
	       aarch64_insn_is_ret(insn) ||
	       aarch64_insn_is_ret_auth(insn) ||
	       aarch64_insn_is_br(insn) ||
	       aarch64_insn_is_br_auth(insn) ||
	       aarch64_insn_is_blr(insn) ||
	       aarch64_insn_is_blr_auth(insn) ||
	       aarch64_insn_is_bcond(insn);
}
@@ -137,11 +137,11 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 * whist unwinding the stackframe and is like a subroutine return so we use
 * the PC.
 */
static int callchain_trace(struct stackframe *frame, void *data)
static bool callchain_trace(void *data, unsigned long pc)
{
	struct perf_callchain_entry_ctx *entry = data;
	perf_callchain_store(entry, frame->pc);
	return 0;
	perf_callchain_store(entry, pc);
	return true;
}

void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
@@ -69,6 +69,9 @@ static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
	[C(ITLB)][C(OP_READ)][C(RESULT_MISS)]	= ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
	[C(ITLB)][C(OP_READ)][C(RESULT_ACCESS)]	= ARMV8_PMUV3_PERFCTR_L1I_TLB,

	[C(LL)][C(OP_READ)][C(RESULT_MISS)]	= ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD,
	[C(LL)][C(OP_READ)][C(RESULT_ACCESS)]	= ARMV8_PMUV3_PERFCTR_LL_CACHE_RD,

	[C(BPU)][C(OP_READ)][C(RESULT_ACCESS)]	= ARMV8_PMUV3_PERFCTR_BR_PRED,
	[C(BPU)][C(OP_READ)][C(RESULT_MISS)]	= ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
@@ -302,13 +305,33 @@ static struct attribute_group armv8_pmuv3_format_attr_group = {
	.attrs = armv8_pmuv3_format_attrs,
};

static ssize_t slots_show(struct device *dev, struct device_attribute *attr,
			  char *page)
{
	struct pmu *pmu = dev_get_drvdata(dev);
	struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
	u32 slots = cpu_pmu->reg_pmmir & ARMV8_PMU_SLOTS_MASK;

	return snprintf(page, PAGE_SIZE, "0x%08x\n", slots);
}

static DEVICE_ATTR_RO(slots);

static struct attribute *armv8_pmuv3_caps_attrs[] = {
	&dev_attr_slots.attr,
	NULL,
};

static struct attribute_group armv8_pmuv3_caps_attr_group = {
	.name = "caps",
	.attrs = armv8_pmuv3_caps_attrs,
};
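
Once registered, the group appears as a caps directory under the PMU's
event_source device, with one file per attribute. Reading it from
userspace is straightforward (the path assumes the default
armv8_pmuv3 PMU name on a PMUv3.4+ system):

	#include <stdio.h>

	int main(void)
	{
		char buf[32];
		FILE *f = fopen("/sys/bus/event_source/devices/armv8_pmuv3/caps/slots", "r");

		if (!f || !fgets(buf, sizeof(buf), f)) {
			perror("caps/slots");
			return 1;
		}
		printf("PMMIR.SLOTS = %s", buf);
		return 0;
	}
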
/*
 * Perf Events' indices
 */
#define	ARMV8_IDX_CYCLE_COUNTER	0
#define	ARMV8_IDX_COUNTER0	1
#define	ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
	(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)


/*
@@ -348,6 +371,73 @@ static inline bool armv8pmu_event_is_chained(struct perf_event *event)
#define	ARMV8_IDX_TO_COUNTER(x)	\
	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)

/*
 * This code is really good
 */

#define PMEVN_CASE(n, case_macro) \
	case n: case_macro(n); break

#define PMEVN_SWITCH(x, case_macro)				\
	do {							\
		switch (x) {					\
		PMEVN_CASE(0, case_macro);			\
		PMEVN_CASE(1, case_macro);			\
		PMEVN_CASE(2, case_macro);			\
		PMEVN_CASE(3, case_macro);			\
		PMEVN_CASE(4, case_macro);			\
		PMEVN_CASE(5, case_macro);			\
		PMEVN_CASE(6, case_macro);			\
		PMEVN_CASE(7, case_macro);			\
		PMEVN_CASE(8, case_macro);			\
		PMEVN_CASE(9, case_macro);			\
		PMEVN_CASE(10, case_macro);			\
		PMEVN_CASE(11, case_macro);			\
		PMEVN_CASE(12, case_macro);			\
		PMEVN_CASE(13, case_macro);			\
		PMEVN_CASE(14, case_macro);			\
		PMEVN_CASE(15, case_macro);			\
		PMEVN_CASE(16, case_macro);			\
		PMEVN_CASE(17, case_macro);			\
		PMEVN_CASE(18, case_macro);			\
		PMEVN_CASE(19, case_macro);			\
		PMEVN_CASE(20, case_macro);			\
		PMEVN_CASE(21, case_macro);			\
		PMEVN_CASE(22, case_macro);			\
		PMEVN_CASE(23, case_macro);			\
		PMEVN_CASE(24, case_macro);			\
		PMEVN_CASE(25, case_macro);			\
		PMEVN_CASE(26, case_macro);			\
		PMEVN_CASE(27, case_macro);			\
		PMEVN_CASE(28, case_macro);			\
		PMEVN_CASE(29, case_macro);			\
		PMEVN_CASE(30, case_macro);			\
		default: WARN(1, "Invalid PMEV* index\n");	\
		}						\
	} while (0)

#define RETURN_READ_PMEVCNTRN(n) \
	return read_sysreg(pmevcntr##n##_el0)
static unsigned long read_pmevcntrn(int n)
{
	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
	return 0;
}

#define WRITE_PMEVCNTRN(n) \
	write_sysreg(val, pmevcntr##n##_el0)
static void write_pmevcntrn(int n, unsigned long val)
{
	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
}

#define WRITE_PMEVTYPERN(n) \
	write_sysreg(val, pmevtyper##n##_el0)
static void write_pmevtypern(int n, unsigned long val)
{
	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
}

static inline u32 armv8pmu_pmcr_read(void)
{
	return read_sysreg(pmcr_el0);
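
PMEVCNTR<n>_EL0 and PMEVTYPER<n>_EL0 are distinct system registers, so
the register number has to be a compile-time constant; PMEVN_SWITCH
expands a runtime index into 31 constant cases so the PMSELR_EL0
select-then-access indirection can go away. The templating trick
itself is ordinary C and can be sketched against an array standing in
for the register file:

	#include <stdio.h>

	static unsigned long fake_reg[31];	/* stand-in for pmevcntr<n>_el0 */

	#define READ_REG(n)	return fake_reg[n]	/* pretend constant-index access */
	#define CASE(n, op)	case n: op(n); break

	#define SWITCH_N(x, op)				\
		do {					\
			switch (x) {			\
			CASE(0, op); CASE(1, op);	\
			CASE(2, op); CASE(3, op);	\
			/* extend to 30 as in the kernel */	\
			default: return 0;		\
			}				\
		} while (0)

	static unsigned long read_reg_n(int n)
	{
		SWITCH_N(n, READ_REG);
		return 0;
	}

	int main(void)
	{
		fake_reg[2] = 42;
		printf("%lu\n", read_reg_n(2));
		return 0;
	}
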
@@ -365,28 +455,16 @@ static inline int armv8pmu_has_overflowed(u32 pmovsr)
	return pmovsr & ARMV8_PMU_OVERFLOWED_MASK;
}

static inline int armv8pmu_counter_valid(struct arm_pmu *cpu_pmu, int idx)
{
	return idx >= ARMV8_IDX_CYCLE_COUNTER &&
		idx <= ARMV8_IDX_COUNTER_LAST(cpu_pmu);
}

static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
{
	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
}

static inline void armv8pmu_select_counter(int idx)
static inline u32 armv8pmu_read_evcntr(int idx)
{
	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
	write_sysreg(counter, pmselr_el0);
	isb();
}

static inline u64 armv8pmu_read_evcntr(int idx)
{
	armv8pmu_select_counter(idx);
	return read_sysreg(pmxevcntr_el0);
	return read_pmevcntrn(counter);
}

static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
@@ -440,15 +518,11 @@ static u64 armv8pmu_unbias_long_counter(struct perf_event *event, u64 value)

static u64 armv8pmu_read_counter(struct perf_event *event)
{
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;
	u64 value = 0;

	if (!armv8pmu_counter_valid(cpu_pmu, idx))
		pr_err("CPU%u reading wrong counter %d\n",
			smp_processor_id(), idx);
	else if (idx == ARMV8_IDX_CYCLE_COUNTER)
	if (idx == ARMV8_IDX_CYCLE_COUNTER)
		value = read_sysreg(pmccntr_el0);
	else
		value = armv8pmu_read_hw_counter(event);
@@ -458,8 +532,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)

static inline void armv8pmu_write_evcntr(int idx, u64 value)
{
	armv8pmu_select_counter(idx);
	write_sysreg(value, pmxevcntr_el0);
	u32 counter = ARMV8_IDX_TO_COUNTER(idx);

	write_pmevcntrn(counter, value);
}

static inline void armv8pmu_write_hw_counter(struct perf_event *event,
@@ -477,16 +552,12 @@ static inline void armv8pmu_write_hw_counter(struct perf_event *event,

static void armv8pmu_write_counter(struct perf_event *event, u64 value)
{
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;

	value = armv8pmu_bias_long_counter(event, value);

	if (!armv8pmu_counter_valid(cpu_pmu, idx))
		pr_err("CPU%u writing wrong counter %d\n",
			smp_processor_id(), idx);
	else if (idx == ARMV8_IDX_CYCLE_COUNTER)
	if (idx == ARMV8_IDX_CYCLE_COUNTER)
		write_sysreg(value, pmccntr_el0);
	else
		armv8pmu_write_hw_counter(event, value);
@@ -494,9 +565,10 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)

static inline void armv8pmu_write_evtype(int idx, u32 val)
{
	armv8pmu_select_counter(idx);
	u32 counter = ARMV8_IDX_TO_COUNTER(idx);

	val &= ARMV8_PMU_EVTYPE_MASK;
	write_sysreg(val, pmxevtyper_el0);
	write_pmevtypern(counter, val);
}

static inline void armv8pmu_write_event_type(struct perf_event *event)
@@ -516,7 +588,10 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
		armv8pmu_write_evtype(idx - 1, hwc->config_base);
		armv8pmu_write_evtype(idx, chain_evt);
	} else {
		armv8pmu_write_evtype(idx, hwc->config_base);
		if (idx == ARMV8_IDX_CYCLE_COUNTER)
			write_sysreg(hwc->config_base, pmccfiltr_el0);
		else
			armv8pmu_write_evtype(idx, hwc->config_base);
	}
}
@@ -532,6 +607,11 @@ static u32 armv8pmu_event_cnten_mask(struct perf_event *event)

static inline void armv8pmu_enable_counter(u32 mask)
{
	/*
	 * Make sure event configuration register writes are visible before we
	 * enable the counter.
	 */
	isb();
	write_sysreg(mask, pmcntenset_el0);
}
@@ -550,6 +630,11 @@ static inline void armv8pmu_enable_event_counter(struct perf_event *event)
static inline void armv8pmu_disable_counter(u32 mask)
{
	write_sysreg(mask, pmcntenclr_el0);
	/*
	 * Make sure the effects of disabling the counter are visible before we
	 * start configuring the event.
	 */
	isb();
}

static inline void armv8pmu_disable_event_counter(struct perf_event *event)
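
The pairing is deliberate: an ISB before setting the enable bit makes
earlier type/filter writes architecturally visible to the counter, and
an ISB after clearing it ensures the counter is really stopped before
it is reprogrammed. The contract can be sketched in C with inline
assembly (aarch64-only; the "registers" here are volatile globals,
since the real PMU registers are not writable from EL0):

	/* Build with an aarch64 compiler: gcc -O2 isb_demo.c */
	static volatile unsigned long evtype_shadow;
	static volatile unsigned long cnten_shadow;

	static inline void isb(void)
	{
		__asm__ volatile("isb" ::: "memory");
	}

	int main(void)
	{
		evtype_shadow = 0x11;	/* configure the event first... */
		isb();			/* ...make it visible... */
		cnten_shadow = 1;	/* ...then enable the counter */

		cnten_shadow = 0;	/* disable... */
		isb();			/* ...complete the stop... */
		evtype_shadow = 0x08;	/* ...before reprogramming */
		return 0;
	}
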
@@ -606,15 +691,10 @@ static inline u32 armv8pmu_getreset_flags(void)

static void armv8pmu_enable_event(struct perf_event *event)
{
	unsigned long flags;
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);

	/*
	 * Enable counter and interrupt, and set the counter to count
	 * the event that we're interested in.
	 */
	raw_spin_lock_irqsave(&events->pmu_lock, flags);

	/*
	 * Disable counter
@@ -622,7 +702,7 @@ static void armv8pmu_enable_event(struct perf_event *event)
	armv8pmu_disable_event_counter(event);

	/*
	 * Set event (if destined for PMNx counters).
	 * Set event.
	 */
	armv8pmu_write_event_type(event);

@@ -635,21 +715,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
	 * Enable counter
	 */
	armv8pmu_enable_event_counter(event);

	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
}

static void armv8pmu_disable_event(struct perf_event *event)
{
	unsigned long flags;
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);

	/*
	 * Disable counter and interrupt
	 */
	raw_spin_lock_irqsave(&events->pmu_lock, flags);

	/*
	 * Disable counter
	 */
@@ -659,30 +728,18 @@ static void armv8pmu_disable_event(struct perf_event *event)
	 * Disable interrupt for this counter
	 */
	armv8pmu_disable_event_irq(event);

	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
}

static void armv8pmu_start(struct arm_pmu *cpu_pmu)
{
	unsigned long flags;
	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);

	raw_spin_lock_irqsave(&events->pmu_lock, flags);
	/* Enable all counters */
	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
}

static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
{
	unsigned long flags;
	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);

	raw_spin_lock_irqsave(&events->pmu_lock, flags);
	/* Disable all counters */
	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
}

static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
@@ -735,20 +792,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
		if (!armpmu_event_set_period(event))
			continue;

		/*
		 * Perf event overflow will queue the processing of the event as
		 * an irq_work which will be taken care of in the handling of
		 * IPI_IRQ_WORK.
		 */
		if (perf_event_overflow(event, &data, regs))
			cpu_pmu->disable(event);
	}
	armv8pmu_start(cpu_pmu);

	/*
	 * Handle the pending perf events.
	 *
	 * Note: this call *must* be run with interrupts disabled. For
	 * platforms that can have the PMU interrupts raised as an NMI, this
	 * will not work.
	 */
	irq_work_run();

	return IRQ_HANDLED;
}

@@ -997,6 +1050,12 @@ static void __armv8pmu_probe_pmu(void *info)

	bitmap_from_arr32(cpu_pmu->pmceid_ext_bitmap,
			  pmceid, ARMV8_PMUV3_MAX_COMMON_EVENTS);

	/* store PMMIR_EL1 register for sysfs */
	if (pmuver >= ID_AA64DFR0_PMUVER_8_4 && (pmceid_raw[1] & BIT(31)))
		cpu_pmu->reg_pmmir = read_cpuid(PMMIR_EL1);
	else
		cpu_pmu->reg_pmmir = 0;
}

static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
@@ -1019,7 +1078,8 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
			  int (*map_event)(struct perf_event *event),
			  const struct attribute_group *events,
			  const struct attribute_group *format)
			  const struct attribute_group *format,
			  const struct attribute_group *caps)
{
	int ret = armv8pmu_probe_pmu(cpu_pmu);
	if (ret)
@@ -1044,104 +1104,112 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
			events : &armv8_pmuv3_events_attr_group;
	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_FORMATS] = format ?
			format : &armv8_pmuv3_format_attr_group;
	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_CAPS] = caps ?
			caps : &armv8_pmuv3_caps_attr_group;

	return 0;
}

static int armv8_pmu_init_nogroups(struct arm_pmu *cpu_pmu, char *name,
				   int (*map_event)(struct perf_event *event))
{
	return armv8_pmu_init(cpu_pmu, name, map_event, NULL, NULL, NULL);
}

static int armv8_pmuv3_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_pmuv3",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_pmuv3",
				       armv8_pmuv3_map_event);
}

static int armv8_a34_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a34",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a34",
				       armv8_pmuv3_map_event);
}

static int armv8_a35_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a35",
			      armv8_a53_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a35",
				       armv8_a53_map_event);
}

static int armv8_a53_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a53",
			      armv8_a53_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a53",
				       armv8_a53_map_event);
}

static int armv8_a55_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a55",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a55",
				       armv8_pmuv3_map_event);
}

static int armv8_a57_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a57",
			      armv8_a57_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a57",
				       armv8_a57_map_event);
}

static int armv8_a65_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a65",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a65",
				       armv8_pmuv3_map_event);
}

static int armv8_a72_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a72",
			      armv8_a57_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a72",
				       armv8_a57_map_event);
}

static int armv8_a73_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a73",
			      armv8_a73_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a73",
				       armv8_a73_map_event);
}

static int armv8_a75_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a75",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a75",
				       armv8_pmuv3_map_event);
}

static int armv8_a76_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a76",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a76",
				       armv8_pmuv3_map_event);
}

static int armv8_a77_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cortex_a77",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cortex_a77",
				       armv8_pmuv3_map_event);
}

static int armv8_e1_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_neoverse_e1",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_neoverse_e1",
				       armv8_pmuv3_map_event);
}

static int armv8_n1_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_neoverse_n1",
			      armv8_pmuv3_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_neoverse_n1",
				       armv8_pmuv3_map_event);
}

static int armv8_thunder_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_cavium_thunder",
			      armv8_thunder_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_cavium_thunder",
				       armv8_thunder_map_event);
}

static int armv8_vulcan_pmu_init(struct arm_pmu *cpu_pmu)
{
	return armv8_pmu_init(cpu_pmu, "armv8_brcm_vulcan",
			      armv8_vulcan_map_event, NULL, NULL);
	return armv8_pmu_init_nogroups(cpu_pmu, "armv8_brcm_vulcan",
				       armv8_vulcan_map_event);
}

static const struct of_device_id armv8_pmu_of_device_ids[] = {
@@ -16,7 +16,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)

	/*
	 * Our handling of compat tasks (PERF_SAMPLE_REGS_ABI_32) is weird, but
	 * we're stuck with it for ABI compatability reasons.
	 * we're stuck with it for ABI compatibility reasons.
	 *
	 * For a 32-bit consumer inspecting a 32-bit task, then it will look at
	 * the first 16 registers (see arch/arm/include/uapi/asm/perf_regs.h).
@@ -29,7 +29,8 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
	    aarch64_insn_is_msr_imm(insn) ||
	    aarch64_insn_is_msr_reg(insn) ||
	    aarch64_insn_is_exception(insn) ||
	    aarch64_insn_is_eret(insn))
	    aarch64_insn_is_eret(insn) ||
	    aarch64_insn_is_eret_auth(insn))
		return false;

	/*
@@ -42,8 +43,10 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
			!= AARCH64_INSN_SPCLREG_DAIF;

	/*
	 * The HINT instruction is is problematic when single-stepping,
	 * except for the NOP case.
	 * The HINT instruction is steppable only if it is in whitelist
	 * and the rest of other such instructions are blocked for
	 * single stepping as they may cause exception or other
	 * unintended behaviour.
	 */
	if (aarch64_insn_is_hint(insn))
		return aarch64_insn_is_steppable_hint(insn);
@@ -36,18 +36,6 @@ SYM_CODE_START(arm64_relocate_new_kernel)
	mov	x14, xzr			/* x14 = entry ptr */
	mov	x13, xzr			/* x13 = copy dest */

	/* Clear the sctlr_el2 flags. */
	mrs	x0, CurrentEL
	cmp	x0, #CurrentEL_EL2
	b.ne	1f
	mrs	x0, sctlr_el2
	mov_q	x1, SCTLR_ELx_FLAGS
	bic	x0, x0, x1
	pre_disable_mmu_workaround
	msr	sctlr_el2, x0
	isb
1:

	/* Check if the new image needs relocation. */
	tbnz	x16, IND_DONE_BIT, .Ldone
@@ -18,16 +18,16 @@ struct return_address_data {
	void *addr;
};

static int save_return_addr(struct stackframe *frame, void *d)
static bool save_return_addr(void *d, unsigned long pc)
{
	struct return_address_data *data = d;

	if (!data->level) {
		data->addr = (void *)frame->pc;
		return 1;
		data->addr = (void *)pc;
		return false;
	} else {
		--data->level;
		return 0;
		return true;
	}
}
NOKPROBE_SYMBOL(save_return_addr);
@@ -244,7 +244,8 @@ static int preserve_sve_context(struct sve_context __user *ctx)
	if (vq) {
		/*
		 * This assumes that the SVE state has already been saved to
		 * the task struct by calling preserve_fpsimd_context().
		 * the task struct by calling the function
		 * fpsimd_signal_preserve_current_state().
		 */
		err |= __copy_to_user((char __user *)ctx + SVE_SIG_REGS_OFFSET,
				      current->thread.sve_state,
@@ -83,9 +83,9 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)

	/*
	 * We write the release address as LE regardless of the native
	 * endianess of the kernel. Therefore, any boot-loaders that
	 * endianness of the kernel. Therefore, any boot-loaders that
	 * read this address need to convert this address to the
	 * boot-loader's endianess before jumping. This is mandated by
	 * boot-loader's endianness before jumping. This is mandated by
	 * the boot protocol.
	 */
	writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
@@ -118,12 +118,12 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
NOKPROBE_SYMBOL(unwind_frame);

void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
			     int (*fn)(struct stackframe *, void *), void *data)
			     bool (*fn)(void *, unsigned long), void *data)
{
	while (1) {
		int ret;

		if (fn(frame, data))
		if (!fn(data, frame->pc))
			break;
		ret = unwind_frame(tsk, frame);
		if (ret < 0)
@@ -132,84 +132,89 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
}
NOKPROBE_SYMBOL(walk_stackframe);

#ifdef CONFIG_STACKTRACE
struct stack_trace_data {
	struct stack_trace *trace;
	unsigned int no_sched_functions;
	unsigned int skip;
};

static int save_trace(struct stackframe *frame, void *d)
static void dump_backtrace_entry(unsigned long where, const char *loglvl)
{
	struct stack_trace_data *data = d;
	struct stack_trace *trace = data->trace;
	unsigned long addr = frame->pc;
	printk("%s %pS\n", loglvl, (void *)where);
}

	if (data->no_sched_functions && in_sched_functions(addr))
		return 0;
	if (data->skip) {
		data->skip--;
		return 0;
void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
		    const char *loglvl)
{
	struct stackframe frame;
	int skip = 0;

	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);

	if (regs) {
		if (user_mode(regs))
			return;
		skip = 1;
	}

	trace->entries[trace->nr_entries++] = addr;

	return trace->nr_entries >= trace->max_entries;
}

void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
{
	struct stack_trace_data data;
	struct stackframe frame;

	data.trace = trace;
	data.skip = trace->skip;
	data.no_sched_functions = 0;

	start_backtrace(&frame, regs->regs[29], regs->pc);
	walk_stackframe(current, &frame, save_trace, &data);
}
EXPORT_SYMBOL_GPL(save_stack_trace_regs);

static noinline void __save_stack_trace(struct task_struct *tsk,
	struct stack_trace *trace, unsigned int nosched)
{
	struct stack_trace_data data;
	struct stackframe frame;
	if (!tsk)
		tsk = current;

	if (!try_get_task_stack(tsk))
		return;

	data.trace = trace;
	data.skip = trace->skip;
	data.no_sched_functions = nosched;

	if (tsk != current) {
		start_backtrace(&frame, thread_saved_fp(tsk),
				thread_saved_pc(tsk));
	} else {
		/* We don't want this function nor the caller */
		data.skip += 2;
	if (tsk == current) {
		start_backtrace(&frame,
				(unsigned long)__builtin_frame_address(0),
				(unsigned long)__save_stack_trace);
				(unsigned long)dump_backtrace);
	} else {
		/*
		 * task blocked in __switch_to
		 */
		start_backtrace(&frame,
				thread_saved_fp(tsk),
				thread_saved_pc(tsk));
	}

	walk_stackframe(tsk, &frame, save_trace, &data);
	printk("%sCall trace:\n", loglvl);
	do {
		/* skip until specified stack frame */
		if (!skip) {
			dump_backtrace_entry(frame.pc, loglvl);
		} else if (frame.fp == regs->regs[29]) {
			skip = 0;
			/*
			 * Mostly, this is the case where this function is
			 * called in panic/abort. As exception handler's
			 * stack frame does not contain the corresponding pc
			 * at which an exception has taken place, use regs->pc
			 * instead.
			 */
			dump_backtrace_entry(regs->pc, loglvl);
		}
	} while (!unwind_frame(tsk, &frame));

	put_task_stack(tsk);
}

void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
{
	__save_stack_trace(tsk, trace, 1);
}
EXPORT_SYMBOL_GPL(save_stack_trace_tsk);

void save_stack_trace(struct stack_trace *trace)
{
	__save_stack_trace(current, trace, 0);
	dump_backtrace(NULL, tsk, loglvl);
	barrier();
}

#ifdef CONFIG_STACKTRACE

void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
		     struct task_struct *task, struct pt_regs *regs)
{
	struct stackframe frame;

	if (regs)
		start_backtrace(&frame, regs->regs[29], regs->pc);
	else if (task == current)
		start_backtrace(&frame,
				(unsigned long)__builtin_frame_address(0),
				(unsigned long)arch_stack_walk);
	else
		start_backtrace(&frame, thread_saved_fp(task),
				thread_saved_pc(task));

	walk_stackframe(task, &frame, consume_entry, cookie);
}

EXPORT_SYMBOL_GPL(save_stack_trace);
#endif
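
With ARCH_STACKWALK, the arch side is reduced to producing a stream of
PCs; the generic stacktrace code decides what to keep and when to stop
via the consume callback's boolean return. The contract is easy to
mimic in userspace with a fake frame list:

	#include <stdbool.h>
	#include <stdio.h>

	typedef bool (*consume_fn)(void *cookie, unsigned long pc);

	struct trace { unsigned long pc[8]; int n; };

	static bool store_pc(void *cookie, unsigned long pc)
	{
		struct trace *t = cookie;

		t->pc[t->n++] = pc;
		return t->n < 8;	/* false stops the walk, as in the kernel */
	}

	static void walk(consume_fn fn, void *cookie)
	{
		static const unsigned long fake_frames[] = { 0x1000, 0x2000, 0x3000 };

		for (int i = 0; i < 3; i++)
			if (!fn(cookie, fake_frames[i]))
				break;
	}

	int main(void)
	{
		struct trace t = { .n = 0 };

		walk(store_pc, &t);
		for (int i = 0; i < t.n; i++)
			printf("%#lx\n", t.pc[i]);
		return 0;
	}
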
@@ -36,21 +36,23 @@ void store_cpu_topology(unsigned int cpuid)
	if (mpidr & MPIDR_UP_BITMASK)
		return;

	/* Create cpu topology mapping based on MPIDR. */
	if (mpidr & MPIDR_MT_BITMASK) {
		/* Multiprocessor system : Multi-threads per core */
		cpuid_topo->thread_id  = MPIDR_AFFINITY_LEVEL(mpidr, 0);
		cpuid_topo->core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 1);
		cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 2) |
					 MPIDR_AFFINITY_LEVEL(mpidr, 3) << 8;
	} else {
		/* Multiprocessor system : Single-thread per core */
		cpuid_topo->thread_id  = -1;
		cpuid_topo->core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 0);
		cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 1) |
					 MPIDR_AFFINITY_LEVEL(mpidr, 2) << 8 |
					 MPIDR_AFFINITY_LEVEL(mpidr, 3) << 16;
	}
	/*
	 * This would be the place to create cpu topology based on MPIDR.
	 *
	 * However, it cannot be trusted to depict the actual topology; some
	 * pieces of the architecture enforce an artificial cap on Aff0 values
	 * (e.g. GICv3's ICC_SGI1R_EL1 limits it to 15), leading to an
	 * artificial cycling of Aff1, Aff2 and Aff3 values. IOW, these end up
	 * having absolutely no relationship to the actual underlying system
	 * topology, and cannot be reasonably used as core / package ID.
	 *
	 * If the MT bit is set, Aff0 *could* be used to define a thread ID, but
	 * we still wouldn't be able to obtain a sane core ID. This means we
	 * need to entirely ignore MPIDR for any topology deduction.
	 */
	cpuid_topo->thread_id  = -1;
	cpuid_topo->core_id    = cpuid;
	cpuid_topo->package_id = cpu_to_node(cpuid);

	pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n",
		 cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
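
After this change the IDs reported under sysfs come from cpu_to_node()
and the logical CPU number rather than MPIDR fields. They can be
checked from userspace via the standard topology directory:

	#include <stdio.h>

	static long read_id(const char *path)
	{
		long v = -1;
		FILE *f = fopen(path, "r");

		if (f) {
			fscanf(f, "%ld", &v);
			fclose(f);
		}
		return v;
	}

	int main(void)
	{
		printf("cpu0: package %ld core %ld\n",
		       read_id("/sys/devices/system/cpu/cpu0/topology/physical_package_id"),
		       read_id("/sys/devices/system/cpu/cpu0/topology/core_id"));
		return 0;
	}
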
@@ -34,6 +34,7 @@
 #include <asm/daifflags.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
+#include <asm/extable.h>
 #include <asm/insn.h>
 #include <asm/kprobes.h>
 #include <asm/traps.h>
@@ -53,11 +54,6 @@ static const char *handler[]= {

 int show_unhandled_signals = 0;

-static void dump_backtrace_entry(unsigned long where, const char *loglvl)
-{
-	printk("%s %pS\n", loglvl, (void *)where);
-}
-
 static void dump_kernel_instr(const char *lvl, struct pt_regs *regs)
 {
 	unsigned long addr = instruction_pointer(regs);
@@ -83,66 +79,6 @@ static void dump_kernel_instr(const char *lvl, struct pt_regs *regs)
 	printk("%sCode: %s\n", lvl, str);
 }

-void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
-		    const char *loglvl)
-{
-	struct stackframe frame;
-	int skip = 0;
-
-	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
-
-	if (regs) {
-		if (user_mode(regs))
-			return;
-		skip = 1;
-	}
-
-	if (!tsk)
-		tsk = current;
-
-	if (!try_get_task_stack(tsk))
-		return;
-
-	if (tsk == current) {
-		start_backtrace(&frame,
-				(unsigned long)__builtin_frame_address(0),
-				(unsigned long)dump_backtrace);
-	} else {
-		/*
-		 * task blocked in __switch_to
-		 */
-		start_backtrace(&frame,
-				thread_saved_fp(tsk),
-				thread_saved_pc(tsk));
-	}
-
-	printk("%sCall trace:\n", loglvl);
-	do {
-		/* skip until specified stack frame */
-		if (!skip) {
-			dump_backtrace_entry(frame.pc, loglvl);
-		} else if (frame.fp == regs->regs[29]) {
-			skip = 0;
-			/*
-			 * Mostly, this is the case where this function is
-			 * called in panic/abort. As exception handler's
-			 * stack frame does not contain the corresponding pc
-			 * at which an exception has taken place, use regs->pc
-			 * instead.
-			 */
-			dump_backtrace_entry(regs->pc, loglvl);
-		}
-	} while (!unwind_frame(tsk, &frame));
-
-	put_task_stack(tsk);
-}
-
-void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
-{
-	dump_backtrace(NULL, tsk, loglvl);
-	barrier();
-}
-
 #ifdef CONFIG_PREEMPT
 #define S_PREEMPT " PREEMPT"
 #elif defined(CONFIG_PREEMPT_RT)
@@ -200,9 +136,9 @@ void die(const char *str, struct pt_regs *regs, int err)
 	oops_exit();

 	if (in_interrupt())
-		panic("Fatal exception in interrupt");
+		panic("%s: Fatal exception in interrupt", str);
 	if (panic_on_oops)
-		panic("Fatal exception");
+		panic("%s: Fatal exception", str);

 	raw_spin_unlock_irqrestore(&die_lock, flags);
@@ -412,7 +348,7 @@ exit:
 	return fn ? fn(regs, instr) : 1;
 }

-void force_signal_inject(int signal, int code, unsigned long address)
+void force_signal_inject(int signal, int code, unsigned long address, unsigned int err)
 {
 	const char *desc;
 	struct pt_regs *regs = current_pt_regs();
@@ -438,7 +374,7 @@ void force_signal_inject(int signal, int code, unsigned long address)
 		signal = SIGKILL;
 	}

-	arm64_notify_die(desc, regs, signal, code, (void __user *)address, 0);
+	arm64_notify_die(desc, regs, signal, code, (void __user *)address, err);
 }

 /*
@@ -455,7 +391,7 @@ void arm64_notify_segfault(unsigned long addr)
 		code = SEGV_ACCERR;
 	mmap_read_unlock(current->mm);

-	force_signal_inject(SIGSEGV, code, addr);
+	force_signal_inject(SIGSEGV, code, addr, 0);
 }

 void do_undefinstr(struct pt_regs *regs)
@@ -468,17 +404,28 @@ void do_undefinstr(struct pt_regs *regs)
 		return;

 	BUG_ON(!user_mode(regs));
-	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }
 NOKPROBE_SYMBOL(do_undefinstr);

 void do_bti(struct pt_regs *regs)
 {
 	BUG_ON(!user_mode(regs));
-	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }
 NOKPROBE_SYMBOL(do_bti);

+void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr)
+{
+	/*
+	 * Unexpected FPAC exception or pointer authentication failure in
+	 * the kernel: kill the task before it does any more harm.
+	 */
+	BUG_ON(!user_mode(regs));
+	force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
+}
+NOKPROBE_SYMBOL(do_ptrauth_fault);
+
 #define __user_cache_maint(insn, address, res)			\
 	if (address >= user_addr_max()) {			\
 		res = -EFAULT;					\
@@ -528,7 +475,7 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
 		__user_cache_maint("ic ivau", address, ret);
 		break;
 	default:
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 		return;
 	}
@@ -581,7 +528,7 @@ static void mrs_handler(unsigned int esr, struct pt_regs *regs)
 	sysreg = esr_sys64_to_sysreg(esr);

 	if (do_emulate_mrs(regs, sysreg, rt) != 0)
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }

 static void wfi_handler(unsigned int esr, struct pt_regs *regs)
@@ -775,6 +722,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
 	[ESR_ELx_EC_SVE]		= "SVE",
 	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
+	[ESR_ELx_EC_FPAC]		= "FPAC",
 	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
 	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
 	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
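For context on how this table is used: the exception class lives in ESR_ELx bits [31:26], and a helper in this file maps it to the string. A minimal sketch along the lines of esr_get_class_string() (shape reconstructed from memory, so treat it as approximate):

	static const char *class_string(u32 esr)
	{
		/* ESR_ELx_EC() extracts bits [31:26]; the new "FPAC" entry
		 * is returned for pointer-authentication faults, and classes
		 * not listed fall back to the array's default initializer. */
		return esr_class_str[ESR_ELx_EC(esr)];
	}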
@@ -935,26 +883,6 @@ asmlinkage void enter_from_user_mode(void)
 }
 NOKPROBE_SYMBOL(enter_from_user_mode);

-void __pte_error(const char *file, int line, unsigned long val)
-{
-	pr_err("%s:%d: bad pte %016lx.\n", file, line, val);
-}
-
-void __pmd_error(const char *file, int line, unsigned long val)
-{
-	pr_err("%s:%d: bad pmd %016lx.\n", file, line, val);
-}
-
-void __pud_error(const char *file, int line, unsigned long val)
-{
-	pr_err("%s:%d: bad pud %016lx.\n", file, line, val);
-}
-
-void __pgd_error(const char *file, int line, unsigned long val)
-{
-	pr_err("%s:%d: bad pgd %016lx.\n", file, line, val);
-}
-
 /* GENERIC_BUG traps */

 int is_valid_bugaddr(unsigned long addr)
@@ -994,6 +922,21 @@ static struct break_hook bug_break_hook = {
 	.imm = BUG_BRK_IMM,
 };

+static int reserved_fault_handler(struct pt_regs *regs, unsigned int esr)
+{
+	pr_err("%s generated an invalid instruction at %pS!\n",
+		in_bpf_jit(regs) ? "BPF JIT" : "Kernel text patching",
+		(void *)instruction_pointer(regs));
+
+	/* We cannot handle this */
+	return DBG_HOOK_ERROR;
+}
+
+static struct break_hook fault_break_hook = {
+	.fn = reserved_fault_handler,
+	.imm = FAULT_BRK_IMM,
+};
+
 #ifdef CONFIG_KASAN_SW_TAGS

 #define KASAN_ESR_RECOVER 0x20
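Background for this hook, since the immediate matters: kernel text patching and the BPF JIT emit a guaranteed-to-trap BRK for encodings they cannot produce, and that BRK's immediate is what fault_break_hook matches. A sketch of the encoding, with constants quoted from asm/brk-imm.h and asm/debug-monitors.h as of this series (double-check against the tree before relying on them):

	/* BRK #imm16 carries the immediate in instruction bits [20:5]. */
	#define FAULT_BRK_IMM		0x100
	#define AARCH64_BREAK_MON	0xd4200000	/* BRK #0 */
	#define AARCH64_BREAK_FAULT	(AARCH64_BREAK_MON | (FAULT_BRK_IMM << 5))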
@@ -1059,6 +1002,7 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
 void __init trap_init(void)
 {
 	register_kernel_break_hook(&bug_break_hook);
+	register_kernel_break_hook(&fault_break_hook);
 #ifdef CONFIG_KASAN_SW_TAGS
 	register_kernel_break_hook(&kasan_break_hook);
 #endif
@@ -30,15 +30,11 @@
 #include <asm/vdso.h>

 extern char vdso_start[], vdso_end[];
-#ifdef CONFIG_COMPAT_VDSO
 extern char vdso32_start[], vdso32_end[];
-#endif /* CONFIG_COMPAT_VDSO */

 enum vdso_abi {
 	VDSO_ABI_AA64,
-#ifdef CONFIG_COMPAT_VDSO
 	VDSO_ABI_AA32,
-#endif /* CONFIG_COMPAT_VDSO */
 };

 enum vvar_pages {
@@ -284,21 +280,17 @@ up_fail:
 /*
  * Create and map the vectors page for AArch32 tasks.
  */
-#ifdef CONFIG_COMPAT_VDSO
 static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
 		struct vm_area_struct *new_vma)
 {
 	return __vdso_remap(VDSO_ABI_AA32, sm, new_vma);
 }
-#endif /* CONFIG_COMPAT_VDSO */

 enum aarch32_map {
 	AA32_MAP_VECTORS, /* kuser helpers */
-#ifdef CONFIG_COMPAT_VDSO
+	AA32_MAP_SIGPAGE,
 	AA32_MAP_VVAR,
 	AA32_MAP_VDSO,
-#endif
-	AA32_MAP_SIGPAGE
 };

 static struct page *aarch32_vectors_page __ro_after_init;
@@ -309,7 +301,10 @@ static struct vm_special_mapping aarch32_vdso_maps[] = {
 		.name	= "[vectors]", /* ABI */
 		.pages	= &aarch32_vectors_page,
 	},
-#ifdef CONFIG_COMPAT_VDSO
+	[AA32_MAP_SIGPAGE] = {
+		.name	= "[sigpage]", /* ABI */
+		.pages	= &aarch32_sig_page,
+	},
 	[AA32_MAP_VVAR] = {
 		.name = "[vvar]",
 		.fault = vvar_fault,
@@ -319,11 +314,6 @@ static struct vm_special_mapping aarch32_vdso_maps[] = {
 		.name = "[vdso]",
 		.mremap = aarch32_vdso_mremap,
 	},
-#endif /* CONFIG_COMPAT_VDSO */
-	[AA32_MAP_SIGPAGE] = {
-		.name	= "[sigpage]", /* ABI */
-		.pages	= &aarch32_sig_page,
-	},
 };

 static int aarch32_alloc_kuser_vdso_page(void)
@@ -362,25 +352,25 @@ static int aarch32_alloc_sigpage(void)
 	return 0;
 }

-#ifdef CONFIG_COMPAT_VDSO
 static int __aarch32_alloc_vdso_pages(void)
 {
+	if (!IS_ENABLED(CONFIG_COMPAT_VDSO))
+		return 0;
+
 	vdso_info[VDSO_ABI_AA32].dm = &aarch32_vdso_maps[AA32_MAP_VVAR];
 	vdso_info[VDSO_ABI_AA32].cm = &aarch32_vdso_maps[AA32_MAP_VDSO];

 	return __vdso_init(VDSO_ABI_AA32);
 }
-#endif /* CONFIG_COMPAT_VDSO */

 static int __init aarch32_alloc_vdso_pages(void)
 {
 	int ret;

-#ifdef CONFIG_COMPAT_VDSO
 	ret = __aarch32_alloc_vdso_pages();
 	if (ret)
 		return ret;
-#endif

 	ret = aarch32_alloc_sigpage();
 	if (ret)
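One design note on this hunk: replacing the #ifdef with IS_ENABLED() means the AArch32 path is parsed and type-checked in every configuration and only then discarded as dead code when CONFIG_COMPAT_VDSO=n, which catches bitrot that an #ifdef would hide. A minimal sketch of the pattern (the helper name is invented):

	static int map_compat_vdso(void)
	{
		/* Compiles in all configs; the branch below is eliminated
		 * at build time when the option is off. */
		if (!IS_ENABLED(CONFIG_COMPAT_VDSO))
			return 0;

		return __vdso_init(VDSO_ABI_AA32);	/* stands in for the real work */
	}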
@@ -449,14 +439,12 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	if (ret)
 		goto out;

-#ifdef CONFIG_COMPAT_VDSO
-	ret = __setup_additional_pages(VDSO_ABI_AA32,
-				       mm,
-				       bprm,
-				       uses_interp);
-	if (ret)
-		goto out;
-#endif /* CONFIG_COMPAT_VDSO */
+	if (IS_ENABLED(CONFIG_COMPAT_VDSO)) {
+		ret = __setup_additional_pages(VDSO_ABI_AA32, mm, bprm,
+					       uses_interp);
+		if (ret)
+			goto out;
+	}

 	ret = aarch32_sigreturn_setup(mm);
 out:
@@ -497,8 +485,7 @@ static int __init vdso_init(void)
 }
 arch_initcall(vdso_init);

-int arch_setup_additional_pages(struct linux_binprm *bprm,
-				int uses_interp)
+int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
 	struct mm_struct *mm = current->mm;
 	int ret;
@@ -506,11 +493,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;

-	ret = __setup_additional_pages(VDSO_ABI_AA64,
-				       mm,
-				       bprm,
-				       uses_interp);
-
+	ret = __setup_additional_pages(VDSO_ABI_AA64, mm, bprm, uses_interp);
 	mmap_write_unlock(mm);

 	return ret;
@@ -105,7 +105,7 @@ SECTIONS
 		*(.eh_frame)
 	}

-	. = KIMAGE_VADDR + TEXT_OFFSET;
+	. = KIMAGE_VADDR;

 	.head.text : {
 		_text = .;
@@ -274,4 +274,4 @@ ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
-ASSERT(_text == (KIMAGE_VADDR + TEXT_OFFSET), "HEAD is misaligned")
+ASSERT(_text == KIMAGE_VADDR, "HEAD is misaligned")
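A scope note on the TEXT_OFFSET removal: the boot protocol's image load offset field is not deleted, it is simply fixed at zero, so the 64-byte Image header keeps its layout. Sketched below per Documentation/arm64/booting.rst and asm/image.h, abridged for reference:

	struct arm64_image_header {
		__le32	code0;		/* executable code */
		__le32	code1;		/* executable code */
		__le64	text_offset;	/* image load offset, now 0 */
		__le64	image_size;	/* effective Image size */
		__le64	flags;		/* kernel flags */
		__le64	res2;		/* reserved */
		__le64	res3;		/* reserved */
		__le64	res4;		/* reserved */
		__le32	magic;		/* 0x644d5241 ("ARM\x64") */
		__le32	res5;		/* reserved (PE COFF offset) */
	};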