This patch adds support for BE8 AArch32 tasks to the compat layer.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds support for the aarch64_be ELF format to the AArch64 ELF
loader.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
For big-endian processors, we must include
linux/byteorder/big_endian.h to get the relevant definitions for
swabbing between CPU order and a defined endianness.
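As a sketch, the conditional include ends up looking like this
(__AARCH64EB__ is the AArch64 big-endian predefine; the exact file
contents are assumed):

  #ifdef __AARCH64EB__
  #include <linux/byteorder/big_endian.h>
  #else
  #include <linux/byteorder/little_endian.h>
  #endif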
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds the basic infrastructure necessary to support
CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug
support will depend on an implementation's cpu_operations (e.g. PSCI).
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may
tell us something useful (i.e. how to hotplug it back on), so we must
read it along with the enable-methods of all the other CPUs. Even
on UP the enable-method may tell us useful information (e.g. if a core
has some mechanism that might be usable for cpuidle), so we should
always read it.
This patch factors out the reading of the enable method, and ensures
that CPU0's enable method is read regardless of whether the kernel is
built with SMP support.
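A minimal sketch of the factored-out read, assuming a hypothetical
helper name (of_get_property() is the standard DT accessor):

  static const char *cpu_read_enable_method(struct device_node *dn,
                                            unsigned int cpu)
  {
          const char *enable_method = of_get_property(dn, "enable-method", NULL);

          if (!enable_method)
                  pr_debug("CPU%u: missing enable-method property\n", cpu);

          return enable_method;
  }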
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The arm64 kernel has an internal holding pen, which is necessary for
some systems where we can't bring CPUs online individually and must hold
multiple CPUs in a safe area until the kernel is able to handle them.
The current SMP infrastructure for arm64 is closely coupled to this
holding pen, and alternative boot methods must launch CPUs into the pen,
where they sit before they are launched into the kernel proper.
With PSCI (and possibly other future boot methods), we can bring CPUs
online individually, and need not perform the secondary_holding_pen
dance. Instead, this patch factors the holding pen management code out
to the spin-table boot method code, as it is the only boot method
requiring the pen.
A new entry point for secondaries, secondary_entry, is added for other
boot methods to use, which bypasses the holding pen and its associated
overhead when bringing CPUs online. The smp.pen.text section is also
removed, as the pen can live in head.text without problem.
The cpu_operations structure is extended with two new functions,
cpu_boot and cpu_postboot, for bringing a CPU into the kernel and
performing any post-boot cleanup required by a boot method (e.g.
resetting secondary_holding_pen_release to INVALID_HWID).
Documentation is added for cpu_operations.
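As a sketch, the extended structure looks roughly like this (field set
as described above; exact prototypes assumed):

  struct cpu_operations {
          const char      *name;
          int             (*cpu_init)(struct device_node *, unsigned int);
          int             (*cpu_prepare)(unsigned int);
          int             (*cpu_boot)(unsigned int);  /* bring the CPU into the kernel */
          void            (*cpu_postboot)(void);      /* post-boot cleanup, e.g. reset the pen */
  };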
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
For hotplug support, we're going to want a place to store operations
that do more than bring CPUs online, and it makes sense to group these
with our current smp_enable_ops. For cpuidle support, we'll want to
group additional functions, and we may want them even for UP kernels.
This patch renames smp_enable_ops to the more general cpu_operations,
and pulls the definitions out of smp code such that they can be used in
UP kernels. While we're at it, fix up instances of the cpu parameter to
be an unsigned int, drop the init markings and rename the *_cpu
functions to cpu_* to reduce future churn when cpu_operations is
extended.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The functions in psci.c are only used from smp_psci.c, and smp_psci
cannot function without psci.c. Additionally psci.c is built when !SMP,
where it's expected that cpu_suspend may be useful.
This patch unifies the two files, removing pointless duplication and
paving the way for PSCI support in UP systems.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch introduces cmpxchg64_relaxed for arm64 using the existing
cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits)
without barrier semantics.
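The mapping is a one-liner; a sketch of the definition, reusing the
barrier-free cmpxchg_local as described:

  #define cmpxchg64_relaxed(ptr, o, n)    cmpxchg_local((ptr), (o), (n))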
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Our spinlocks are only 32-bit (2x16-bit tickets) and our cmpxchg can
deal with 8-bytes (as one would hope!).
This patch wires up the cmpxchg-based lockless lockref implementation
for arm64.
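A sketch of why a single 8-byte cmpxchg suffices: the lockref packs the
32-bit ticket lock and the 32-bit count into one naturally aligned
64-bit word (layout illustrative):

  struct lockref_sketch {
          union {
                  __aligned_u64 lock_count;       /* updated in one cmpxchg64 */
                  struct {
                          u32 lock;               /* 2x16-bit ticket spinlock */
                          u32 count;              /* reference count */
                  };
          };
  };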
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch introduces a ticket lock implementation for arm64, along the
same lines as the implementation for arch/arm/.
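A minimal C sketch of the ticket-lock semantics (the real implementation
is ldaxr/stxr assembly; this only shows the shape of the algorithm):

  struct ticket_lock {
          unsigned short next;    /* ticket handed to the next waiter */
          unsigned short owner;   /* ticket currently allowed to lock */
  };

  static void ticket_lock(volatile struct ticket_lock *l)
  {
          /* atomically take a ticket (fetch-and-increment 'next') */
          unsigned short ticket = __sync_fetch_and_add(&l->next, 1);

          while (l->owner != ticket)
                  ;       /* spin; the real code uses wfe to save power */
  }

  static void ticket_unlock(volatile struct ticket_lock *l)
  {
          l->owner++;     /* hand the lock to the next ticket holder */
  }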
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In ftrace_syscall_enter(),

  syscall_get_arguments(..., 0, n, ...)
  if (i == 0) { <handle orig_x0> ...; n--; }
  memcpy(..., n * sizeof(args[0]));

If the number of arguments (n) is zero and the argument index (i) is also
zero in syscall_get_arguments(), no arguments should be copied by memcpy().
Otherwise 'n--' wraps around to a big positive number and an unexpected
amount of data is copied. Tracing system calls which take no arguments,
say sync(void), can hit this case and eventually corrupt the system.
This patch fixes the issue both in syscall_get_arguments() and
syscall_set_arguments().
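The fix amounts to an early return before the orig_x0 special case; a
sketch close to the fixed helper (abridged):

  static inline void syscall_get_arguments(struct task_struct *task,
                                           struct pt_regs *regs,
                                           unsigned int i, unsigned int n,
                                           unsigned long *args)
  {
          if (n == 0)
                  return;         /* nothing to copy; avoids the n-- underflow */

          if (i == 0) {
                  args[0] = regs->orig_x0;        /* x0 is clobbered, use the saved copy */
                  args++;
                  i++;
                  n--;
          }

          memcpy(args, &regs->regs[i], n * sizeof(args[0]));
  }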
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
get_user() is defined as a function-like macro on arm64, and trace_get_user()
calls it as follows:

  get_user(ch, ptr++);

Since the second parameter occurs twice in the macro definition, 'ptr++' is
unexpectedly evaluated twice and trace_get_user() generates a bogus
string from the user-provided one. As a result, some ftrace sysfs operations,
like "echo FUNCNAME > set_ftrace_filter", hit this case and eventually fail.
This patch fixes the issue in both get_user() and put_user().
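The shape of the fix is to evaluate the pointer expression exactly once
into a local variable; a sketch (the real macro also zeroes 'x' on
failure):

  #define get_user(x, ptr)                                                \
  ({                                                                      \
          __typeof__(*(ptr)) __user *__p = (ptr); /* evaluated once */    \
          might_fault();                                                  \
          access_ok(VERIFY_READ, __p, sizeof(*__p)) ?                     \
                  __get_user((x), __p) : -EFAULT;                         \
  })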
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[catalin.marinas@arm.com: added __user type annotation and s/optr/__p/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On arm64, elf_hwcap is a 32-bit quantity, but it is stored in
a 64-bit auxiliary ELF field and glibc reads hwcap as 64-bit.
This patch widens elf_hwcap to 64 bits.
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull ARM64 update from Catalin Marinas:
- User tagged pointers support (top 8-bit of user pointers
automatically ignored by the CPU).
- Kernel mode NEON (no users for arm64 yet but work in progress).
- arm64 kernel Image header extended to accommodate future EFI stub.
- Remove BogoMIPS reporting (not relevant, it's just the timer
frequency).
- Clean-up (EM_AARCH64/EM_ARM to elf-em.h, ELF notes in read-only
segment, unused variable).
- Bug-fixes (RAM boundaries not 2MB aligned, perf, includes).
* tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
Documentation/arm64: clarify requirements for DTB placement
arm64: mm: permit use of tagged pointers at EL0
Move the EM_ARM and EM_AARCH64 definitions to uapi/linux/elf-em.h
arm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S
arm64: delay: don't bother reporting bogomips in /proc/cpuinfo
arm64: Fix mapping of memory banks not ending on a PMD_SIZE boundary
arm64: move elf notes into readonly segment
arm64: Enable interrupts in the EL0 undef handler
arm64: Expand arm64 image header
ARM64: include: asm: include "asm/types.h" in "pgtable-2level-types.h" and "pgtable-3level-types.h"
arm64: add support for kernel mode NEON
arm64: perf: fix ARMv8 EVTYPE_MASK to include NSH bit
arm64: perf: fix group validation when using enable_on_exec
TCR.TBI0 can be used to cause hardware address translation to ignore the
top byte of userspace virtual addresses. Whilst not especially useful in
standard C programs, this can be used by JITs to `tag' pointers with
various pieces of metadata.
This patch enables this bit for AArch64 Linux, and adds a new file to
Documentation/arm64/ which describes some potential caveats when using
tagged virtual addresses.
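For illustration, a userspace JIT might tag and strip pointers like this
(tag value and helper names are made up; with TBI0 set, loads and stores
through the tagged pointer just work):

  #include <stdint.h>

  #define TAG_SHIFT 56

  static inline void *tag_ptr(void *p, uint8_t tag)
  {
          return (void *)(((uintptr_t)p & ~(0xffULL << TAG_SHIFT)) |
                          ((uintptr_t)tag << TAG_SHIFT));
  }

  static inline void *untag_ptr(void *p)
  {
          /* strip the metadata before e.g. passing the pointer to a syscall */
          return (void *)((uintptr_t)p & ~(0xffULL << TAG_SHIFT));
  }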
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We need to include "asm/types.h", just as arm does, or compilation
fails with the following errors:
In file included from arch/arm64/include/asm/page.h:37:0,
from drivers/staging/lustre/include/linux/lnet/linux/lib-lnet.h:42,
from drivers/staging/lustre/include/linux/lnet/lib-lnet.h:44,
from drivers/staging/lustre/lnet/lnet/api-ni.c:38:
arch/arm64/include/asm/pgtable-2level-types.h:19:1: error: unknown type name ‘u64’
arch/arm64/include/asm/pgtable-2level-types.h:20:1: error: unknown type name ‘u64’
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"Fixes for ARM and aarch64.
This pull request is coming a bit later than I would have preferred,
because Gleb and I happened to have holidays around the same weeks of
August... sorry about that"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: ARM: Squash len warning
arm64: KVM: use 'int' instead of 'u32' for variable 'target' in kvm_host.h.
arm64: KVM: add missing dsb before invalidating Stage-2 TLBs
arm64: KVM: perform save/restore of PAR_EL1
arm64: KVM: fix 2-level page tables unmapping
ARM: KVM: Fix unaligned unmap_range leak
ARM: KVM: Fix 64-bit coprocessor handling
* Support for memory mapped arch_timers
* Trivial fixes to the moxart timer code
* Documentation updates
Trivial conflicts in drivers/clocksource/arm_arch_timer.c. Fixed up
the newly added __cpuinit annotations as well.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Add <asm/neon.h> containing kernel_neon_begin/kernel_neon_end function
declarations and corresponding definitions in fpsimd.c
These are needed to wrap uses of NEON in kernel mode. The names are
identical to the ones used in arm/ so code using intrinsics or
vectorized by GCC can be shared between arm and arm64.
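A sketch of the usage pattern (NEON must only be used inside the
bracket, and the bracketed region must not sleep):

  #include <asm/neon.h>

  void do_simd_work(void)
  {
          kernel_neon_begin();    /* preserve user FP/SIMD state */
          /* ... NEON instructions or intrinsics here ... */
          kernel_neon_end();      /* restore user FP/SIMD state */
  }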
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ben Tebulin reported:
"Since v3.7.2 on two independent machines a very specific Git
repository fails in 9/10 cases on git-fsck due to an SHA1/memory
failures. This only occurs on a very specific repository and can be
reproduced stably on two independent laptops. Git mailing list ran
out of ideas and for me this looks like some very exotic kernel issue"
and bisected the failure to the backport of commit 53a59fc67f ("mm:
limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").
That commit itself is not actually buggy, but what it does is to make it
much more likely to hit the partial TLB invalidation case, since it
introduces a new case in tlb_next_batch() that previously only ever
happened when running out of memory.
The real bug is that the TLB gather virtual memory range setup is subtly
buggered. It was introduced in commit 597e1c3580 ("mm/mmu_gather:
enable tlb flush range in generic mmu_gather"), and the range handling
was already fixed at least once in commit e6c495a96c ("mm: fix the TLB
range flushed when __tlb_remove_page() runs out of slots"), but that fix
was not complete.
The problem with the TLB gather virtual address range is that it isn't
set up by the initial tlb_gather_mmu() initialization (which didn't get
the TLB range information), but it is set up ad-hoc later by the
functions that actually flush the TLB. And so any such case that forgot
to update the TLB range entries would potentially miss TLB invalidates.
Rather than try to figure out exactly which particular ad-hoc range
setup was missing (I personally suspect it's the hugetlb case in
zap_huge_pmd(), which didn't have the same logic as zap_pte_range()
did), this patch just gets rid of the problem at the source: make the
TLB range information available to tlb_gather_mmu(), and initialize it
when initializing all the other tlb gather fields.
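A sketch of the resulting interface: callers hand the range to
tlb_gather_mmu() up front instead of patching it in ad hoc later:

  static void unmap_example(struct mm_struct *mm,
                            unsigned long start, unsigned long end)
  {
          struct mmu_gather tlb;

          tlb_gather_mmu(&tlb, mm, start, end);   /* range recorded at init */
          /* ... unmap pages, batching TLB invalidation work ... */
          tlb_finish_mmu(&tlb, start, end);
  }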
This makes the patch larger, but conceptually much simpler. And the end
result is much more understandable; even if you want to play games with
partial ranges when invalidating the TLB contents in chunks, now the
range information is always there, and anybody who doesn't want to
bother with it won't introduce subtle bugs.
Ben verified that this fixes his problem.
Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com>
Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au>
Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
'target' is set to -1 in kvm_arch_vcpu_init(), and kvm_vcpu_initialized()
needs to check whether 'target' is less than zero. So 'target' must be
defined as 'int' instead of 'u32', just as ARM does.
The related warning:
arch/arm64/kvm/../../../arch/arm/kvm/arm.c:497:2: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
Signed-off-by: Chen Gang <gang.chen@asianux.com>
[Marc: reformatted the Subject line to fit the series]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Not saving PAR_EL1 is an unfortunate oversight. If the guest
performs an AT* operation and gets scheduled out before reading
the result of the translation from PAR_EL1, it could become
corrupted by another guest or the host.
Saving this register is made slightly more complicated as KVM also
uses it on the permission fault handling path, leading to an ugly
"stash and restore" sequence. Fortunately, this is already a slow
path so we don't really care. Also, Linux doesn't do any AT*
operation, so Linux guests are not impacted by this bug.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
We're going to introduce support to read and write the memory
mapped timer registers in the next patch, so push the cp15
read/write functions one level deeper. This simplifies the next
patch and makes it clearer what's going on.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Using an enum for the register we wish to access allows newer
compilers to determine if we've forgotten a case in our switch
statement. This allows us to remove the BUILD_BUG() instances in
the arm64 port, avoiding problems where optimizations may not
happen.
To try and force better code generation we're currently marking
the accessor functions as inline, but newer compilers can ignore
the inline keyword unless it's marked __always_inline. Luckily on
arm and arm64 inline is __always_inline, but let's make
everything __always_inline to be explicit.
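A sketch of the pattern (enum and function names illustrative,
physical-timer case only): the enum lets the compiler warn on a missing
switch case, and __always_inline guarantees the switch folds away:

  enum arch_timer_reg {
          ARCH_TIMER_REG_CTRL,
          ARCH_TIMER_REG_TVAL,
  };

  static __always_inline
  void arch_timer_reg_write_cp15(enum arch_timer_reg reg, u32 val)
  {
          switch (reg) {
          case ARCH_TIMER_REG_CTRL:
                  asm volatile("msr cntp_ctl_el0, %0" : : "r" ((u64)val));
                  break;
          case ARCH_TIMER_REG_TVAL:
                  asm volatile("msr cntp_tval_el0, %0" : : "r" ((u64)val));
                  break;
          }
          isb();
  }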
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Written by Catalin Marinas, tested by APM on the Storm platform. This is
needed because of failures encountered when running the SpecWeb benchmark.
Signed-off-by: Feng Kan <fkan@apm.com>
Acked-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Secondary CPUs write to __boot_cpu_mode with caches disabled, and thus a
cached value of __boot_cpu_mode may be incoherent with that in memory.
This could lead to a failure to detect mismatched boot modes.
This patch adds flushing to ensure that writes by secondaries to
__boot_cpu_mode are made visible before we test against it.
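Conceptually (the real change is in head.S assembly; the helper and
array shape here are assumed), the reader must not trust a cached copy:

  extern u32 __boot_cpu_mode[2];

  static bool boot_mode_mismatched_sketch(void)
  {
          /* invalidate any stale cached copy before comparing, since the
           * secondaries wrote with their caches disabled */
          __flush_dcache_area(__boot_cpu_mode, sizeof(__boot_cpu_mode));
          return __boot_cpu_mode[0] != __boot_cpu_mode[1];
  }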
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull arm64 fixes from Catalin Marinas:
- Post -rc1 update to the common reboot infrastructure.
- Fixes (user cache maintenance fault handling, !COMPAT compilation,
CPU online and interrupt handling).
* tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
arm64: use common reboot infrastructure
arm64: mm: don't treat user cache maintenance faults as writes
arm64: add '#ifdef CONFIG_COMPAT' for aarch32_break_handler()
arm64: Only enable local interrupts after the CPU is marked online
Commit 7b6d864b48 (reboot: arm: change reboot_mode to use enum
reboot_mode) changed the way reboot is handled on arm, which has a
direct impact on arm64 as we share the reset driver on the VE platform.
The obvious fix is to move arm64 to use the same infrastructure.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: removed reboot_mode = REBOOT_HARD default setting]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If 'COMPAT' is not defined, aarch32_break_handler() fails to compile;
since it works independently of 'COMPAT', remove the dummy definition.
The related error:
arch/arm64/kernel/debug-monitors.c:249:5: error: redefinition of ‘aarch32_break_handler’
In file included from arch/arm64/kernel/debug-monitors.c:29:0:
/root/linux-next/arch/arm64/include/asm/debug-monitors.h:89:12: note: previous definition of ‘aarch32_break_handler’ was here
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch-specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/arm64 uses of the __cpuinit macros from
all C files. Currently arm64 does not have any __CPUINIT used in
assembly files.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull ARM64 updates from Catalin Marinas:
"Main features:
- KVM and Xen ports to AArch64
- Hugetlbfs and transparent huge pages support for arm64
- Applied Micro X-Gene Kconfig entry and dts file
- Cache flushing improvements
For arm64 huge pages support, there are x86 changes moving part of
arch/x86/mm/hugetlbpage.c into mm/hugetlb.c to be re-used by arm64"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64: (66 commits)
arm64: Add initial DTS for APM X-Gene Storm SOC and APM Mustang board
arm64: Add defines for APM ARMv8 implementation
arm64: Enable APM X-Gene SOC family in the defconfig
arm64: Add Kconfig option for APM X-Gene SOC family
arm64/Makefile: provide vdso_install target
ARM64: mm: THP support.
ARM64: mm: Raise MAX_ORDER for 64KB pages and THP.
ARM64: mm: HugeTLB support.
ARM64: mm: Move PTE_PROT_NONE bit.
ARM64: mm: Make PAGE_NONE pages read only and no-execute.
ARM64: mm: Restore memblock limit when map_mem finished.
mm: thp: Correct the HPAGE_PMD_ORDER check.
x86: mm: Remove general hugetlb code from x86.
mm: hugetlb: Copy general hugetlb code from x86 to mm.
x86: mm: Remove x86 version of huge_pmd_share.
mm: hugetlb: Copy huge_pmd_share from x86 to mm.
arm64: KVM: document kernel object mappings in HYP
arm64: KVM: MAINTAINERS update
arm64: KVM: userspace API documentation
arm64: KVM: enable initialization of a 32bit vcpu
...
Pull ARM updates from Russell King:
"This contains the usual updates from other people (listed below) and
the usual random muddle of miscellaneous ARM updates which cover some
low priority bug fixes and performance improvements.
I've started to put the pull request wording into the merge commits,
which are:
- NoMMU stuff:
This includes the following series sent earlier to the list:
- nommu-fixes
- R7 Support
- MPU support
I've left out the ARCH_MULTIPLATFORM/!MMU stuff that Arnd and I
were discussing today until we've reached a conclusion/that's had
some more review.
This is rebased (and re-tested) on your devel-stable branch because
otherwise there were going to be conflicts with Uwe's V7M work now
that you've merged that. I've included the fix for limiting MPU to
CPU_V7.
- Huge page support
These changes bring both HugeTLB support and Transparent HugePage
(THP) support to ARM. Only long descriptors (LPAE) are supported
in this series.
The code has been tested on an Arndale board (Exynos 5250).
- LPAE updates
Please pull these miscellaneous LPAE fixes I've been collecting for
a while now for 3.11. They've been tested and reviewed by quite a
few people, and most of the patches are pretty trivial. -- Will Deacon.
- arch_timer cleanups
Please pull these arch_timer cleanups I've been holding onto for a
while. They're the same as my last posting, but have been rebased
to v3.10-rc3.
- mpidr linearisation (multiprocessor id register - identifies which
CPU number we are in the system)
This patch series implements MPIDR linearization through a
simple hashing algorithm and updates the current cpu_{suspend}/{resume}
code to use the newly created hash structures to retrieve context
pointers. It represents a stepping stone for the implementation of
power management code on forthcoming multi-cluster ARM systems.
It has been tested on TC2 (dual cluster A15xA7 system), iMX6q,
OMAP4 and Tegra, with processors hitting low-power states requiring
warm-boot resume through the cpu_resume code path"
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (77 commits)
ARM: 7775/1: mm: Remove do_sect_fault from LPAE code
ARM: 7777/1: Avoid extra calls to the C compiler
ARM: 7774/1: Fix dtb dependency to use order-only prerequisites
ARM: 7770/1: remove residual ARMv2 support from decompressor
ARM: 7769/1: Cortex-A15: fix erratum 798181 implementation
ARM: 7768/1: prevent risks of out-of-bound access in ASID allocator
ARM: 7767/1: let the ASID allocator handle suspended animation
ARM: 7766/1: versatile: don't mark pen as __INIT
ARM: 7765/1: perf: Record the user-mode PC in the call chain.
ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork
ARM: kernel: implement stack pointer save array through MPIDR hashing
ARM: kernel: build MPIDR hash function data structure
ARM: mpu: Ensure that MPU depends on CPU_V7
ARM: mpu: protect the vectors page with an MPU region
ARM: mpu: Allow enabling of the MPU via kconfig
ARM: 7758/1: introduce config HAS_BANDGAP
ARM: 7757/1: mm: don't flush icache in switch_mm with hardware broadcasting
ARM: 7751/1: zImage: don't overwrite ourself with a page table
ARM: 7749/1: spinlock: retry trylock operation if strex fails on free lock
ARM: 7748/1: oabi: handle faults when loading swi instruction from userspace
...
Pull voluntary preemption fixes from Ingo Molnar:
"This tree contains a speedup which is achieved through better
might_sleep()/might_fault() preemption point annotations for uaccess
functions, by Michael S Tsirkin:
1. The only reason uaccess routines might sleep is if they fault.
Make this explicit for all architectures.
2. A voluntary preemption point in uaccess functions means compiler
can't inline them efficiently, this breaks assumptions that they
are very fast and small that e.g. net code seems to make. Remove
this preemption point so behaviour matches with what callers
assume.
3. Accesses (e.g through socket ops) to kernel memory with KERNEL_DS
like net/sunrpc does will never sleep. Remove an unconditional
might_sleep() in the might_fault() inline in kernel.h (used when
PROVE_LOCKING is not set).
4. Accesses with pagefault_disable() return EFAULT but won't cause
caller to sleep. Check for that and thus avoid might_sleep() when
PROVE_LOCKING is set.
These changes offer a nice speedup for CONFIG_PREEMPT_VOLUNTARY=y
kernels, here's a network bandwidth measurement between a virtual
machine and the host:
before:
incoming: 7122.77 Mb/s
outgoing: 8480.37 Mb/s
after:
incoming: 8619.24 Mb/s [ +21.0% ]
outgoing: 9455.42 Mb/s [ +11.5% ]
I kept these changes in a separate tree, separate from scheduler
changes, because it's a mixed MM and scheduler topic"
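A simplified sketch of the resulting logic in might_fault() (point 3's
KERNEL_DS case omitted; the exact form is assumed):

  static inline void might_fault_sketch(void)
  {
          if (unlikely(in_atomic()))      /* pagefault_disable() held: */
                  return;                 /* access fails with -EFAULT, never sleeps */
          might_sleep();
  }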
* 'sched-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
mm, sched: Allow uaccess in atomic with pagefault_disable()
mm, sched: Drop voluntary schedule from might_fault()
x86: uaccess s/might_sleep/might_fault/
tile: uaccess s/might_sleep/might_fault/
powerpc: uaccess s/might_sleep/might_fault/
mn10300: uaccess s/might_sleep/might_fault/
microblaze: uaccess s/might_sleep/might_fault/
m32r: uaccess s/might_sleep/might_fault/
frv: uaccess s/might_sleep/might_fault/
arm64: uaccess s/might_sleep/might_fault/
asm-generic: uaccess s/might_sleep/might_fault/
* 'for-next/hugepages' of git://git.linaro.org/people/stevecapper/linux:
ARM64: mm: THP support.
ARM64: mm: Raise MAX_ORDER for 64KB pages and THP.
ARM64: mm: HugeTLB support.
ARM64: mm: Move PTE_PROT_NONE bit.
ARM64: mm: Make PAGE_NONE pages read only and no-execute.
ARM64: mm: Restore memblock limit when map_mem finished.
mm: thp: Correct the HPAGE_PMD_ORDER check.
x86: mm: Remove general hugetlb code from x86.
mm: hugetlb: Copy general hugetlb code from x86 to mm.
x86: mm: Remove x86 version of huge_pmd_share.
mm: hugetlb: Copy huge_pmd_share from x86 to mm.
Conflicts:
arch/arm64/Kconfig
arch/arm64/include/asm/pgtable-hwdef.h
arch/arm64/include/asm/pgtable.h
This patch adds defines for APM CPU implementer ID and APM CPU part numbers in asm/cputype.h
Signed-off-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bring Transparent HugePage support to ARM. The size of a
transparent huge page depends on the normal page size. A
transparent huge page is always represented as a pmd.
If PAGE_SIZE is 4KB, THPs are 2MB.
If PAGE_SIZE is 64KB, THPs are 512MB.
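The sizes fall out of the page-table geometry; a sketch of the
arithmetic (shift names illustrative): a pmd spans PAGE_SIZE/8 entries
of PAGE_SIZE each:

  #define PAGE_SHIFT_4K     12
  #define PMD_SIZE_4K       ((1UL << (PAGE_SHIFT_4K - 3)) << PAGE_SHIFT_4K)   /*   2MB */

  #define PAGE_SHIFT_64K    16
  #define PMD_SIZE_64K      ((1UL << (PAGE_SHIFT_64K - 3)) << PAGE_SHIFT_64K) /* 512MB */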
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Add huge page support to ARM64, different huge page sizes are
supported depending on the size of normal pages:
PAGE_SIZE is 4KB:
2MB - (pmds) these can be allocated at any time.
1024MB - (puds) usually allocated at boot time via the command line,
with something like: hugepagesz=1G hugepages=6
PAGE_SIZE is 64KB:
512MB - (pmds) usually allocated at boot time via the command line.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Under ARM64, PTEs can be broadly categorised as follows:
- Present and valid: Bit #0 is set. The PTE is valid and memory
access to the region may fault.
- Present and invalid: Bit #0 is clear and bit #1 is set.
Represents present memory with PROT_NONE protection. The PTE
is an invalid entry, and the user fault handler will raise a
SIGSEGV.
- Not present (file or swap): Bits #0 and #1 are clear.
Memory represented has been paged out. The PTE is an invalid
entry, and the fault handler will try and re-populate the
memory where necessary.
Huge PTEs are block descriptors that have bit #1 clear. If we wish
to represent PROT_NONE huge PTEs we then run into a problem as
there is no way to distinguish between regular and huge PTEs if we
set bit #1.
To resolve this ambiguity this patch moves PTE_PROT_NONE from
bit #1 to bit #2 and moves PTE_FILE from bit #2 to bit #3. The
number of swap/file bits is reduced by 1 as a consequence, leaving
60 bits for file and swap entries.
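A sketch of the resulting low-bit layout (values follow the text above;
macro names as conventionally used in arm64's pgtable headers):

  #define PTE_VALID       (1UL << 0)      /* hardware: entry is valid    */
  #define PTE_TABLE_BIT   (1UL << 1)      /* hardware: table vs block    */
  #define PTE_PROT_NONE   (1UL << 2)      /* software: moved from bit #1 */
  #define PTE_FILE        (1UL << 3)      /* software: moved from bit #2 */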
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
If we consider the following code sequence:
my_pte = pte_modify(entry, myprot);
x = pte_write(my_pte);
y = pte_exec(my_pte);
If myprot comes from a PROT_NONE page, then x and y will both be
true, which is undesirable behaviour.
This patch sets the no-execute and read-only bits for PAGE_NONE
such that the code above will return false for both x and y.
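A sketch of the resulting protection value (attribute names as in
arm64's pgtable.h; the exact composition is assumed):

  #define PAGE_NONE       __pgprot(((_PAGE_DEFAULT) & ~PTE_TYPE_MASK) | \
                                   PTE_PROT_NONE | PTE_RDONLY | PTE_PXN | PTE_UXN)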
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Wire the init of a 32bit vcpu by allowing 32bit modes in pstate,
and providing sensible defaults out of reset state.
This feature is of course conditional on the presence of 32bit
capability on the physical CPU, and is checked via the
KVM_CAP_ARM_EL1_32BIT capability.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Provide the necessary infrastructure to trap coprocessor accesses that
occur when running 32bit guests.
Also wire up SMC and HVC traps in 32bit mode while we're at it.
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
As conditional instructions can trap on AArch32, add the thinnest
possible emulation layer to keep 32bit guests happy.
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Allow access to the 32bit register file through the usual API.
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Define the 32bit specific registers (SPSRs, cp15...).
Most CPU registers are directly mapped to a 64bit register
(r0->x0...). Only the SPSRs have separate registers.
cp15 registers are also mapped into their 64bit counterpart in most
cases.
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Wire the PSCI backend into the exit handling code.
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>