Commit Graph

534218 Commits

Will Deacon
d8d23fa0f2 arm64: mdscr_el1: avoid exposing DCC to userspace
We don't want to expose the DCC to userspace, particularly as there is
a kernel console driver for it.

This patch resets mdscr_el1 to disable userspace access to the DCC
registers on the cold boot path.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-20 16:17:58 +01:00
Jeff Vander Stoep
bf0c4e0473 arm64: kconfig: Move LIST_POISON to a safe value
Move the poison pointer offset to 0xdead000000000000, a
recognized value that is not mappable by user-space exploits.
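
For context, a sketch of how the poison pointers are derived (assuming
the usual include/linux/poison.h definitions); bumping the arm64
Kconfig offset moves them out of the user-mappable range:

    #define POISON_POINTER_DELTA    _AC(CONFIG_ILLEGAL_POINTER_VALUE, UL)
    #define LIST_POISON1            ((void *) 0x100 + POISON_POINTER_DELTA)
    #define LIST_POISON2            ((void *) 0x200 + POISON_POINTER_DELTA)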

Cc: <stable@vger.kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Thierry Strudel <tstrudel@google.com>
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-19 17:10:21 +01:00
Jungseok Lee
9a5ad7d0e3 arm64: Add __exception_irq_entry definition for function graph
gic_handle_irq() is defined with the __exception_irq_entry attribute.
The only remaining work is to add the attribute's definition, as ARM
did. The traces below show how the function graph data changes with
these hunks.

A prologue of an interrupt handler is drawn as follows.

- current status

 0)   0.208 us    |  cpuidle_not_available();
 0)               |  default_idle_call() {
 0)               |    arch_cpu_idle() {
 0)               |      __handle_domain_irq() {
 0)               |        irq_enter() {
 0)   0.313 us    |          rcu_irq_enter();
 0)   0.261 us    |          __local_bh_disable_ip();

- with this change

 0)   0.625 us    |  cpuidle_not_available();
 0)               |  default_idle_call() {
 0)               |    arch_cpu_idle() {
 0)   ==========> |
 0)               |      gic_handle_irq() {
 0)               |        __handle_domain_irq() {
 0)               |          irq_enter() {
 0)   0.885 us    |            rcu_irq_enter();
 0)   0.781 us    |            __local_bh_disable_ip();

An epilogue of an interrupt handler is recorded as follows.

- current status

 0)   0.261 us    |          idle_cpu();
 0)               |          rcu_irq_exit() {
 0)   0.521 us    |            rcu_eqs_enter_common.isra.46();
 0)   2.552 us    |          }
 0) ! 322.448 us  |        }
 0) ! 583.437 us  |      }
 0) # 1656.041 us |    }
 0) # 1658.073 us |  }

- with this change

 0)   0.677 us    |            idle_cpu();
 0)               |            rcu_irq_exit() {
 0)   1.770 us    |              rcu_eqs_enter_common.isra.46();
 0)   7.968 us    |            }
 0) # 1803.541 us |          }
 0) # 2626.667 us |        }
 0) # 2632.969 us |      }
 0)   <========== |
 0) # 14425.00 us |    }
 0) # 14430.98 us |  }
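
A sketch of the kind of definition this adds, modelled on the arch/arm
version (assumed here, not quoted from the patch): with the function
graph tracer enabled, the annotation places the handler in the irq
entry section so the tracer can recognise it.

    #ifdef CONFIG_FUNCTION_GRAPH_TRACER
    #define __exception_irq_entry   __irq_entry
    #else
    #define __exception_irq_entry   __exception
    #endif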

Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Rabin Vincent <rabin@rab.in>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-12 17:23:45 +01:00
Will Deacon
d422e62562 Merge branch 'aarch64/psci/drivers' into aarch64/for-next/core
Move our PSCI implementation out into drivers/firmware/ where it can be
shared with arch/arm/.

Conflicts:
	arch/arm64/kernel/psci.c
2015-08-05 14:14:06 +01:00
Will Deacon
8ec4198743 arm64: mm: ensure patched kernel text is fetched from PoU
The arm64 booting document requires that the bootloader has cleaned the
kernel image to the PoC. However, when a CPU re-enters the kernel due to
either a CPU hotplug "on" event or resuming from a low-power state (e.g.
cpuidle), the kernel text may in fact be dirty at the PoU due to things
like alternative patching or even module loading.

Thanks to I-cache speculation with the MMU off, stale instructions could
be fetched prior to enabling the MMU, potentially leading to crashes
when executing regions of code that have been modified at runtime.

This patch addresses the issue by ensuring that the local I-cache is
invalidated immediately after a CPU has enabled its MMU but before
jumping out of the identity mapping. Any stale instructions fetched from
the PoC will then be discarded and refetched correctly from the PoU.
Patching kernel text executed prior to the MMU being enabled is
prohibited, so the early entry code will always be clean.
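
A minimal sketch of the sort of maintenance this implies (an
illustrative C wrapper with an assumed name, not the actual head.S
hunk):

    static inline void local_icache_inval_all(void)    /* hypothetical name */
    {
            asm volatile(
            "       ic      iallu\n"        /* invalidate the local I-cache        */
            "       dsb     nsh\n"          /* wait for completion on this CPU     */
            "       isb\n"                  /* resynchronise the instruction fetch */
            ::: "memory");
    }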

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-05 10:05:20 +01:00
Will Deacon
04b8637be9 arm64: alternatives: ensure secondary CPUs execute ISB after patching
In order to guarantee that the patched instruction stream is visible to
a CPU, that CPU must execute an isb instruction after any related cache
maintenance has completed.

The instruction patching routines in kernel/insn.c get this right for
things like jump labels and ftrace, but the alternatives patching omits
it entirely leaving secondary cores in a potential limbo between the old
and the new code.

This patch adds an isb following the secondary polling loop in the
alternatives patching.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-04 18:52:09 +01:00
Mark Rutland
7f08a414f2 arm64: make ll/sc __cmpxchg_case_##name asm consistent
The ll/sc __cmpxchg_case_##name assembly mostly uses symbolic names for
operands, but in a single case uses %2 to refer to what is otherwise
known as %[v]. This makes the code more painful to read than is
necessary.

Use %[v] instead.
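
For readers unfamiliar with the syntax, a small standalone example of
symbolic asm operand names versus positional %2-style references
(illustrative, not the kernel's cmpxchg asm):

    static inline unsigned long load_word(unsigned long *p)
    {
            unsigned long val;

            /* %[val] and %[v] name the operands; positional "%1" would
             * also refer to the memory operand, but reads worse in
             * longer sequences. */
            asm ("ldr       %[val], %[v]"
                 : [val] "=r" (val)
                 : [v] "Q" (*p));
            return val;
    }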

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-04 18:51:04 +01:00
Robin Murphy
97942c2862 arm64: dma-mapping: Simplify pgprot handling
Since __get_dma_pgprot() does The Right Thing(TM) in the non-coherent
case, and the non-cacheable alias for DMA buffers is private to the
kernel anyway, we can simplify things slightly and make the code more
readable by just using PAGE_KERNEL as the base pgprot.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-03 13:17:38 +01:00
Mark Rutland
514f161abc MAINTAINERS: add PSCI entry
Add myself and Lorenzo as maintainers of the PSCI client code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-03 12:35:42 +01:00
Mark Rutland
5211df00a4 drivers: psci: support native SMC{32,64} calls
A 32-bit OS cannot make calls with SMC64 IDs, while a 64-bit OS must
invoke some PSCI functions with SMC64 IDs.

This patch introduces and makes use of a new macro to choose the
appropriate IDs based on the register width of the OS, which will allow
32-bit callers to use the PSCI client code.
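
A sketch of what such a macro might look like, assuming the standard
PSCI_0_2_FN/PSCI_0_2_FN64 function ID definitions:

    #ifdef CONFIG_64BIT
    #define PSCI_FN_NATIVE(version, name)   PSCI_##version##_FN64_##name
    #else
    #define PSCI_FN_NATIVE(version, name)   PSCI_##version##_FN_##name
    #endif

so that, for example, PSCI_FN_NATIVE(0_2, CPU_SUSPEND) expands to the
SMC64 ID on a 64-bit OS and to the SMC32 ID on a 32-bit one.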

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-03 12:35:00 +01:00
Mark Rutland
bff60792f9 arm64: psci: factor invocation code to drivers
To enable sharing with arm, move the core PSCI framework code to
drivers/firmware. This results in a minor gain in lines of code, but
this will quickly be amortised by the removal of code currently
duplicated in arch/arm.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-08-03 12:33:39 +01:00
Sudeep Holla
b511a65928 arm64: restore cpu suspend/resume functionality
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
accidentally retained code for !CONFIG_SMP in the cpu_resume function. This
resulted in the hash index being zeroed in x7 after proper computation,
which is then used to get the cpu context pointer while resuming.

This patch removes the remnant code and restores the cpu suspend/
resume functionality.

Fixes: 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-31 17:40:52 +01:00
Lorenzo Pieralisi
72407514c9 ARM64: PCI: do not enable resources on PROBE_ONLY systems
On ARM64 PROBE_ONLY PCI systems resources are not currently claimed,
therefore they can't be enabled since they do not have a valid
parent pointer; this in turn prevents enabling PCI devices on
ARM64 PROBE_ONLY systems, causing PCI devices initialization to
fail.

To solve this issue, resources must be claimed when devices are
added on PROBE_ONLY systems, which ensures that the resource hierarchy
is validated and the resource tree is sane, but this requires changes
in the ARM64 resource management that can adversely affect existing
PCI set-ups (claiming resources on !PROBE_ONLY systems might break
existing ARM64 PCI platform implementations).

As a temporary solution in preparation for a proper resources claiming
implementation in ARM64 core, to enable PCI PROBE_ONLY systems on ARM64,
this patch adds a pcibios_enable_device() arch implementation that
simply prevents enabling resources on PROBE_ONLY systems (mirroring ARM
behaviour).

This is always a safe thing to do because on PROBE_ONLY systems the
configuration space set-up can be considered immutable; it is also done
in preparation for proper resource claiming, which would finally
validate the PCI resource tree in the ARM64 arch implementation on
PROBE_ONLY systems.

For !PROBE_ONLY systems resources enablement in pcibios_enable_device()
on ARM64 is implemented as in current PCI core, leaving the behaviour
unchanged.
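
A sketch of the arch hook this describes (mirroring the ARM behaviour;
not necessarily the verbatim hunk):

    int pcibios_enable_device(struct pci_dev *dev, int mask)
    {
            /* On PROBE_ONLY systems the firmware set-up is immutable,
             * so do not touch the (unclaimed) resources. */
            if (pci_has_flag(PCI_PROBE_ONLY))
                    return 0;

            return pci_enable_resources(dev, mask);
    }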

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-30 20:17:07 +01:00
Will Deacon
a14949e09a arm64: cmpxchg: truncate sub-word signed types before comparison
When performing a cmpxchg operation on a signed sub-word type (e.g. s8),
we need to ensure that the upper register bits of the "old" value used
for comparison are zeroed, otherwise we may erroneously fail the cmpxchg
which may even be interpreted as success by the caller (if the compiler
performs the truncation as part of its check). This has been observed
in mod_state, where negative values were causing problems with
this_cpu_cmpxchg.

This patch fixes the issue by explicitly casting 8-bit and 16-bit "old"
values using unsigned types in our cmpxchg wrappers. 32-bit types can be
left alone, since the underlying asm makes use of W registers in this
case.
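
A standalone C illustration of the failure mode (not the kernel code):
a sign-extended s8 "old" value never matches the zero-extended byte
read back from memory unless it is truncated first.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            int8_t old = -1;                /* expected value             */
            uint8_t mem = 0xff;             /* same bit pattern in memory */

            uint64_t reg_old = (uint64_t)(int64_t)old;  /* sign-extended: 0xff...ff */

            /* Full-register comparison spuriously fails... */
            printf("naive compare: %s\n", reg_old == mem ? "match" : "mismatch");

            /* ...but casting "old" through the unsigned type first works. */
            printf("truncated compare: %s\n",
                   (uint64_t)(uint8_t)old == mem ? "match" : "mismatch");
            return 0;
    }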

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-30 20:16:53 +01:00
Will Deacon
ef5e724b25 arm64: alternative: put secondary CPUs into polling loop during patch
When patching the kernel text with alternatives, we may end up patching
parts of the stop_machine state machine (e.g. atomic_dec_and_test in
ack_state) and consequently corrupt the instruction stream of any
secondary CPUs.

This patch passes the cpu_online_mask to stop_machine, forcing all of
the CPUs into our own callback which can place the secondary cores into
a dumb (but safe!) polling loop whilst the patching is carried out.
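
A rough sketch of the shape of this change (function and helper names
here are illustrative, not lifted from the patch):

    static int alternatives_patch_cb(void *data)        /* hypothetical */
    {
            static atomic_t patched = ATOMIC_INIT(0);

            if (smp_processor_id()) {
                    /* Secondary CPUs: spin harmlessly until CPU 0 is done. */
                    while (!atomic_read(&patched))
                            cpu_relax();
                    isb();
            } else {
                    do_the_actual_patching(data);       /* hypothetical helper */
                    atomic_set(&patched, 1);
            }
            return 0;
    }

            /* caller: force every online CPU into the callback */
            stop_machine(alternatives_patch_cb, NULL, cpu_online_mask);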

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-30 19:07:28 +01:00
Ard Biesheuvel
6c020ea8dc arm64/Documentation: clarify wording regarding memory below the Image
Clarify that the memory below the start of the image but inside the
region covered by the linear mapping has no special significance to
the kernel, and may be used by the firmware provided that it is marked
as reserved.

Also, fix up some whitespace errors.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29 18:32:10 +01:00
Will Deacon
484c96dbb2 arm64: lse: fix lse cmpxchg code indentation
For some reason, the ll/sc cmpxchg asm is all off to the left and
awkward to read in conjunction with the following (correctly indented)
LSE version.

This patch shifts the ll/sc code back to where it should be.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29 18:32:09 +01:00
Jonas Rabenstein
63a581865e arm64: remove redundant object file list
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant
#ifdefs") forces SMP on arm64. To build the necessary objects for SMP,
they were added to the arm64-obj-y rule in arch/arm64/kernel/Makefile,
without removing the arm64-obj-$(CONFIG_SMP) rule.

Remove redundant object file list depending on always-yes CONFIG_SMP in
arch/arm64/kernel/Makefile.

Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29 18:32:09 +01:00
Jonas Rabenstein
377bcff9a3 arm64: remove dead-code depending on CONFIG_UP_LATE_INIT
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant
#ifdefs") forces SMP on arm64, so CONFIG_UP_LATE_INIT can not be
selected anymore.

Remove dead #ifdef-block depending on UP_LATE_INIT in
arch/arm64/kernel/setup.c

Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
[will: kill do_post_cpus_up_work altogether]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29 18:32:09 +01:00
Will Deacon
766ffb6980 arm64: pgtable: fix definition of pte_valid
pte_valid should check if the PTE_VALID bit (1 << 0) is set in the pte,
so fix the macro definition to use bitwise & instead of logical &&.
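
The before/after shape of the macro, for illustration (the real
definition lives in arch/arm64/include/asm/pgtable.h):

    #define pte_valid_broken(pte)   (!!(pte_val(pte) && PTE_VALID)) /* true for any non-zero pte */
    #define pte_valid_fixed(pte)    (!!(pte_val(pte) &  PTE_VALID)) /* tests bit 0, as intended  */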

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 16:14:03 +01:00
Will Deacon
c1d7cd228b arm64: spinlock: fix ll/sc unlock on big-endian systems
When unlocking a spinlock, we perform a read-modify-write on the owner
ticket in order to increment it and store it back with release
semantics.

In the LL/SC case, we load the 16-bit ticket using a 32-bit load and
therefore store back the wrong halfword on a big-endian system,
corrupting the lock after the first unlock and killing the system dead.

This patch fixes the unlock code to use 16-bit accessors consistently.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 14:48:00 +01:00
Catalin Marinas
4150e50bf5 arm64: Use last level TLBI for user pte changes
The flush_tlb_page() function is used on user address ranges when PTEs
(or PMDs/PUDs for huge pages) were changed (attributes or clearing). For
such cases, it is more efficient to invalidate only the last level of
the TLB with the "tlbi vale1is" instruction.

In the TLB shoot-down case, the TLB caching of the intermediate page
table levels (pmd, pud, pgd) is handled by __flush_tlb_pgtable() via the
__(pte|pmd|pud)_free_tlb() functions and it is not deferred to
tlb_finish_mmu() (as of commit 285994a62c - "arm64: Invalidate the TLB
corresponding to intermediate page table levels"). The tlb_flush()
function only needs to invalidate the TLB for the last level of page
tables; the __flush_tlb_range() function gains a fourth argument for
last level TLBI.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 11:44:01 +01:00
Catalin Marinas
da4e73303e arm64: Clean up __flush_tlb(_kernel)_range functions
This patch moves the MAX_TLB_RANGE check into the
flush_tlb(_kernel)_range functions directly to avoid the
underscore-prefixed definitions (and for consistency with a subsequent
patch).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 11:43:15 +01:00
Mark Rutland
c53e0baa6f arm64: mm: mark create_mapping as __init
Currently create_mapping is marked with __ref, apparently because it
refers to early_alloc. However, create_mapping has no logic to prevent
erroneous use of early_alloc after it has been freed, and is only ever
called by __init functions anyway. Thus the __ref marker is misleading
and unnecessary.

Instead, this patch marks create_mapping as __init, resulting in
warnings if it is used from a non-__init function, and allowing its
memory to be reclaimed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 11:36:09 +01:00
Will Deacon
6f883d10a1 arm64: debug: rename enum debug_el to avoid symbol collision
lib/list_sort.c defines a 'struct debug_el', where "el" is assumedly a
contraction of "element". This conflicts with 'enum debug_el' in our
asm/debug-monitors.h header file, where "el" stands for Exception Level.

The result is a build failure when targeting allmodconfig, so rename our
enum to 'dbg_active_el' to be slightly more explicit about what it is.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 18:36:54 +01:00
Wang Long
662ba3dbce arm64: mm: add __init section marker to free_initrd_mem
Since free_initrd_mem() is not needed after booting, this patch moves
it to the __init section.

This patch also makes keep_initrd __initdata, to reduce kernel
size.

Signed-off-by: Wang Long <long.wanglong@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 18:29:18 +01:00
Will Deacon
309585b0b9 arm64: elf: use cpuid_feature_extract_field for hwcap detection
cpuid_feature_extract_field takes care of the fiddly ID register
field sign-extension, so use that instead of rolling our own version.
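
Roughly what the helper does, for reference (a sketch assuming the
usual 4-bit ID register fields): shift the field up to the top of a
signed 64-bit value and arithmetic-shift it back down, so a field
value of 0xf correctly comes out as -1.

    static inline int cpuid_feature_extract_field(u64 features, int field)
    {
            return (s64)(features << (64 - 4 - field)) >> 60;
    }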

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 16:56:17 +01:00
Will Deacon
2e94da1379 arm64: lse: use generic cpufeature detection for LSE atomics
Rework the cpufeature detection to support ISAR0 and use that for
detecting the presence of LSE atomics.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 16:37:14 +01:00
Will Deacon
0e4a07092f arm64: kconfig: group the v8.1 features together
ARMv8 CPUs do not support any of the v8.1 features, so group them
together in Kconfig to make it clear that they're part of 8.1 and not
relevant to older cores.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:54:13 +01:00
Will Deacon
c739dc83a0 arm64: lse: rename ARM64_CPU_FEAT_LSE_ATOMICS for consistency
Other CPU features follow an 'ARM64_HAS_*' naming scheme, so do the same
for the LSE atomics.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:54 +01:00
Will Deacon
95eff6b27c arm64: kconfig: select HAVE_CMPXCHG_LOCAL
We implement an optimised cmpxchg_local macro, so let the kernel know.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:54 +01:00
Will Deacon
db26217e6f arm64: atomic64_dec_if_positive: fix incorrect branch condition
If we attempt to atomic64_dec_if_positive on INT_MIN, we will underflow
and incorrectly decide that the original parameter was positive.

This patch fixes the broken condition code so that we handle this
corner case correctly.
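
A plain-C illustration of the corner case (not the kernel asm): for the
most negative 64-bit value, "v - 1" wraps around to a large positive
number, so a check based only on the sign of the result concludes the
original value was positive.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t v = (uint64_t)INT64_MIN;       /* 0x8000000000000000 */
            uint64_t res = v - 1;                   /* 0x7fffffffffffffff */

            /* The sign bit of the result is clear, i.e. it "looks" positive. */
            printf("looks positive after decrement: %d\n", (int)((res >> 63) == 0));
            return 0;
    }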

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:54 +01:00
Will Deacon
6059a7b6e8 arm64: atomics: implement atomic{,64}_cmpxchg using cmpxchg
We don't need duplicate cmpxchg implementations, so use cmpxchg to
implement atomic{,64}_cmpxchg, like we do for xchg already.
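
The resulting definitions are essentially one-liners on top of cmpxchg
(sketch):

    #define atomic_cmpxchg(v, old, new)     cmpxchg(&((v)->counter), (old), (new))
    #define atomic64_cmpxchg(v, old, new)   cmpxchg(&((v)->counter), (old), (new))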

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:53 +01:00
Will Deacon
0ea366f5e1 arm64: atomics: prefetch the destination word for write prior to stxr
The cost of changing a cacheline from shared to exclusive state can be
significant, especially when this is triggered by an exclusive store,
since it may result in having to retry the transaction.

This patch makes use of prfm to prefetch cachelines for write prior to
ldxr/stxr loops when using the ll/sc atomic routines.
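
A hedged sketch of the pattern (an illustrative ll/sc add, not a
specific hunk from the patch): issue a prfm for write before the
ldxr/stxr loop so the line is already exclusive when the store hits it.

    static inline void ll_sc_add(int i, int *counter)
    {
            unsigned long tmp;
            int result;

            asm volatile(
            "       prfm    pstl1strm, %2\n"        /* prefetch line for store      */
            "1:     ldxr    %w0, %2\n"
            "       add     %w0, %w0, %w3\n"
            "       stxr    %w1, %w0, %2\n"
            "       cbnz    %w1, 1b"                /* retry if we lost exclusivity */
            : "=&r" (result), "=&r" (tmp), "+Q" (*counter)
            : "Ir" (i));
    }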

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:53 +01:00
Will Deacon
a82e62382f arm64: atomics: tidy up common atomic{,64}_* macros
The common (i.e. identical for ll/sc and lse) atomic macros in atomic.h
are needlessly different for atomic_t and atomic64_t.

This patch tidies up the definitions to make them consistent across the
two atomic types and factors out common code such as the add_unless
implementation based on cmpxchg.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:53 +01:00
Will Deacon
4e39715f4b arm64: cmpxchg: avoid memory barrier on comparison failure
cmpxchg doesn't require memory barrier semantics when the value
comparison fails, so make the barrier conditional on success.
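
A hedged sketch of the resulting structure (together with the
"cc"-clobber-free comparison described in the entry below); labels and
constraints are illustrative, not the exact kernel asm:

    static inline u32 cmpxchg32_sketch(u32 *ptr, u32 old, u32 new)
    {
            u32 oldval;
            unsigned long tmp;

            asm volatile(
            "1:     ldxr    %w[oldval], %[v]\n"
            "       eor     %w[tmp], %w[oldval], %w[old]\n"
            "       cbnz    %w[tmp], 2f\n"          /* mismatch: skip store and barrier */
            "       stlxr   %w[tmp], %w[new], %[v]\n"
            "       cbnz    %w[tmp], 1b\n"          /* lost exclusivity: retry          */
            "       dmb     ish\n"                  /* barrier reached only on success  */
            "2:"
            : [oldval] "=&r" (oldval), [tmp] "=&r" (tmp), [v] "+Q" (*ptr)
            : [old] "r" (old), [new] "r" (new)
            : "memory");

            return oldval;
    }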

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:52 +01:00
Will Deacon
0bc671d3f4 arm64: cmpxchg: avoid "cc" clobber in ll/sc routines
We can perform the cmpxchg comparison using eor and cbnz which avoids
the "cc" clobber for the ll/sc case and consequently for the LSE case
where we may have to fall-back on the ll/sc code at runtime.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:52 +01:00
Will Deacon
e9a4b79565 arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our cmpxchg_double primitives
so that the LSE casp instruction is used instead.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:52 +01:00
Will Deacon
c342f78217 arm64: cmpxchg: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our cmpxchg primitives so that
the LSE cas instruction is used instead.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon
c8366ba0fb arm64: xchg: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our xchg primitives so that
the LSE swp instruction (yes, you read right!) is used instead.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon
084f903727 arm64: bitops: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our bitops functions so that
LSE atomic instructions are used instead.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon
81bb5c6420 arm64: locks: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our locking functions so that
LSE atomic instructions are used for spinlocks and rwlocks.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon
c09d6a04d1 arm64: atomics: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of atomic_t and atomic64_t
routines so that the call-site for the out-of-line ll/sc sequences is
patched with an LSE atomic instruction when we detect that
the CPU supports it.

If binutils is not recent enough to assemble the LSE instructions, then
the ll/sc sequences are inlined as though CONFIG_ARM64_LSE_ATOMICS=n.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:50 +01:00
Will Deacon
c0385b24af arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomics
In order to patch in the new atomic instructions at runtime, we need to
generate wrappers around the out-of-line exclusive load/store atomics.

This patch adds a new Kconfig option, CONFIG_ARM64_LSE_ATOMICS, which
causes our atomic functions to branch to the out-of-line ll/sc
implementations. To avoid the register spill overhead of the PCS, the
out-of-line functions are compiled with specific compiler flags to
force out-of-line save/restore of any registers that are usually
caller-saved.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:50 +01:00
Will Deacon
d964b7229e arm64: alternatives: add cpu feature for lse atomics
Add a CPU feature for the LSE atomic instructions, so that they can be
patched in at runtime when we detect that they are supported.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:34:39 +01:00
Will Deacon
40a1db2434 arm64: elf: advertise 8.1 atomic instructions as new hwcap
The ARM v8.1 architecture introduces new atomic instructions to the A64
instruction set for things like cmpxchg, so advertise their availability
to userspace using a hwcap.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:34:39 +01:00
Will Deacon
c275f76bb4 arm64: atomics: move ll/sc atomics into separate header file
In preparation for the Large System Extension (LSE) atomic instructions
introduced by ARM v8.1, move the current exclusive load/store (LL/SC)
atomics into their own header file.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:34:39 +01:00
Will Deacon
144e9697a9 arm64: cpufeature.h: add missing #include of kernel.h
cpufeature.h makes use of DECLARE_BITMAP, which in turn relies on the
BITS_TO_LONGS and DIV_ROUND_UP macros.

This patch includes kernel.h in cpufeature.h to prevent all users having
to do the same thing.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:26:34 +01:00
Will Deacon
9511ca19da arm64: rwlocks: don't fail trylock purely due to contention
STXR can fail for a number of reasons, so don't fail an rwlock trylock
operation simply because the STXR reported failure.

I'm not aware of any issues with the current code, but this makes it
consistent with spin_trylock and also other architectures (e.g. arch/arm).

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:26:34 +01:00
Will Deacon
fc9eb93cd4 Merge branch 'locking/arch-atomic' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into aarch64/for-next/core
Merge in PeterZ's logical atomic ops so that we can implement them in
our subsequent LSE atomics.
2015-07-27 14:21:15 +01:00