On CPUs with virtualization extensions the kernel installs the HYP mode
configuration on both primary and secondary CPUs upon cold boot.
On platforms where CPUs are shut down in idle paths (i.e. CPU core gating),
a CPU resuming from a low-power state currently does not execute the code
that reinstalls the HYP configuration, which means that the kernel cannot
run e.g. KVM properly on such machines.
This patch, mirroring the cold-boot behaviour, executes position-independent
code that reinstalls the HYP configuration and drops to SVC mode safely on
warm boot, so that deep idle states can be enabled in kernels running as
hosts on platforms with power management HW.
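A minimal sketch of the idea (the label below is hypothetical; the real
patch wires this into the cpu_resume path, and __hyp_stub_install and
safe_svcmode_maskall are the existing helpers):

	@ Hypothetical warm-boot hook: runs with the MMU off, so it must
	@ be position independent (PC-relative addressing only).
resume_reinstall_hyp:
	bl	__hyp_stub_install	@ re-register the HYP stub vectors,
					@ as the cold-boot path does
	safe_svcmode_maskall r4		@ drop to SVC with A/I/F masked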
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The patch adds asm macros for inc_preempt_count and dec_preempt_count_ti
(the latter also fetches the current thread_info) instead of open-coding
them in the arch/arm/vfp/*.S files.
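For reference, the macros look roughly like this (sketch; TI_PREEMPT and
get_thread_info are the existing helpers):

	.macro	inc_preempt_count, ti, tmp
	ldr	\tmp, [\ti, #TI_PREEMPT]	@ get preempt count
	add	\tmp, \tmp, #1			@ increment it
	str	\tmp, [\ti, #TI_PREEMPT]
	.endm

	.macro	dec_preempt_count, ti, tmp
	ldr	\tmp, [\ti, #TI_PREEMPT]	@ get preempt count
	sub	\tmp, \tmp, #1			@ decrement it
	str	\tmp, [\ti, #TI_PREEMPT]
	.endm

	.macro	dec_preempt_count_ti, ti, tmp
	get_thread_info \ti			@ \ti := current thread_info
	dec_preempt_count \ti, \tmp
	.endm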
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
asm/assembler.h is a better place for this macro since it is used by
asm files outside arch/arm/kernel/
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rename the logical shift macros 'push' and 'pull', defined in
arch/arm/include/asm/assembler.h, to 'lspush' and 'lspull'.
This eliminates the name conflict between the 'push' logical shift
macro and the 'push' instruction mnemonic, and allows assembler.h to
be included in .S files that use the 'push' instruction.
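The renamed macros are otherwise unchanged, i.e. (sketch of the
endian-dependent definitions):

/* Endian-independent macros for shifting bytes within registers. */
#ifndef __ARMEB__
#define lspull	lsr	/* was 'pull' */
#define lspush	lsl	/* was 'push' */
#else
#define lspull	lsl
#define lspush	lsr
#endif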
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add an ARM_BE8() helper to wrap any code that should only be compiled
in when CONFIG_ARM_ENDIAN_BE8 is selected, and convert the existing
open-coded conditionals to use it.
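The helper reduces to a one-line conditional (sketch, using the config
symbol named above):

#ifdef CONFIG_ARM_ENDIAN_BE8
#define ARM_BE8(code...)	code
#else
#define ARM_BE8(code...)
#endif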
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
On ARMv7, the memory barrier instructions take an optional `option'
field which can be used to constrain the effects of a memory barrier
based on shareability and access type.
This patch allows the caller to pass these options if required, and
updates the smp_*() barriers to request inner-shareable barriers,
affecting only stores for the _wmb variant. wmb() is also changed to
use the -st version of dsb.
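The resulting definitions look roughly like this (sketch of the ARMv7
case):

#define dmb(option)	__asm__ __volatile__ ("dmb " #option : : : "memory")
#define dsb(option)	__asm__ __volatile__ ("dsb " #option : : : "memory")

#define wmb()		dsb(st)		/* full system, stores only */
#define smp_mb()	dmb(ish)	/* inner shareable */
#define smp_rmb()	smp_mb()
#define smp_wmb()	dmb(ishst)	/* inner shareable, stores only */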
Reported-by: Albin Tonnerre <albin.tonnerre@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds the base support for the ARMv7-M
architecture. It consists of the corresponding arch/arm/mm/ files and
various #ifdef's around the kernel. Exception handling is implemented by
a subsequent patch.
[ukleinek: squash in some changes originating from commit
b5717ba (Cortex-M3: Add support for the Microcontroller Prototyping System)
from the v2.6.33-arm1 patch stack, port to post 3.6, drop zImage
support, drop reorganisation of pt_regs, assert CONFIG_CPU_V7M doesn't
leak into installed headers and a few cosmetic changes]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Jonathan Austin <jonathan.austin@arm.com>
Tested-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
The safe_svcmode_maskall macro is used to ensure that we are running in
svc mode, causing an exception return from hyp mode if required.
This patch removes the unneeded lr clobber from the macro and operates
entirely on the temporary parameter register instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[will: updated comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
The kernel can only be entered in HYP mode on CPUs which actually
support it, i.e. >= ARMv7. Pre-v6 platform support cannot coexist
in the same kernel as support for v7 and higher, so there is no
advantage in having the HYP mode check on pre-v6 hardware.
At least one pre-v6 board is known to fail when the HYP mode check
code is present, although the exact cause remains unknown and may
be unrelated. [1]
This patch restores the old behaviour for pre-v6 platforms, whereby
the CPSR is forced directly to SVC mode with IRQs and FIQs masked.
All kernels capable of booting on v7 hardware will retain the
check, so this should not impair functionality.
[1] http://lists.arm.linux.org.uk/lurker/message/20121130.013814.19218413.en.html
([ARM] head.S change broke platform device registration?)
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It appears that performing a "movs pc, lr" to force the kernel into
SVC mode on the OMAP2420 (ARM1136) prevents the platform from booting
correctly (change introduced in 80c59da [ARM: virt: allow the kernel
to be entered in HYP mode]).
While the reason it fails is not understood yet (the same code runs
fine on the OMAP2430, which is also an ARM1136), partially revert that
change for platforms that do not enter the kernel in HYP mode, preserving
the new
feature and restoring a working kernel on the OMAP2420.
Reported-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch does two things:
* Ensure that asynchronous aborts are masked at kernel entry.
The bootloader should be masking these anyway, but this reduces
the damage window just in case it doesn't.
* Enter svc mode via exception return to ensure that CPU state is
properly serialised. This does not matter when switching from
an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C
parlance), but it potentially does matter when switching from
another privileged mode such as hyp mode.
This should allow the kernel to boot safely either from svc mode or
hyp mode, even if no support for use of the ARM Virtualization
Extensions is built into the kernel.
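In the PL1-to-PL1 case the exception return boils down to something like
the following sketch (constants from asm/ptrace.h; the HYP-to-SVC case
uses msr ELR_hyp and eret instead of movs pc, lr):

	mrs	r9, cpsr		@ current PSR
	bic	r9, r9, #MODE_MASK
	orr	r9, r9, #PSR_I_BIT | PSR_F_BIT | SVC_MODE
	orr	r9, r9, #PSR_A_BIT	@ mask asynchronous aborts too
	msr	spsr_cxsf, r9		@ target state: SVC, A/I/F masked
	adr	lr, 1f			@ resume point after the return
	movs	pc, lr			@ exception return serialises CPU state
1: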
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The {get,put}_user macros don't perform range checking on the provided
__user address when !CPU_HAS_DOMAINS.
This patch reworks the out-of-line assembly accessors to check the user
address against a specified limit, returning -EFAULT if it is out of
range.
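The added check looks roughly like this (sketch; the carry trick rejects
any access whose last byte is at or above the limit):

	.macro	check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req
#ifndef CONFIG_CPU_USE_DOMAINS
	adds	\tmp, \addr, #\size - 1	@ last byte of the access
	sbcccs	\tmp, \tmp, \limit	@ compare with carry against limit
	bcs	\bad			@ out of range: return -EFAULT
#endif
	.endm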
[will: changed get_user register allocation to match put_user]
[rmk: fixed building on older ARM architectures]
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Merge tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull "ARM: cleanups of io includes" from Olof Johansson:
"Rob Herring has done a sweeping change cleaning up all of the
mach/io.h includes, moving some of the oft-repeated macros to a common
location and removing a bunch of boilerplate. This is another step
closer to a common zImage for multiple platforms."
Fix up various fairly trivial conflicts (<mach/io.h> removal vs changes
around it, tegra localtimer.o is *still* gone, yadda-yadda).
* tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (29 commits)
ARM: tegra: Include assembler.h in sleep.S to fix build break
ARM: pxa: use common IOMEM definition
ARM: dma-mapping: convert ARCH_HAS_DMA_SET_COHERENT_MASK to kconfig symbol
ARM: __io abuse cleanup
ARM: create a common IOMEM definition
ARM: iop13xx: fix missing declaration of iop13xx_init_early
ARM: fix ioremap/iounmap for !CONFIG_MMU
ARM: kill off __mem_pci
ARM: remove bunch of now unused mach/io.h files
ARM: make mach/io.h include optional
ARM: clps711x: remove unneeded include of mach/io.h
ARM: dove: add explicit include of dove.h to addr-map.c
ARM: at91: add explicit include of hardware.h to uncompressor
ARM: ep93xx: clean-up mach/io.h
ARM: tegra: clean-up mach/io.h
ARM: orion5x: clean-up mach/io.h
ARM: davinci: remove unneeded mach/io.h include
[media] davinci: remove includes of mach/io.h
ARM: OMAP: Remove remaining includes for mach/io.h
ARM: msm: clean-up mach/io.h
...
Several platforms create IOMEM defines for casting to 'void __iomem *',
and other platforms are incorrectly using __io() macro for the same
purpose. This creates a common definition and removes all the platform
specific versions. Rather than try to make linux/io.h and asm/io.h
assembly safe, the assembly version of IOMEM is moved into
asm/assembler.h.
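The common definitions amount to (sketch):

/* asm/io.h: C version */
#define IOMEM(x)	((void __force __iomem *)(x))

/* asm/assembler.h: assembly version, where the cast is meaningless */
#define IOMEM(x)	(x)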
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Kevin Hilman <khilman@ti.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Cc: Ryan Mallon <rmallon@gmail.com>
Cc: Eric Miao <eric.y.miao@gmail.com>
Cc: Haojian Zhuang <haojian.zhuang@marvell.com>
Acked-by: David Brown <davidb@codeaurora.org>
Cc: Daniel Walker <dwalker@fifo99.com>
Cc: Bryan Huntsman <bryanh@codeaurora.org>
Cc: Sascha Hauer <kernel@pengutronix.de>
Cc: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Paul Walmsley <paul@pwsan.com>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Cc: Rajeev Kumar <rajeev-dlh.kumar@st.com>
Cc: Colin Cross <ccross@android.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Stephen Warren <swarren@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Bootup with lockdep enabled has been broken on v7 since b46c0f7465
("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR").
This is because v7_setup (which is called very early during boot) calls
v7_flush_dcache_all, and the save_and_disable_irqs added by that patch
ends up attempting to call into lockdep C code (trace_hardirqs_off())
when we are in no position to execute it (no stack, MMU off).
Fix this by using a notrace variant of save_and_disable_irqs. The code
already uses the notrace variant of restore_irqs.
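The notrace variant simply skips the lockdep hook (sketch, following the
existing macro naming):

	.macro	save_and_disable_irqs_notrace, oldcpsr
	mrs	\oldcpsr, cpsr
	disable_irq_notrace		@ mask IRQs without calling
					@ trace_hardirqs_off()
	.endm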
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This macro is used to generate unprivileged accesses (LDRT/STRT) to user
space.
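A sketch of the macro: it selects the 't' (unprivileged) variant only
when the kernel actually uses domains:

#ifndef __ASSEMBLY__
#ifdef CONFIG_CPU_USE_DOMAINS
#define TUSER(instr)	#instr "t"	/* e.g. "ldrt" */
#else
#define TUSER(instr)	#instr		/* plain "ldr" */
#endif
#else
#ifdef CONFIG_CPU_USE_DOMAINS
#define TUSER(instr)	instr ## t
#else
#define TUSER(instr)	instr
#endif
#endif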
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Before we enable the MMU, we must ensure that the TTBR registers contain
sane values. After the MMU has been enabled, we jump to the *virtual*
address of the following function, so we also need to ensure that the
SCTLR write has taken effect.
This patch adds ISB instructions around the SCTLR write to ensure the
visibility of the above.
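Schematically (sketch; instr_sync expands to isb on ARMv7):

	instr_sync			@ TTBR writes visible before enable
	mcr	p15, 0, r0, c1, c0, 0	@ write SCTLR: MMU on
	instr_sync			@ SCTLR write complete before we
					@ branch to the virtual address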
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Declaring strings in assembler source involves a certain amount of
tedious boilerplate code in order to annotate the resulting symbol
correctly.
Encapsulating this boilerplate in a macro should help to avoid some
duplication and the occasional mistake.
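The macro wraps the usual symbol annotations (sketch, with a
hypothetical usage example):

	.macro	string name:req, string
	.type	\name , #object
\name:
	.asciz	"\string"
	.size	\name , . - \name
	.endm

	@ usage:
	string	panic_msg, "kernel panic"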
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
The assembly code in entry-macro-multi.S does not build without
including asm/assembler.h when CONFIG_SMP=y.
Fixes the rather theoretical SMP build of mach-shmobile/entry-intc.c:
arch/arm/include/asm/entry-macro-multi.S: Assembler messages:
arch/arm/include/asm/entry-macro-multi.S:20: Error: bad instruction `alt_smp(test_for_ipi r0,r6,r5,lr)'
arch/arm/include/asm/entry-macro-multi.S:20: Error: bad instruction `alt_up_b(9997f)'
make[1]: *** [arch/arm/mach-shmobile/entry-intc.o] Error 1
make: *** [arch/arm/mach-shmobile] Error 2
make: *** Waiting for unfinished jobs....
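The fix is presumably just pulling in the header that defines the
ALT_SMP()/ALT_UP_B() assembler macros before they are used:

#include <asm/assembler.h>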
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* __fixup_smp_on_up has been modified with support for the
THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split
into halfwords in case of misalignment, since we can't rely on
unaligned accesses working before turning the MMU on.
No attempt is made to optimise the aligned case, since the
number of fixups is typically small, and it seems best to keep
the code as simple as possible.
* Add a rotate in the fixup_smp code in order to support
CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
* Add an assembly-time sanity-check to ALT_UP() to ensure that
the content really is the right size (4 bytes); a sketch of the
patched macros appears after the test notes below.
(No check is done for ALT_SMP(). Possibly, this could be fixed
by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus
ALT_SMP...SMP_UP_B) into two macros. In the first case,
ALT_SMP needs to expand to >= 4 bytes, not == 4.)
* smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due
to macro limitations) has not been modified: the affected
instruction (mov) has no 16-bit encoding, so the correct
instruction size is satisfied in this case.
* A "mode" parameter has been added to smp_dmb:
smp_dmb arm @ assumes 4-byte instructions (for ARM code, e.g. kuser)
smp_dmb @ uses W() to ensure 4-byte instructions for ALT_SMP()
This avoids assembly failures due to use of W() inside smp_dmb,
when assembling pure-ARM code in the vectors page.
There might be a better way to achieve this.
* Kconfig: make SMP_ON_UP depend on
(!THUMB2_KERNEL || !BIG_ENDIAN) i.e., THUMB2_KERNEL is now
supported, but only if !BIG_ENDIAN (The fixup code for Thumb-2
currently assumes little-endian order.)
Tested using a single generic realview kernel on:
ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
ARM RealView PBX-A9 (SMP)
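A sketch of the patched macros, including the new size check on ALT_UP()
(close to the actual definitions in asm/assembler.h):

#define ALT_SMP(instr...)					\
9998:	instr

#define ALT_UP(instr...)					\
	.pushsection ".alt.smp.init", "a"			;\
	.long	9998b						;\
9997:	instr							;\
	.if . - 9997b != 4					;\
	.error "ALT_UP() content must assemble to exactly 4 bytes";\
	.endif							;\
	.popsection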
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 8b592783 added a Thumb-2 variant of usracc which, when it is
called with \rept=2, calls usraccoff once with an offset of 0 and a
second time with a hard-coded offset of 4 in order to avoid incrementing
the pointer again. If \inc != 4 then we will store the data to the wrong
offset from \ptr. Luckily, the only caller that passes \rept=2 to this
function is __clear_user, so we haven't been actively corrupting user data.
This patch fixes usracc to pass \inc instead of #4 to usraccoff
when it is called a second time.
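Schematically, the fix in the Thumb-2 usracc path is:

	.if	\rept == 2
	usraccoff \instr, \reg, \ptr, \inc, 0, \cond, \abort
-	usraccoff \instr, \reg, \ptr, \inc, 4, \cond, \abort
+	usraccoff \instr, \reg, \ptr, \inc, \inc, \cond, \abort
	.endif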
Cc: <stable@kernel.org>
Reported-by: Tony Thompson <tony.thompson@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the domain switching functionality via the set_fs and
__switch_to functions on cores that have a TLS register.
Currently, the ioremap and vmalloc areas share the same level 1 page
tables and therefore have the same domain (DOMAIN_KERNEL). When the
kernel domain is modified from Client to Manager (via the __set_fs or in
the __switch_to function), the XN (eXecute Never) bit is overridden and
newer CPUs can speculatively prefetch the ioremap'ed memory.
Linux performs the kernel domain switching to allow user-specific
functions (copy_to/from_user, get/put_user etc.) to access kernel
memory. In order for these functions to work with the kernel domain set
to Client, the patch modifies the LDRT/STRT and related instructions to
the LDR/STR ones.
The user pages access rights are also modified for kernel read-only
access rather than read/write so that the copy-on-write mechanism still
works. CPU_USE_DOMAINS gets disabled only if the hardware has a TLS
register (CPU_32v6K is defined), since without one the TLS value must be
written to the high vectors page, which would not be possible with the
domain switching removed.
The user addresses passed to the kernel are checked by the access_ok()
function so that they do not point to the kernel space.
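For reference, the ARM access_ok() check is a carry-trick comparison
against the thread's addr_limit (sketch of the underlying __range_ok()):

#define __range_ok(addr, size) ({ \
	unsigned long flag, roksum; \
	__asm__("adds %1, %2, %3; sbcccs %1, %1, %0; movcc %0, #0" \
		: "=&r" (flag), "=&r" (roksum) \
		: "r" (addr), "Ir" (size), \
		  "0" (current_thread_info()->addr_limit) \
		: "cc"); \
	flag; })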
Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
Cc: Tony Lindgren <tony@atomide.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
UP systems do not implement all the instructions that SMP systems have,
so in order to boot an SMP kernel on a UP system, we need to rewrite
parts of the kernel.
Do this using an 'alternatives' scheme, where the kernel code and data
are modified prior to initialization to replace the SMP instructions,
thereby rendering the problematic code ineffectual. We use the linker
to generate a list of 32-bit word locations and their replacement values,
and run through these replacements when we detect a UP system.
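Conceptually, each alternative records a (location, replacement) pair in
a dedicated section, and early boot patches them in. A C-flavoured
sketch of the pass (the real implementation is assembly in head.S, and
the names here are illustrative):

struct smp_alt {
	u32 *loc;	/* address of the SMP instruction in .text */
	u32 up_insn;	/* replacement encoding for UP */
};

static void __init fixup_smp_on_up(struct smp_alt *alt,
				   struct smp_alt *end)
{
	for (; alt < end; alt++)
		*alt->loc = alt->up_insn;	/* patch in the UP variant */
}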
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'
This is caused because:
.section .data
.section .text
.section .text
.previous
does not return us to the .text section, but the .data section; this
makes use of .previous dangerous if the ordering of previous sections
is not known.
Fix up the other users of .previous; .pushsection and .popsection are
a safer pairing to use than .section and .previous.
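The safe pairing, schematically:

	.pushsection .data	@ save the current section, switch to .data
	.long	42
	.popsection		@ restore exactly the saved section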
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Before this patch, enabling and disabling IRQs in assembler code and by
the hardware wasn't tracked completely.
I had to transpose two instructions in arch/arm/lib/bitops.h because
restore_irqs doesn't preserve the flags with CONFIG_TRACE_IRQFLAGS=y.
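A sketch of the tracing hook added to the assembler irq macros:

	.macro	asm_trace_hardirqs_off
#if defined(CONFIG_TRACE_IRQFLAGS)
	stmdb	sp!, {r0-r3, ip, lr}	@ trace_hardirqs_off() is C code,
	bl	trace_hardirqs_off	@ so preserve the caller-clobbered
	ldmia	sp!, {r0-r3, ip, lr}	@ registers around the call
#endif
	.endm

	.macro	disable_irq
	disable_irq_notrace
	asm_trace_hardirqs_off
	.endm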
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Mathieu Desnoyers pointed out that the ARM barriers were lacking:
- cmpxchg, xchg and atomic add return need memory barriers on
architectures which can reorder the order in which memory reads and
writes become visible between CPUs, which seems to include recent
ARM architectures. Those barriers are currently missing on ARM.
- test_and_xxx_bit were missing SMP barriers.
So put these barriers in. Provide separate atomic_add/atomic_sub
operations which do not require barriers.
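The barrier placement for the value-returning variants, e.g. (close to
the actual ARMv6+ implementation; atomic_add() itself gets no barriers):

static inline int atomic_add_return(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	smp_mb();	/* order earlier accesses before the update */

	__asm__ __volatile__("@ atomic_add_return\n"
"1:	ldrex	%0, [%2]\n"
"	add	%0, %0, %3\n"
"	strex	%1, %0, [%2]\n"
"	teq	%1, #0\n"
"	bne	1b"
	: "=&r" (result), "=&r" (tmp)
	: "r" (&v->counter), "Ir" (i)
	: "cc");

	smp_mb();	/* order the update before later accesses */

	return result;
}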
Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move platform independent header files to arch/arm/include/asm, leaving
those in asm/arch* and asm/plat* alone.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>