This fixes a merge problem with mach-highbank/hotplug.c, which git
resolves silently but incorrectly. This commit contains the correct
resolution.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
s5p_register_gpio_interrupt() returns 0 or positive for success, and
-ve for errors, so just use the standard >= 0 test.
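For illustration, the corrected check might look like this at a call
site (a minimal sketch; the call site itself is hypothetical):

        irq = s5p_register_gpio_interrupt(gpio);
        if (irq < 0)            /* only negative values are errors */
                return irq;
        /* irq now holds the registered IRQ number (zero or positive) */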
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
device_register() returns -ve values for errors, and zero for success.
There's no need to obfuscate the code with IS_ERR_VALUE().
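A minimal sketch of the plain check (the call site is hypothetical):

        ret = device_register(&pdev->dev);
        if (ret)        /* negative errno on failure, zero on success */
                return ret;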
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Consistently check errors using the usual method used in the kernel
for much of its history. For instance:
int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t)
{
        int div;

        div = gpmc_calc_divider(t->sync_clk);
        if (div < 0)
                return div;

static int gpmc_set_async_mode(int cs, struct gpmc_timings *t)
{
        ...
        return gpmc_cs_set_timings(cs, t);

.....
        ret = gpmc_set_async_mode(gpmc_onenand_data->cs, &t);
        if (IS_ERR_VALUE(ret))
                return ret;
So gpmc_cs_set_timings() treats any negative return value as an error,
but where we check that value at higher levels, only a limited range
is treated as an error...
There is only _one_ use of IS_ERR_VALUE() in arch/arm which is really
appropriate, and that is in arch/arm/include/asm/syscall.h:
static inline long syscall_get_error(struct task_struct *task,
                                     struct pt_regs *regs)
{
        unsigned long error = regs->ARM_r0;
        return IS_ERR_VALUE(error) ? error : 0;
}
because this function really does have to differentiate between error
return values and addresses which look like negative numbers (eg, from
mmap()).
So, here's a patch to remove them from OMAP, except for the above.
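The conversion for the example above is then simply:

        ret = gpmc_set_async_mode(gpmc_onenand_data->cs, &t);
        if (ret < 0)            /* was: IS_ERR_VALUE(ret) */
                return ret;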
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
omap_hwmod_lookup() only returns NULL on error, never an error pointer.
Checking the returned pointer using IS_ERR_OR_NULL() is needless
overhead. Use a simple !ptr check instead.
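A minimal sketch of the simplified check (the surrounding code is
illustrative):

        oh = omap_hwmod_lookup(oh_name);
        if (!oh)                /* NULL is the only failure mode */
                return -ENODEV;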
OMAP devices (oh->od) always have a valid platform device attached (see
omap_device_alloc()) so there's no point validating the platform device
pointer (we will have already oopsed long before if this is not the
case here.)
Lastly, oh->od is only ever NULL or a valid omap device pointer - 'oh'
comes from the statically declared hwmod tables, and the pointer is
only filled in by omap_device_alloc() at a point where the omap device
pointer must be valid.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The full mcpm layer is not likely to be relevant to v6 based
platforms, so a multiplatform kernel won't use that code if booted
on v6 hardware.
This patch modifies the AFLAGS for affected mcpm .S files to
specify armv7-a explicitly for that code.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On resume from CPU power down any trace hooks enabled in cpu_init()
will get called before that function has done set_my_cpu_offset(),
so any use of per-cpu variables by trace hook code will cause bad
things to happen. Prevent this by marking the function notrace.
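A minimal sketch of the annotation, assuming cpu_init() as found in
arch/arm/kernel/setup.c:

        void notrace cpu_init(void)
        {
                unsigned int cpu = smp_processor_id();

                set_my_cpu_offset(per_cpu_offset(cpu));
                /* only from here on may trace hooks touch per-cpu data */
        }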
This fixes lockups/crashes seen when enabling function tracer on TC2
with the not yet mainlined cpuidle driver.
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM processors with LPAE enabled use 3 levels of page tables, with an
entry in the top level (pgd) covering 1GB of virtual space. Because of
the branch relocation limitations on ARM, the loadable modules are
mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared
between kernel modules and user space.
If free_pgtables() is called with the default ceiling 0,
free_pgd_range() (and subsequently called functions) also frees the page
table shared between user space and kernel modules (which is normally
handled by the ARM-specific pgd_free() function). This patch defines
the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE is
enabled.
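A minimal sketch of the define (assuming it lands in
arch/arm/include/asm/pgtable.h):

        #ifdef CONFIG_ARM_LPAE
        /*
         * The first pgd entry is shared with the kernel modules area,
         * so user page tables must never be freed below TASK_SIZE.
         */
        #define USER_PGTABLES_CEILING   TASK_SIZE
        #endif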
Note that the pgd_free() function already checks the presence of the
shared pmd page allocated by pgd_alloc() and frees it, though with
ceiling 0 this wasn't necessary.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@vger.kernel.org> # 3.3+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 93dc688 (ARM: 7684/1: errata: Workaround for Cortex-A15 erratum
798181 (TLBI/DSB operations)) introduces calls to smp_processor_id() and
smp_call_function_many() with preemption enabled. This patch disables
preemption and also optimises the smp_processor_id() call in
broadcast_tlb_mm_a15_erratum(). The broadcast_tlb_a15_erratum() function
is changed to use smp_call_function() which disables preemption.
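A hedged sketch of the resulting shape; a15_erratum_get_cpumask() is
an illustrative helper name, not necessarily the exact code:

        static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
        {
                int this_cpu;
                cpumask_t mask = { CPU_BITS_NONE };

                if (!erratum_a15_798181())
                        return;

                this_cpu = get_cpu();   /* also disables preemption */
                a15_erratum_get_cpumask(this_cpu, mm, &mask);
                smp_call_function_many(&mask, ipi_flush_tlb_a15_erratum,
                                       NULL, 1);
                put_cpu();
        }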
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Geoff Levand <geoff@infradead.org>
Reported-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that the cluster power API is in place, we can use it for SMP secondary
bringup and CPU hotplug in a generic fashion.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Instead of requiring the first man to be elected in advance (which
can be suboptimal in some situations), this patch uses a per-cluster
mutex to co-ordinate selection of the first man.
This should also make it more feasible to reuse this code path for
asynchronous cluster resume (as in CPUidle scenarios).
We must ensure that the vlock data doesn't share a cacheline with
anything else, or dirty cache eviction could corrupt it.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
This patch adds a simple low-level voting mutex implementation
to be used to arbitrate during first man selection when no load/store
exclusive instructions are usable.
For want of a better name, these are called "vlocks". (I was
tempted to call them ballot locks, but "block" is way too confusing
an abbreviation...)
There is no function to wait for the lock to be released, and no
vlock_lock() function since we don't need these at the moment.
These could straightforwardly be added if vlocks get used for other
purposes.
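For illustration, the algorithm can be rendered in pseudo-C along
these lines (the real implementation is assembler in vlock.S; the
names are illustrative, and the DMBs correspond to the barriers
discussed below):

        static int currently_voting[NR_CPUS];
        static int last_vote = -1;              /* no owner yet */

        bool vlock_trylock(int this_cpu)
        {
                int i;

                currently_voting[this_cpu] = 1; /* announce candidacy */
                dmb();
                if (last_vote != -1) {          /* someone already voted */
                        currently_voting[this_cpu] = 0;
                        return false;
                }
                last_vote = this_cpu;           /* suggest ourselves */
                dmb();
                currently_voting[this_cpu] = 0;

                /* wait until all pending votes have landed */
                for (i = 0; i < NR_CPUS; i++)
                        while (currently_voting[i])
                                cpu_relax();

                return last_vote == this_cpu;   /* did we win? */
        }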
For architectural correctness even Strongly-Ordered memory accesses
require barriers in order to guarantee that multiple CPUs have a
coherent view of the ordering of memory accesses. Whether or not
this matters depends on hardware implementation details of the
memory system. Since the purpose of this code is to provide a clean,
generic locking mechanism with no platform-specific dependencies the
barriers should be present to avoid unpleasant surprises on future
platforms.
Note:
* When taking the lock, we don't care about implicit background
memory operations and other signalling which may be pending,
because those are not part of the critical section anyway.
A DMB is sufficient to ensure correctly observed ordering of
the explicit memory accesses in vlock_trylock.
* No barrier is required after checking the election result,
because the result is determined by the store to
VLOCK_OWNER_OFFSET and is already globally observed due to the
barriers in voting_end. This means that global agreement on
the winner is guaranteed, even before the winner is known
locally.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
This provides helper methods to coordinate between CPUs coming down
and CPUs going up, as well as documentation of the algorithms used, so
that cluster teardown and setup operations are not performed on the
same cluster simultaneously.
For use in the power_down() implementation (a usage sketch follows
this list):
* __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu)
* __mcpm_outbound_enter_critical(unsigned int cluster)
* __mcpm_outbound_leave_critical(unsigned int cluster)
* __mcpm_cpu_down(unsigned int cluster, unsigned int cpu)
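A hedged sketch of how an outbound (powering-down) CPU might sequence
these helpers; the last-man test, the cache teardown steps, and the
assumption that __mcpm_outbound_enter_critical() reports whether this
CPU won the teardown role are illustrative, not the exact mcpm code:

        static void platform_power_down(unsigned int cluster, unsigned int cpu)
        {
                bool last_man = platform_is_last_man(cluster); /* hypothetical */

                __mcpm_cpu_going_down(cluster, cpu);

                if (last_man && __mcpm_outbound_enter_critical(cluster)) {
                        /* flush all cache levels, exit coherency,
                         * perform the cluster-level teardown */
                        __mcpm_outbound_leave_critical(cluster);
                } else {
                        /* flush L1 only; CPU-level teardown */
                }

                __mcpm_cpu_down(cluster, cpu);
                /* from here the CPU may be powered off at any point */
        }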
The power_up_setup() helper should do platform-specific setup in
preparation for turning the CPU on, such as invalidating local caches
or entering coherency. It must be assembler for now, since it must
run before the MMU can be switched on. It is passed the affinity level
for which initialization should be performed.
Because the mcpm_sync_struct content is looked-up and modified
with the cache enabled or disabled depending on the code path, it is
crucial to always ensure proper cache maintenance to update main memory
right away. The sync_cache_*() helpers are used to that end.
Also, in order to prevent a cached writer from interfering with an
adjacent non-cached writer, we ensure each state variable is located
in a separate cache line.
Thanks to Nicolas Pitre and Achin Gupta for the help with this
patch.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
This is the basic API used to handle the powering up/down of individual
CPUs in a (multi-)cluster system. The platform specific backend
implementation has the responsibility to also handle the cluster level
power as well when the first/last CPU in a cluster is brought up/down.
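A rough sketch of the API shape (prototypes as they appear in later
mcpm headers; treat as illustrative):

        int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster);
        void mcpm_cpu_power_down(void);         /* runs on the dying CPU */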
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CPUs in cluster based systems, such as big.LITTLE, have special needs
when entering the kernel due to a hotplug event, or when resuming from
a deep sleep mode.
This is vectorized so multiple CPUs can enter the kernel in parallel
without serialization.
The mcpm prefix stands for "multi cluster power management", however
this is usable on single cluster systems as well. Only the basic
structure is introduced here. This will be extended with later patches.
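For illustration, a platform points the mcpm layer at the address a
CPU should jump to once the early entry code has run (prototype
assumed from later mcpm headers):

        void mcpm_set_entry_vector(unsigned int cpu, unsigned int cluster,
                                   void *ptr);

        /* e.g. during SMP bringup (illustrative call site): */
        mcpm_set_entry_vector(cpu, cluster, secondary_startup);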
In order not to complicate things more than necessary, the planned
work to make runtime-adjusted MPIDR-based indexing and dynamic memory
allocation for cluster states is postponed to a later cycle. The
MAX_NR_CLUSTERS and MAX_CPUS_PER_CLUSTER static definitions
should be sufficient for those systems expected to be available in the
near future.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Algorithms used by the MCPM layer rely on state variables which are
accessed while the cache is either active or inactive, depending
on the code path and the active state.
This patch introduces generic cache maintenance helpers to provide the
necessary cache synchronization for such state variables to always hit
main memory in an ordered way.
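A hedged illustration of the intended usage; the structure layout and
state names here are illustrative:

        /* writer side: push the update out to main memory */
        mcpm_sync.clusters[cluster].cluster = CLUSTER_GOING_DOWN;
        sync_cache_w(&mcpm_sync.clusters[cluster].cluster);

        /* reader side: discard any stale cached copy first */
        sync_cache_r(&mcpm_sync.clusters[cluster].cluster);
        state = mcpm_sync.clusters[cluster].cluster;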
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Dave Martin <dave.martin@linaro.org>
Pull kdump fixes from Peter Anvin:
"The kexec/kdump people have found several problems with the support
for loading over 4 GiB that was introduced in this merge cycle. This
is partly due to a number of design problems inherent in the way the
various pieces of kdump fit together (it is pretty horrifically manual
in many places.)
After a *lot* of iterations this is the patchset that was agreed upon,
but of course it is now very late in the cycle. However, because it
changes both the syntax and semantics of the crashkernel option, it
would be desirable to avoid a stable release with the broken
interfaces."
I'm not happy with the timing, since originally the plan was to release
the final 3.9 tomorrow. But apparently I'm doing an -rc8 instead...
* 'x86-kdump-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
kexec: use Crash kernel for Crash kernel low
x86, kdump: Change crashkernel_high/low= to crashkernel=,high/low
x86, kdump: Retore crashkernel= to allocate under 896M
x86, kdump: Set crashkernel_low automatically
Pull x86 fixes from Peter Anvin:
"Three groups of fixes:
1. Make sure we don't execute the early microcode patching if family
< 6, since it would touch MSRs which don't exist on those
families, causing crashes.
2. The Xen partial emulation of HyperV can be dealt with more
gracefully than just disabling the driver.
3. More EFI variable space magic. In particular, variables hidden
from runtime code need to be taken into account too."
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86, microcode: Verify the family before dispatching microcode patching
x86, hyperv: Handle Xen emulation of Hyper-V more gracefully
x86,efi: Implement efi_no_storage_paranoia parameter
efi: Export efi_query_variable_store() for efivars.ko
x86/Kconfig: Make EFI select UCS2_STRING
efi: Distinguish between "remaining space" and actually used space
efi: Pass boot services variable info to runtime code
Move utf16 functions to kernel core and rename
x86,efi: Check max_size only if it is non-zero.
x86, efivars: firmware bug workarounds should be in platform code
Pull ARM fixes from Russell King:
"A set of fixes from various people - Will Deacon gets a prize for
removing code this time around. The biggest fix in this lot is
sorting out the ARM740T mess. The rest are relatively small fixes."
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7699/1: sched_clock: Add more notrace to prevent recursion
ARM: 7698/1: perf: fix group validation when using enable_on_exec
ARM: 7697/1: hw_breakpoint: do not use __cpuinitdata for dbg_cpu_pm_nb
ARM: 7696/1: Fix kexec by setting outer_cache.inv_all for Feroceon
ARM: 7694/1: ARM, TCM: initialize TCM in paging_init(), instead of setup_arch()
ARM: 7692/1: iop3xx: move IOP3XX_PERIPHERAL_VIRT_BASE
ARM: modules: don't export cpu_set_pte_ext when !MMU
ARM: mm: remove broken condition check for v4 flushing
ARM: mm: fix numerous hideous errors in proc-arm740.S
ARM: cache: remove ARMv3 support code
ARM: tlbflush: remove ARMv3 support
Pull sparc fixes from David Miller:
1) Fix race in sparc64 TLB shootdowns, we have to synchronize with the
sibling cpus completing if we are passing them a reference via
pointer to a data structure.
2) Fix cleaning of bitmaps in sparc32, from Akinobu Mita.
3) Fix various sparc header mistakes, some of which resulted in
userland build breakage. From Sam Ravnborg.
4) Kill ghost declarations and defines missed when several bits of code
got deleted recently.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
sparc64: Fix race in TLB batch processing.
sparc: use asm-generic version of types.h
bbc_i2c: fix section mismatch warning
sparc: use generic headers
sparc:cleanup unused code in smp_32.h
sparc/iommu: fix typo s/265KB/256KB/
sparc/srmmu: clear trailing edge of bitmap properly
sparc:remove unused declaration smp_boot_cpus()
Matt Fleming (1):
      x86, efivars: firmware bug workarounds should be in platform code

Matthew Garrett (3):
      Move utf16 functions to kernel core and rename
      efi: Pass boot services variable info to runtime code
      efi: Distinguish between "remaining space" and actually used space

Richard Weinberger (2):
      x86,efi: Check max_size only if it is non-zero.
      x86,efi: Implement efi_no_storage_paranoia parameter

Sergey Vlasov (2):
      x86/Kconfig: Make EFI select UCS2_STRING
      efi: Export efi_query_variable_store() for efivars.ko
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
For each CPU vendor that implements CPU microcode patching, there will
be a minimum family for which this is implemented. Verify this
minimum level of support.
This can be done in the dispatch function or early in the application
functions. Doing the latter turned out to be somewhat awkward because
of the inevitable split between the BSP and the AP paths, so rather
than pushing deep into the application functions, do this in the
dispatch function.
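A hedged sketch of the dispatch-level gate; the helper names are
illustrative, not the exact code:

        void __init load_ucode_bsp(void)
        {
                unsigned int vendor = cpuid_vendor();   /* hypothetical */
                unsigned int family = cpuid_family();   /* hypothetical */

                if (vendor == X86_VENDOR_INTEL && family >= 6)
                        load_ucode_intel_bsp();
                /* family < 6 parts lack the relevant MSRs: skip patching */
        }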
Reported-by: "Bryan O'Donoghue" <bryan.odonoghue.lkml@nexus-software.ie>
Suggested-by: Borislav Petkov <bp@alien8.de>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1366392183-4149-1-git-send-email-bryan.odonoghue.lkml@nexus-software.ie
As reported by Dave Kleikamp, when we emit cross calls to do batched
TLB flush processing we have a race because we do not synchronize on
the sibling cpus completing the cross call.
So meanwhile the TLB batch can be reset (tb->tlb_nr set to zero, etc.)
and either flushes are missed or flushes will flush the wrong
addresses.
Fix this by using generic infrastructure to synchronize on the
completion of the cross call.
This first required getting the flush_tlb_pending() call out from
switch_to() which operates with locks held and interrupts disabled.
The problem is that smp_call_function_many() cannot be invoked with
IRQs disabled and this is explicitly checked for with WARN_ON_ONCE().
We get the batch processing outside of locked IRQ disabled sections by
using some ideas from the powerpc port. Namely, we only batch inside
of arch_{enter,leave}_lazy_mmu_mode() calls. If we're not in such a
region, we flush TLBs synchronously.
1) Get rid of xcall_flush_tlb_pending and per-cpu type
implementations.
2) Do TLB batch cross calls instead via:
smp_call_function_many()
tlb_pending_func()
__flush_tlb_pending()
3) Batch only in lazy mmu sequences:
a) Add 'active' member to struct tlb_batch
b) Define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
c) Set 'active' in arch_enter_lazy_mmu_mode()
d) Run batch and clear 'active' in arch_leave_lazy_mmu_mode()
e) Check 'active' in tlb_batch_add_one() and do a synchronous
flush if it's clear.
4) Add infrastructure for synchronous TLB page flushes.
a) Implement __flush_tlb_page and per-cpu variants, patch
as needed.
b) Likewise for xcall_flush_tlb_page.
c) Implement smp_flush_tlb_page() to invoke the cross-call.
d) Wire up global_flush_tlb_page() to the right routine based
upon CONFIG_SMP
5) It turns out that singleton batches are very common, 2 out of every
3 batch flushes have only a single entry in them.
The batch flush waiting is very expensive, both because of the poll
on sibling cpu completion and because passing the tlb batch pointer
to the sibling cpus invokes a shared memory dereference.
Therefore, in flush_tlb_pending(), if there is only one entry in
the batch perform a completely asynchronous global_flush_tlb_page()
instead.
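A hedged sketch of the 'active' gating from step 3, simplified from
the shapes the changelog describes:

        void arch_enter_lazy_mmu_mode(void)
        {
                struct tlb_batch *tb = &__get_cpu_var(tlb_batch);

                tb->active = 1;         /* batching is now allowed */
        }

        void arch_leave_lazy_mmu_mode(void)
        {
                struct tlb_batch *tb = &__get_cpu_var(tlb_batch);

                if (tb->tlb_nr)
                        flush_tlb_pending();    /* run the accumulated batch */
                tb->active = 0;         /* back to synchronous flushes */
        }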
Reported-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
cyc_to_sched_clock() is called by sched_clock() and cyc_to_ns()
is called by cyc_to_sched_clock(). I suspect that some compilers
inline both of these functions into sched_clock() and so we've
been getting away without having a notrace marking. It seems that
my compiler isn't inlining cyc_to_sched_clock() though, so I'm
hitting a recursion bug when I enable the function graph tracer,
causing my system to crash. Marking these functions notrace fixes
it. Technically cyc_to_ns() doesn't need the notrace because it's
already marked inline, but let's just add it so that if we ever
remove inline from that function it doesn't blow up.
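For reference, the helper is small enough to show whole (a sketch
matching the shape of the code in arch/arm/kernel/sched_clock.c):

        static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
        {
                return (cyc * mult) >> shift;
        }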
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
"Only one remaining fix for arm-soc platforms at this time, a small
bugfix for cpu hotplug on highbank platforms that has become much
easier to hit as of late.
Details in the patch description, but it's small and well-contained
and definitely impacts users of the platform, so 3.9 seems
appropriate."
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: highbank: fix cache flush ordering for cpu hotplug
Remove the majority of cache flushing calls from the individual platform
files. This is now handled by the core code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Flush the L1 cache for the CPU which is going down in cpu_die() so
that we don't end up with all platforms doing this. This ensures
that any cache lines we own are pushed out before the cache becomes
inaccessible.
We may end up subsequently creating some dirty cache lines - for
example, with the complete() call, but this update must become
visible to other CPUs before __cpu_die() can proceed. Subsequent
accesses from the platform's cpu_die() function should _not_ matter.
Also place a mb() after the complete() call to ensure that this is
visible to other CPUs.
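A hedged sketch of the resulting ordering in the generic cpu_die()
(arch/arm/kernel/smp.c), with the surrounding details elided:

        void __ref cpu_die(void)
        {
                /* ... idle_task_exit(), interrupts off ... */
                flush_cache_louis();    /* push out this CPU's dirty L1 lines */
                complete(&cpu_died);    /* let __cpu_die() proceed */
                mb();                   /* make the completion visible */
                /* ... hand over to the platform's cpu_die() callback ... */
        }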
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The tegra cpu_disable() function is the same as the generic version
in arch/arm/kernel/smp.c. Therefore, it can be removed.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The L1 data cache flush needs to happen after the highbank_set_cpu_jump
call, which pollutes the cache with the l2x0_lock. This causes other
cores to deadlock waiting for the l2x0_lock. Moving the flush of the
entire data cache after highbank_set_cpu_jump fixes the problem. Use
flush_cache_louis instead of flush_cache_all, as that is sufficient to
flush only the L1 data cache.
flush_cache_louis did not exist when highbank_cpu_die was originally
written.
With PL310 errata 769419 enabled, a wmb is inserted into idle which takes
the l2x0_lock. This makes the problem much more easily hit and causes
reset to hang.
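A hedged sketch of the corrected ordering (function names as in
mach-highbank; details may differ):

        static void highbank_cpu_die(unsigned int cpu)
        {
                highbank_set_cpu_jump(cpu, phys_to_virt(0)); /* may take l2x0_lock */
                flush_cache_louis();    /* L1 flush after the cache is polluted */
                highbank_set_core_pwr();
                while (1)
                        cpu_do_idle();
        }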
Reported-by: Paolo Pisati <p.pisati@gmail.com>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
Install the Hyper-V specific interrupt handler only when needed. This
permits us to get rid of the Xen check. Note that when the vmbus
driver invokes the call to register its handler, we are sure to be
running on Hyper-V.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Link: http://lkml.kernel.org/r/1366299886-6399-1-git-send-email-kys@microsoft.com
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
This fixes the following compile error when using the avr32
atstk1006_defconfig:
drivers/mtd/nand/atmel_nand.c: In function 'pmecc_err_location':
drivers/mtd/nand/atmel_nand.c:639: error: implicit declaration of function 'writel_relaxed'
which was introduced by commit 1c7b874d33 ("mtd: at91: atmel_nand: add
Programmable Multibit ECC controller support"). The PMECC nand flash
code uses writel_relaxed(), but on avr32 there is no "writel_relaxed"
macro defined.
This patch adds the writex_relaxed macro definitions.
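A hedged sketch of the added definitions (in
arch/avr32/include/asm/io.h); on avr32 the relaxed accessors can
simply alias the ordered ones:

        #define readb_relaxed           readb
        #define readw_relaxed           readw
        #define readl_relaxed           readl
        #define writeb_relaxed          writeb
        #define writew_relaxed          writew
        #define writel_relaxed          writel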
Signed-off-by: Josh Wu <josh.wu@atmel.com>
Acked-by: Havard Skinnemoen <havard@skinnemoen.net>
Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In the very unlikely event where a guest would be foolish enough to
*read* from a write-only cache maintenance register, we end up
with preemption disabled, due to a misplaced get_cpu().
Just move the "is_write" test outside of the critical section.
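A hedged sketch of the fix shape in the access handler (names
illustrative):

        if (!p->is_write)
                return read_from_write_only(vcpu, p);   /* before get_cpu() */

        cpu = get_cpu();        /* critical section covers only the real work */
        /* ... perform the cache maintenance emulation ... */
        put_cpu();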
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Per hpa, use crashkernel=X,high crashkernel=Y,low instead of
crashkernel_high=X crashkernel_low=Y, as that is extensible (for
example: crashkernel=256M,high crashkernel=72M,low).
-v2: according to Vivek, change the delimiter to ;
-v3: let high and low only handle the simple form, conforming to the
description in kernel-parameters.txt;
still keep crashkernel=X overriding any crashkernel=X,high
crashkernel=Y,low
-v4: update the get_last_crashkernel return value and add stricter
checking in parse_crashkernel_simple(), found by HATAYAMA.
-v5: change the delimiter back to , according to HPA;
also separate parse_suffix from parse_simple according to Vivek,
so we can avoid @pos in that path.
-v6: tighten the checking of crashkernel=X,highblahblah,high,
found by HATAYAMA.
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1366089828-19692-5-git-send-email-yinghai@kernel.org
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Vivek found that old kexec-tools no longer works with the new kernel.
So change crashkernel= back to its old behavior, and add
crashkernel_high= to let the user decide whether the buffer may be
above 4G (new kexec-tools will also be needed).
-v2: let crashkernel=X override crashkernel_high=;
update the description noting that _high is ignored when
crashkernel=X is given.
-v3: update the description in kernel-parameters.txt according to
Vivek.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1366089828-19692-4-git-send-email-yinghai@kernel.org
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Chao said that kdump worked well on his system on 3.8 without extra
parameters, even though the iommu does not work with kdump. Now he has
to append crashkernel_low=Y in the first kernel to make kdump work.
We have now modified crashkernel=X to allocate memory beyond 4G (if
available) and we do not allocate a low range for the crash kernel
unless the user asks for it with crashkernel_low=Y. This causes a
regression if the iommu is not enabled: without an iommu, swiotlb
needs to be set up in the first 4G, and there is no low memory
available to the second kernel.
Set crashkernel_low automatically if the user does not specify it.
For systems that do support IOMMU with kdump properly, the user can
specify crashkernel_low=0 to save that 72M of low RAM.
-v3: add swiotlb_size() according to Konrad.
-v4: add comments on what the 8M is for, according to hpa;
also update more crashkernel_low= references in kernel-parameters.txt.
-v5: update the changelog according to Vivek.
-v6: change the description of the swiotlb reference according to
HATAYAMA.
Reported-by: WANG Chao <chaowang@redhat.com>
Tested-by: WANG Chao <chaowang@redhat.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1366089828-19692-2-git-send-email-yinghai@kernel.org
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
PJ4B CPUs are LPAE capable, so enable them in LPAE compilations.
Signed-off-by: Lior Amsalem <alior@marvell.com>
Tested-by: Franklin <flin@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In kmap_atomic(), kmap_high_get() is invoked to check for an already
mapped area. In __flush_dcache_page() and dma_cache_maint_page(), we
explicitly call kmap_high_get() before kmap_atomic() when
cache_is_vipt(), so kmap_high_get() can be invoked twice. This is a
useless operation, so remove one of the calls.
v2: change cache_is_vipt() to cache_is_vipt_nonaliasing() in order to
be self-documenting
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Events may be created with attr->disabled == 1 and attr->enable_on_exec
== 1, which confuses the group validation code because events in the
PERF_EVENT_STATE_OFF state are not considered candidates for
scheduling, which may lead to failure at group scheduling time.
This patch fixes the validation check for ARM, so that events in the
OFF state are still considered when enable_on_exec is true.
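A hedged sketch of the adjusted test in the validation path (shape
only; a return of 1 here means "does not block the group"):

        if (event->state < PERF_EVENT_STATE_OFF)
                return 1;       /* dead events never schedule: skip */
        if (event->state == PERF_EVENT_STATE_OFF &&
            !event->attr.enable_on_exec)
                return 1;       /* stays OFF forever: skip */
        /* otherwise fall through and validate against the PMU */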
Cc: stable@vger.kernel.org
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Jiri Olsa <jolsa@redhat.com>
Reported-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We must not declare dbg_cpu_pm_nb as __cpuinitdata as we need it after
system initialization for Suspend and CPUIdle.
This was done in commit 9a6eb310ea ("ARM: hw_breakpoint: Debug powerdown
support for self-hosted debug").
Cc: stable@vger.kernel.org
Cc: Dietmar Eggemann <Dietmar.Eggemann@arm.com>
Signed-off-by: Bastian Hecht <hechtb+renesas@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On Feroceon the L2 cache becomes non-coherent with the CPU
when the L1 caches are disabled. Thus the L2 needs to be invalidated
after both L1 caches are disabled.
On kexec, before starting the code that relocates the kernel, the L1
caches are disabled in cpu_proc_fin (cpu_v7_proc_fin for Feroceon),
but afterwards the L2 cache is never invalidated, because inv_all is
not set in cache-feroceon-l2.c.
So kernel relocation and decompression may have (and usually do have)
errors.
Setting the function enables L2 invalidation and fixes the issue.
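A hedged sketch of the one-line fix in cache-feroceon-l2.c's init
path:

        outer_cache.inv_all = l2_inv_all;       /* let kexec invalidate L2 */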
Cc: <stable@vger.kernel.org>
Signed-off-by: Illia Ragozin <illia.ragozin@grapecom.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
tcm_init() calls iotable_init(), which uses the early_alloc variants
that do memblock allocation. Directly using memblock allocation after
initializing bootmem should not be permitted, because bootmem can't
know what has been additionally reserved.
So move tcm_init() to a safe place before bootmem is initialized.
(On the U300)
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit b4cbb197c7 ("vm: add vm_iomap_memory() helper function") added
a helper function wrapper around io_remap_pfn_range(), and every other
architecture defined it in <asm/pgtable.h>.
The s390 choice of <asm/io.h> may make sense, but is not very convenient
for this case, and gratuitous differences like that cause unexpected errors like this:
mm/memory.c: In function 'vm_iomap_memory':
mm/memory.c:2439:2: error: implicit declaration of function 'io_remap_pfn_range' [-Werror=implicit-function-declaration]
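For reference, architectures with no special I/O space typically
define it in <asm/pgtable.h> as a thin wrapper (a hedged example, not
the s390 code):

        #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
                remap_pfn_range(vma, vaddr, pfn, size, prot)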
Glory be to the kbuild test robot who noticed this, bisected it, and
reported it to the guilty parties (ie me).
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Using this parameter one can disable the storage_size/2 check, if one
is really sure that the UEFI implementation does sane garbage
collection and fulfills the spec. This parameter is useful if a device
uses more than 50% of the storage by default.
The Intel DQSW67 desktop board is such a sucker, for example.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>