Stepan found:
    CPU0                                CPUn

    _cpu_up()
      __cpu_up()
                                        bootstrap()
                                          notify_cpu_starting()
                                          set_cpu_online()
                                          while (!cpu_active())
                                            cpu_relax()

    <PREEMPT-out>
    smp_call_function(.wait=1)
      /* we find cpu_online() is true */
      arch_send_call_function_ipi_mask()
        /* wait-forever-more */

    <PREEMPT-in>
                                          local_irq_enable()

      cpu_notify(CPU_ONLINE)
        sched_cpu_active()
          set_cpu_active()
The purpose of cpu_active is mostly tied to bringing a cpu down: we mark it
!active to keep the load-balancer from moving tasks to it while we tear the
cpu down. This is required because we only update the sched_domain tree after
the cpu has been brought down, and it is needed so that some tasks can still
run while we bring the cpu down; we just don't want new tasks to appear on it.
On cpu-up, however, the sched_domain tree doesn't yet include the new cpu, so
it is invisible to the load-balancer regardless of the active state.
So instead of setting the active state after we boot the new cpu (and
consequently having to wait for it before enabling interrupts), set the
cpu active before we set it online and avoid the whole mess.
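A minimal sketch of the resulting bring-up ordering (schematic only, not the
exact kernel code): the cpu is marked active before it is marked online, so
the spin on cpu_active() goes away and interrupts can be enabled straight away.

    /* secondary CPU bring-up, schematic */
    notify_cpu_starting(cpu);
    set_cpu_active(cpu, true);      /* active first ...                      */
    set_cpu_online(cpu, true);      /* ... then online: no wait loop needed  */
    local_irq_enable();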
Reported-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1323965362.18942.71.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When a CPU is taken out of reset, either cold booted or hotplugged in,
some of its PMU registers can contain UNKNOWN values.
This patch adds a hotplug notifier to ARM core perf code so that upon
CPU restart the PMU unit is reset and becomes ready to use again.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
xscale2 PMUs indicate overflow not via the PMU control register, but via a
separate overflow FLAG register.
This patch fixes the xscale2 PMU code to use this register to detect
overflow and ensures that we clear any pending overflow when disabling a
counter.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The PMU IRQ handlers in perf assume that if a counter has overflowed
then perf must be responsible. In the paranoid world of crazy hardware,
this could be false, so check that we do have a valid event before
attempting to dereference NULL in the interrupt path.
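A minimal sketch of the kind of check described above, assuming a per-counter
events[] array in the cpu's hw events structure (names are illustrative, not
the exact driver code):

    for (idx = 0; idx < num_counters; idx++) {
            struct perf_event *event = cpuc->events[idx];

            if (!event)
                    continue;       /* counter overflowed, but perf doesn't own it */

            /* ... update the event and handle the overflow ... */
    }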
Cc: <stable@vger.kernel.org>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When disabling a counter on an ARMv7 PMU, we should also clear the
overflow flag in case an overflow occurred whilst stopping the counter.
This prevents a spurious overflow being picked up later and leading to
either false accounting or a NULL dereference.
Cc: <stable@vger.kernel.org>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARM, the PMU does not stop counting after an overflow and therefore
IRQ latency affects the new counter value read by the kernel. This is
significant for non-sampling runs where it is possible for the new value
to overtake the previous one, causing the delta to be out by up to
max_period events.
Commit a737823d ("ARM: 6835/1: perf: ensure overflows aren't missed due
to IRQ latency") attempted to fix this problem by allowing interrupt
handlers to pass an overflow flag to the event update function, causing
the overflow calculation to assume that the counter passed through zero
when going from prev to new. Unfortunately, this doesn't work when
overflow occurs on the perf_task_tick path because we have the flag
cleared and end up computing a large negative delta.
This patch removes the overflow flag from armpmu_event_update and
instead limits the sample_period to half of the max_period for
non-sampling profiling runs.
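As a self-contained illustration of the arithmetic (not driver code): the
modular delta is only correct while fewer than max_period events separate the
two reads, so programming half the period leaves headroom for IRQ latency.

    #include <stdint.h>
    #include <stdio.h>

    /* delta for a free-running 32-bit counter, taken modulo 2^32 */
    static uint64_t counter_delta(uint32_t prev, uint32_t new)
    {
            return (uint32_t)(new - prev);
    }

    int main(void)
    {
            uint32_t prev = -(1u << 31);                /* programmed for half of max_period */
            uint32_t new = prev + (1u << 31) + 1000000; /* overflow IRQ plus latency */

            /* prints 2148483648: the 2^31 period plus the 1000000 late events */
            printf("delta = %llu\n", (unsigned long long)counter_delta(prev, new));

            /* with the full period, prev would be 0 and the same latency would
             * make new "overtake" prev, losing a whole max_period from the delta */
            return 0;
    }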
Cc: <stable@vger.kernel.org>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Our TLB ops want to check the vma vm_flags to find out whether the
mapping is executable. However, we leave this uninitialized in
ecard.c. Initialize it with an appropriate value.
Reported-by: Al Viro <viro@ftp.linux.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'cleanup' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP2+: Fix L4_EMU_34XX_BASE error after iomap changes
ARM: OMAP2+: Limit omap_read/write usage to legacy USB drivers
ARM: OMAP: Remove plat/io.h by splitting it into mach/io.h and mach/hardware.h
ARM: OMAP2+: Move most of plat/io.h into local iomap.h
ARM: OMAP1: Move most of plat/io.h into local iomap.h
ARM: OMAP1: Move 16xx GPIO system clock to platform init code
ARM: OMAP: Move omap_init_consistent_dma_size() to local common.h
ARM: OMAP2+: Move SDRC related functions from io.h into local common.h
ARM: OMAP2+: Drop DISPC L3 firewall code
ARM: OMAP2xxx: PM: remove obsolete timer disable code in the suspend path
ARM: OMAP: McSPI: Remove unused flag from struct omap2_mcspi_device_config
(update to latest rmk/for-arm-soc branch)
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Tell the PCI core about host bridge address translation so it can take
care of bus-to-resource conversion for us.
CC: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
The PCI core provides a pci_flags definition (currently __weak), so drop
the arm definition in favor of that.
We EXPORT_SYMBOL(pci_flags) as arm did previously. I'm dubious about
this: no other architecture exports it, and I didn't see any modules in
the tree that reference it.
CC: Rob Herring <rob.herring@calxeda.com>
CC: Russell King <linux@arm.linux.org.uk>
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
With the removal of disable_fiq on rpc and the addition of MULTI_IRQ_HANDLER,
entry-macro.S is no longer needed for platforms that select
MULTI_IRQ_HANDLER, and the include of it can be made conditional.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Only 3 platforms need arch_ret_to_user macro, so add ARCH_HAS_RET_TO_USER
kconfig option and make iop13xx, iop32x and iop33x select it.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Both bugs being fixed were introduced in:
29ef73b7a8
Include linux/audit.h to fix below build errors:
CC arch/arm/kernel/ptrace.o
arch/arm/kernel/ptrace.c: In function 'syscall_trace':
arch/arm/kernel/ptrace.c:919: error: implicit declaration of function 'audit_syscall_exit'
arch/arm/kernel/ptrace.c:921: error: implicit declaration of function 'audit_syscall_entry'
arch/arm/kernel/ptrace.c:921: error: 'AUDIT_ARCH_ARMEB' undeclared (first use in this function)
arch/arm/kernel/ptrace.c:921: error: (Each undeclared identifier is reported only once
arch/arm/kernel/ptrace.c:921: error: for each function it appears in.)
make[1]: *** [arch/arm/kernel/ptrace.o] Error 1
make: *** [arch/arm/kernel] Error 2
This part of the patch is:
Reported-by: Axel Lin <axel.lin@gmail.com>
Reported-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
(They both provided patches to fix it)
This patch also (at the request of the list) fixes the fact that
ARM has both LE and BE versions however the audit code was called as if
it was always BE. If audit userspace were to try to interpret the bits
it got from a LE system it would obviously do so incorrectly. Fix this
by using the right arch flag on the right system.
This part of the patch is:
Reported-by: Russell King - ARM Linux <linux@arm.linux.org.uk>
Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM kernel uses undefined instructions to implement
BUG/BUG_ON(). This leads to problems where people don't read the
"kernel BUG at ..." message printed one line above the Oops and so
wrongly assume the kernel has hit an undefined instruction.
Instead of printing:
Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
print
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
This should prevent people from thinking the BUG_ON was an
undefined instruction when it was actually intentional.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Simon Glass <sjg@chromium.org>
Tested-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With an admittedly exotic choice of configuration options
(CC_OPTIMIZE_FOR_SIZE, THUMB2, some other size-minimizing ones)
and compiler, the proc_info table can end up being misaligned,
and the kernel being unbootable (Error: unrecognized/unsupported
processor variant).
Forcing the alignment to 4 bytes in the linker script fixes the issue.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
.. several days delayed. No reason, I just didn't think of it.
Merge tag 'v3.3-rc2' into depends/rmk/for-armsoc
There were conflicts between fixes going in after 3.3-rc1 and
Russell's stable arm-soc base branch. Resolving it in the dependency
branch so that each topic branch shares the same resolution.
Conflicts:
arch/arm/mach-at91/at91cap9.c
arch/arm/mach-at91/at91sam9g45.c
__kuser_cmpxchg64 has a return path using bx lr to get back to the caller.
This is actually ok since the code in question is predicated on
CONFIG_CPU_32v6K, but for the sake of consistency using the usr_ret
macro is probably better.
Acked-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All sched_clock() providers have been converted to the sched_clock
framework, which also provides a jiffy based implementation for
the platforms that do not provide a counter.
It is now possible to make the sched_clock framework mandatory,
effectively preventing new platforms from adding new sched_clock()
functions, which would be detrimental to the single zImage work.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Commit 89d6c0b5 ("perf, arch: Add generic NODE cache events") added
empty NODE event definitions for the ARM PMU implementations. This was
merged along with Cortex-A5 and Cortex-A15 PMU support, so they missed
out on the original patch.
This patch adds the empty definitions to Cortex-A5 and Cortex-A15.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If we are context switched whilst copying into a thread's
vfp_hard_struct then the partial copy may be corrupted by the VFP
context switching code (see "ARM: vfp: flush thread hwstate before
restoring context from sigframe").
This patch updates the ptrace VFP set code so that the thread state is
flushed before the copy, therefore disabling VFP and preventing
corruption from occurring.
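A rough sketch of the resulting shape (offsets and the split fpregs/fpscr
copy-in are elided; vfp_flush_hwstate() and the regset helpers are as used by
the ARM VFP code):

    static int vfp_set(struct task_struct *target, const struct user_regset *regset,
                       unsigned int pos, unsigned int count,
                       const void *kbuf, const void __user *ubuf)
    {
            struct thread_info *thread = task_thread_info(target);
            struct vfp_hard_struct new_vfp = thread->vfpstate.hard;
            int ret;

            /* copy the new state into a local buffer first */
            ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
                                     &new_vfp, 0, sizeof(new_vfp));
            if (ret)
                    return ret;

            /* flush (and disable) the live VFP state, then commit the copy so
             * a context switch cannot overwrite it half-way through */
            vfp_flush_hwstate(thread);
            thread->vfpstate.hard = new_vfp;
            return 0;
    }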
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In a preemptible kernel, vfp_set() can be preempted, causing the
hardware VFP context to be switched while the thread vfp state is
being read and modified. This leads to a race condition which can
cause the thread vfp state to become corrupted if lazy VFP context
save occurs due to preemption in between the time thread->vfpstate
is read and the time the modified state is written back.
This may occur if preemption occurs during the execution of a
ptrace() call which modifies the VFP register state of a thread.
Such instances should be very rare in most realistic scenarios --
none has been reported, so far as I am aware. Only uniprocessor
systems should be affected, since VFP context save is not currently
lazy in SMP kernels.
The problem was introduced by my earlier patch migrating to use
regsets to implement ptrace.
This patch does a vfp_sync_hwstate() before reading
thread->vfpstate, to make sure that the thread's VFP state is not
live in the hardware registers while the registers are modified.
Thanks to Will Deacon for spotting this.
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Following execution of a signal handler, we currently restore the VFP
context from the ucontext in the signal frame. This involves copying
from the user stack into the current thread's vfp_hard_struct and then
flushing the new data out to the hardware registers.
This is problematic when using a preemptible kernel because we could be
context switched whilst updating the vfp_hard_struct. If the current
thread has made use of VFP since the last context switch, the VFP
notifier will copy from the hardware registers into the vfp_hard_struct,
overwriting any data that had been partially copied by the signal code.
Disabling preemption across copy_from_user calls is a terrible idea, so
instead we move the VFP thread flush *before* we update the
vfp_hard_struct. Since the flushing is performed lazily, this has the
effect of disabling VFP and clearing the CPU's VFP state pointer,
therefore preventing the thread from being updated with stale data on
the next context switch.
Cc: stable <stable@vger.kernel.org>
Tested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The dynamic ftrace ops startup test currently fails on Thumb-2 kernels:
Testing tracer function: PASSED
Testing dynamic ftrace: PASSED
Testing dynamic ftrace ops #1: (0 0 0 0 0) FAILED!
This is because while the addresses in the mcount records do not have
the zero bit set, the IP reported by the mcount call does have it set
(because it is copied from the LR). This mismatch causes the ops
filtering in ftrace_ops_list_func() to not call the relevant tracers.
Fix this by clearing the zero bit before adjusting the LR for the mcount
instruction size. Also, combine the mov+sub into a single sub
instruction.
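The address fix-up amounts to the following, expressed in C for clarity (the
actual change is in the Thumb-2 mcount assembly; MCOUNT_INSN_SIZE is the size
of the mcount call site):

    unsigned long ip = lr;          /* LR as passed by mcount: bit 0 set for Thumb-2 */

    ip &= ~1UL;                     /* clear the Thumb bit ...                       */
    ip -= MCOUNT_INSN_SIZE;         /* ... then rewind to the mcount call site       */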
Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Initialize the contents of the vectors page immediately after we
allocate the page, but before we map it. This avoids any possible
aliases with other mappings which may need to be flushed after the
page has been mapped irrespective of the cache type.
We follow this later with a flush_cache_all() after all static memory
mappings have been initialized, which ensures that this is safe from
any cache effects.
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On secondary CPUs, the Timer Control Register is not reset
to a sane value before the timer is registered, and the TRM
doesn't seem to indicate any reset value either. In some cases,
the kernel will take an interrupt too early, depending on what
junk was present in the registers at reset time.
The fix is to set the Timer Control Register to 0 before
registering the clock_event_device and enabling the interrupt.
Problem seen on VE (Cortex A5) and Tegra.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It turns out that the logical CPU mapping is useful even when !CONFIG_SMP
for manipulation of devices like interrupt and power controllers when
running a UP kernel on a CPU other than 0. This can happen when kexecing
a UP image from an SMP kernel.
In the future, multi-cluster systems running AMP configurations will
require something similar for mapping cluster IDs, so it makes sense to
decouple this logic in preparation for this support.
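For reference, the decoupled mapping has roughly this shape (a sketch of the
ARM definition; the exact element type in the kernel may differ):

    /* map logical CPU numbers to physical CPU/cluster IDs */
    extern int __cpu_logical_map[NR_CPUS];
    #define cpu_logical_map(cpu)    __cpu_logical_map[cpu]

UP kernels and interrupt/power controller code can then look up the hardware
ID of the CPU they are actually running on instead of assuming it is 0.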
Acked-by: Yang Bai <hamo.by@gmail.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reported-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The exception fixup table is currently aligned to a 32-byte boundary.
Whilst this won't cause any problems, the exception_table_entry
structures contain only a pair of unsigned longs, so 4-byte alignment
is all that is required. If the table was walked from start to end,
cacheline alignment may bring some performance benefits, but since a
binary search is used, the access pattern is random and will not benefit
from a stricter alignment.
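For context, each entry is just a pair of longs, so natural alignment is all
the binary search needs (declared roughly as follows in the kernel):

    struct exception_table_entry {
            unsigned long insn;     /* address of the faulting instruction  */
            unsigned long fixup;    /* address to resume execution at       */
    };

    /* sizeof(struct exception_table_entry) == 8 on 32-bit ARM, so aligning
     * the table to 4 bytes is sufficient */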
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The linker script assumes a cacheline size of 32 bytes when aligning
the .data..cacheline_aligned and .data..percpu sections.
This patch updates the script to use L1_CACHE_BYTES, which should be set
to 64 on platforms that require it.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that all implementations of arch_idle() are equivalent to cpu_do_idle(),
we can just use the latter directly and stop including mach/system.h.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-and-tested-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
Let's factor out the need_resched() check instead of having it duplicated
in every pm_idle implementation, to avoid inconsistencies (omap2_pm_idle
is already missing it).
The forceful re-enablement of IRQs after pm_idle has returned can go.
The warning certainly doesn't trigger for existing users.
To get rid of the pm_idle calling-convention oddity, let's introduce
arm_pm_idle(), allowing the local_irq_enable() to be factored out of
SoC-specific implementations. The default pm_idle function becomes a
wrapper around arm_pm_idle() and takes care of enabling IRQs closer to
where they were initially disabled.
And finally, move the comment explaining the reason for that turning off
of IRQs to a more appropriate location.
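The resulting shape is roughly the following (a sketch assuming the
arm_pm_idle hook and the cpu_do_idle() fallback described above):

    void (*arm_pm_idle)(void);

    static void default_idle(void)
    {
            if (arm_pm_idle)
                    arm_pm_idle();          /* SoC-specific idle, entered with IRQs off */
            else
                    cpu_do_idle();
            local_irq_enable();             /* re-enabled next to where they were disabled */
    }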
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-and-tested-by: Jamie Iles <jamie@jamieiles.com>
Fix the following build warning:
CC arch/arm/kernel/setup.o
In file included from arch/arm/kernel/setup.c:39:
arch/arm/include/asm/elf.h:102:1: warning: "vmcore_elf64_check_arch" redefined
In file included from arch/arm/kernel/setup.c:24:
include/linux/crash_dump.h:30:1: warning: this is the location of the previous definition
Since commit 93a72052 (crash_dump: export is_kdump_kernel to modules, consolidate elfcorehdr_addr, setup_elfcorehdr and saved_max_pfn)
the inclusion of <linux/crash_dump.h> is no longer needed.
Remove the inclusion of <linux/crash_dump.h> and the build warning is fixed.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All other ports use "Kernel code" to identify the Kernel text segment
in /proc/iomem. Change the ARM resources to do the same.
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We can stall RCU processing on SMP platforms if a CPU sits in its idle
loop for a long time. This happens because we don't call irq_enter()
and irq_exit() around generic_smp_call_function_interrupt() and
friends. Add the necessary calls, and remove the one from within
ipi_timer(), so that they're all in a common place.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/audit: (29 commits)
audit: no leading space in audit_log_d_path prefix
audit: treat s_id as an untrusted string
audit: fix signedness bug in audit_log_execve_info()
audit: comparison on interprocess fields
audit: implement all object interfield comparisons
audit: allow interfield comparison between gid and ogid
audit: complex interfield comparison helper
audit: allow interfield comparison in audit rules
Kernel: Audit Support For The ARM Platform
audit: do not call audit_getname on error
audit: only allow tasks to set their loginuid if it is -1
audit: remove task argument to audit_set_loginuid
audit: allow audit matching on inode gid
audit: allow matching on obj_uid
audit: remove audit_finish_fork as it can't be called
audit: reject entry,always rules
audit: inline audit_free to simplify the look of generic code
audit: drop audit_set_macxattr as it doesn't do anything
audit: inline checks for not needing to collect aux records
audit: drop some potentially inadvisable likely notations
...
Use evil merge to fix up grammar mistakes in Kconfig file.
Bad speling and horrible grammar (and copious swearing) is to be
expected, but let's keep it to commit messages and comments, rather than
expose it to users in config help texts or printouts.
This patch provides functionality to audit system call events on the
ARM platform. The implementation was based off the structure of the
MIPS platform and information in this
(http://lists.fedoraproject.org/pipermail/arm/2009-October/000382.html)
mailing list thread. The required audit_syscall_exit and
audit_syscall_entry checks were added to ptrace using the standard
registers for system call values (r0 through r3). A thread information
flag was added for auditing (TIF_SYSCALL_AUDIT) and a meta-flag was
added (_TIF_SYSCALL_WORK) to simplify modifications to the syscall
entry/exit. Now, if either the TRACE flag is set or the AUDIT flag is
set, the syscall_trace function will be executed. The proper changes
were made to Kconfig to allow CONFIG_AUDITSYSCALL to be enabled.
Due to platform availability limitations, this patch was only tested
on the Android platform running the modified "android-goldfish-2.6.29"
kernel. A test compile was performed using Code Sourcery's
cross-compilation toolset and the current linux-3.0 stable kernel. The
changes compile without error. I'm hoping, due to the simple modifications,
the patch is "obviously correct".
Signed-off-by: Nathaniel Husted <nhusted@gmail.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
This patch adds a check for the presence of the LPAE feature during the
CPU initialisation. If not present, it reports an error when
CONFIG_DEBUG_LL is enabled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci: (80 commits)
x86/PCI: Expand the x86_msi_ops to have a restore MSIs.
PCI: Increase resource array mask bit size in pcim_iomap_regions()
PCI: DEVICE_COUNT_RESOURCE should be equal to PCI_NUM_RESOURCES
PCI: pci_ids: add device ids for STA2X11 device (aka ConneXT)
PNP: work around Dell 1536/1546 BIOS MMCONFIG bug that breaks USB
x86/PCI: amd: factor out MMCONFIG discovery
PCI: Enable ATS at the device state restore
PCI: msi: fix imbalanced refcount of msi irq sysfs objects
PCI: kconfig: English typo in pci/pcie/Kconfig
PCI/PM/Runtime: make PCI traces quieter
PCI: remove pci_create_bus()
xtensa/PCI: convert to pci_scan_root_bus() for correct root bus resources
x86/PCI: convert to pci_create_root_bus() and pci_scan_root_bus()
x86/PCI: use pci_scan_bus() instead of pci_scan_bus_parented()
x86/PCI: read Broadcom CNB20LE host bridge info before PCI scan
sparc32, leon/PCI: convert to pci_scan_root_bus() for correct root bus resources
sparc/PCI: convert to pci_create_root_bus()
sh/PCI: convert to pci_scan_root_bus() for correct root bus resources
powerpc/PCI: convert to pci_create_root_bus()
powerpc/PCI: split PHB part out of pcibios_map_io_space()
...
Fix up conflicts in drivers/pci/msi.c and include/linux/pci_regs.h due
to the same patches being applied in other branches.
* 'driver-core-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (73 commits)
arm: fix up some samsung merge sysdev conversion problems
firmware: Fix an oops on reading fw_priv->fw in sysfs loading file
Drivers:hv: Fix a bug in vmbus_driver_unregister()
driver core: remove __must_check from device_create_file
debugfs: add missing #ifdef HAS_IOMEM
arm: time.h: remove device.h #include
driver-core: remove sysdev.h usage.
clockevents: remove sysdev.h
arm: convert sysdev_class to a regular subsystem
arm: leds: convert sysdev_class to a regular subsystem
kobject: remove kset_find_obj_hinted()
m86k: gpio - convert sysdev_class to a regular subsystem
mips: txx9_sram - convert sysdev_class to a regular subsystem
mips: 7segled - convert sysdev_class to a regular subsystem
sh: dma - convert sysdev_class to a regular subsystem
sh: intc - convert sysdev_class to a regular subsystem
power: suspend - convert sysdev_class to a regular subsystem
power: qe_ic - convert sysdev_class to a regular subsystem
power: cmm - convert sysdev_class to a regular subsystem
s390: time - convert sysdev_class to a regular subsystem
...
Fix up conflicts with 'struct sysdev' removal from various platform
drivers that got changed:
- arch/arm/mach-exynos/cpu.c
- arch/arm/mach-exynos/irq-eint.c
- arch/arm/mach-s3c64xx/common.c
- arch/arm/mach-s3c64xx/cpu.c
- arch/arm/mach-s5p64x0/cpu.c
- arch/arm/mach-s5pv210/common.c
- arch/arm/plat-samsung/include/plat/cpu.h
- arch/powerpc/kernel/sysfs.c
and fix up cpu_is_hotpluggable() as per Greg in include/linux/cpu.h
* 'for-linus' of git://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm: (207 commits)
ARM: 7267/1: Remove BUILD_BUG_ON from asm/bug.h
ARM: 7269/1: mach-sa1100: fix sched_clock breakage
ARM: 7198/1: arm/imx6: add restart support for imx6q
ARM: restart: remove the now empty arch_reset()
ARM: restart: remove comments about adding code to arch_reset()
ARM: restart: lpc32xx & u300: remove unnecessary printk
ARM: restart: plat-samsung: remove plat/reset.h and s5p_reset_hook
ARM: restart: w90x900: use new restart hook
ARM: restart: Versatile Express: use new restart hook
ARM: restart: versatile: use new restart hook
ARM: restart: u300: use new restart hook
ARM: restart: tegra: use new restart hook
ARM: restart: spear: use new restart hook
ARM: restart: shark: use new restart hook
ARM: restart: sa1100: use new restart hook
ARM: 7252/1: restart: S5PV210: use new restart hook
ARM: 7251/1: restart: S5PC100: use new restart hook
ARM: 7250/1: restart: S5P64X0: use new restart hook
ARM: 7266/1: restart: S3C64XX: use new restart hook
ARM: 7265/1: restart: S3C24XX: use new restart hook
...
Fix up trivial conflict in arch/arm/mm/init.c due to removal of
memblock_init() clashing with the movement of the sorting of the meminfo
array.
Convert from pci_scan_bus() to pci_scan_root_bus() and remove root bus
resource fixups. This fixes the problem of "early" and "header" quirks
seeing incorrect root bus resources.
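The conversion has roughly this shape (a sketch; the resource setup shown is
illustrative):

    LIST_HEAD(resources);

    /* before: bus = pci_scan_bus(nr, &pci_ops, sys);
     * the core then invented default root bus resources */

    /* after: hand the real root bus resources to the core up front */
    pci_add_resource(&resources, &ioport_resource);
    pci_add_resource(&resources, &iomem_resource);
    bus = pci_scan_root_bus(NULL, nr, &pci_ops, sys, &resources);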
CC: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
This patch converts ARM's architecture-specific inlined
'pcibios_set_master()' routine to a non-inlined function. This will
allow follow-on patches to create a generic 'pcibios_set_master()'
function using the '__weak' attribute, which can be used by all
architectures as a default and, if necessary, overridden by
architecture-specific code.
Converting 'pcibios_set_master()' to a non-inlined function will allow
ARM's 'pcibios_set_master()' implementation to remain
architecture-specific after the generic version is introduced and thus
not change current behavior.
Note that ARM also has a non-inlined 'pcibios_set_master()' that is
used if CONFIG_PCI_HOST_ITE8152 is defined. This patch does not
change any behavior there either.
No functional change.
Signed-off-by: Myron Stowe <myron.stowe@redhat.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
This resolves the conflict in the arch/arm/mach-s3c64xx/s3c6400.c file,
and it fixes the build error in the arch/x86/kernel/microcode_core.c
file, that the merge did not catch.
The microcode_core.c patch was provided by Stephen Rothwell
<sfr@canb.auug.org.au> who was invaluable in the merge issues involved
with the large sysdev removal process in the driver-core tree.
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (64 commits)
cpu: Export cpu_up()
rcu: Apply ACCESS_ONCE() to rcu_boost() return value
Revert "rcu: Permit rt_mutex_unlock() with irqs disabled"
docs: Additional LWN links to RCU API
rcu: Augment rcu_batch_end tracing for idle and callback state
rcu: Add rcutorture tests for srcu_read_lock_raw()
rcu: Make rcutorture test for hotpluggability before offlining CPUs
driver-core/cpu: Expose hotpluggability to the rest of the kernel
rcu: Remove redundant rcu_cpu_stall_suppress declaration
rcu: Adaptive dyntick-idle preparation
rcu: Keep invoking callbacks if CPU otherwise idle
rcu: Irq nesting is always 0 on rcu_enter_idle_common
rcu: Don't check irq nesting from rcu idle entry/exit
rcu: Permit dyntick-idle with callbacks pending
rcu: Document same-context read-side constraints
rcu: Identify dyntick-idle CPUs on first force_quiescent_state() pass
rcu: Remove dynticks false positives and RCU failures
rcu: Reduce latency of rcu_prepare_for_idle()
rcu: Eliminate RCU_FAST_NO_HZ grace-period hang
rcu: Avoid needlessly IPIing CPUs at GP end
...
Remove the now empty arch_reset() from all the mach/system.h includes,
and remove its callsite. Remove arm_machine_restart() as this function
no longer does anything useful.
For samsung platforms, remove the include of mach/system-reset.h and
plat/system-reset.h from their respective mach/system.h headers as these
just define their arch_reset functions. As a result, the s3c2410 and
plat-samsung system-reset.h files are no longer referenced, so remove
these files entirely.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This break-out from Colin Cross' cpufreq-aware TWD patch
will handle the case when our localtimer's clock changes with
the cpu clock. A cpufreq transition notifier will be registered
only if the platform has supplied a specified clock to the TWD.
After a cpufreq transition, update the clockevent's frequency
by fetching the new clock rate from the clock framework and
reprogram the next clock event.
The necessary changes in the clockevents framework were done by
Thomas Gleixner in kernel v3.0.
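A condensed sketch of the mechanism, assuming a twd_clk clock and a per-cpu
clockevent pointer twd_ce as described above (names are illustrative and the
real driver updates every CPU's TWD, not just the local one):

    static int twd_cpufreq_transition(struct notifier_block *nb,
                                      unsigned long state, void *data)
    {
            if (state == CPUFREQ_POSTCHANGE)
                    /* re-read the TWD rate and reprogram the next event */
                    clockevents_update_freq(__this_cpu_read(twd_ce),
                                            clk_get_rate(twd_clk));
            return NOTIFY_OK;
    }

    static struct notifier_block twd_cpufreq_nb = {
            .notifier_call = twd_cpufreq_transition,
    };

    /* registered only when a TWD clock was supplied:
     * cpufreq_register_notifier(&twd_cpufreq_nb, CPUFREQ_TRANSITION_NOTIFIER); */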
ChangeLog v1->v2:
- Replace IS_ERR_OR_NULL() with IS_ERR() in twd_clk check.
- Update code to use the already existing per-cpu array of TWD
clockevents instead of adding cruft.
[Broke out, ifdef:ed CPUfreq stuff for non-cpufreq configs]
[Rebased to newer TWD base with per-CPU clock array]
Signed-off-by: Colin Cross <ccross@android.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This break-out from Colin Cross' cpufreq-aware TWD patch will
optionally retrieve the clock rate of the TWD from an external
clock. A variant of this patch has been proposed by Rob Herring
as well.
The basic idea is to avoid recalibrating the rate of the clock
at boot if the platform already knows what rate the clock to the
TWD block has.
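Roughly, the optional path looks like this (a sketch; the clock lookup name
is illustrative and the calibration fallback follows the existing smp_twd
code):

    struct clk *clk = clk_get_sys("smp_twd", NULL);

    if (!IS_ERR(clk)) {
            clk_prepare(clk);
            clk_enable(clk);
            twd_timer_rate = clk_get_rate(clk);     /* no runtime calibration needed */
    } else {
            twd_calibrate_rate();                   /* fall back to calibration at boot */
    }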
ChangeLog v1->v2: added clk_[prepare|unprepare] calls.
[Broke out of larger SMP TWD patch]
Signed-off-by: Colin Cross <ccross@android.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This break-out from Colin Cross' cpufreq-aware TWD patch will
just modernize the clock event registration code to use
clockevents_config_and_register().
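That is, the hand-rolled mult/shift and min/max delta setup is replaced by a
single call along these lines (the delta limits shown are illustrative):

    /* evt is this CPU's TWD clock_event_device; min/max delta in cycles */
    clockevents_config_and_register(evt, twd_timer_rate, 0xf, 0xffffffff);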
[Broke out of larger SMP TWD patch]
Signed-off-by: Colin Cross <ccross@android.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
After all sysdev classes are ported to regular driver core entities, the
sysdev implementation will be entirely removed from the kernel.
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
sched_clock() is yet another blocker on the road to the single
image. This patch implements an idea by Russell King:
http://www.spinics.net/lists/linux-omap/msg49561.html
Instead of asking the platform to implement both sched_clock()
itself and the rollover callback, simply register a read()
function, and let the ARM code care about sched_clock() itself,
the conversion to ns and the rollover. sched_clock() uses
this read() function as an indirection to the platform code.
If the platform doesn't provide a read(), the code falls back
to the jiffy counter (just like the default sched_clock).
This allows some simplifications and possibly some footprint gain
when multiple platforms are compiled in. Among the drawbacks is
the removal of the *_fixed_sched_clock optimization, which could
negatively impact some platforms (sa1100, tegra, versatile
and omap).
Tested on 11MPCore, OMAP4 and Tegra.
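With this scheme a platform only supplies the raw counter read; registration
looks roughly like this (a sketch against the ARM setup_sched_clock(read,
bits, rate) interface of that era; the register name is illustrative):

    static u32 notrace my_counter_read(void)
    {
            return readl_relaxed(MY_TIMER_COUNTER); /* free-running counter */
    }

    /* 32-bit counter at 24 MHz: the core does the ns conversion and rollover */
    setup_sched_clock(my_counter_read, 32, 24000000);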
Cc: Imre Kaloz <kaloz@openwrt.org>
Cc: Eric Miao <eric.y.miao@gmail.com>
Cc: Colin Cross <ccross@android.com>
Cc: Erik Gilling <konkers@android.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Sascha Hauer <kernel@pengutronix.de>
Cc: Alessandro Rubini <rubini@unipv.it>
Cc: STEricsson <STEricsson_nomadik_linux@list.st.com>
Cc: Lennert Buytenhek <kernel@wantstofly.org>
Cc: Ben Dooks <ben-linux@fluff.org>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Krzysztof Halasa <khc@pm.waw.pl>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The bisection implemented in unwind_find_origin() stopped too early. If
there was only a single entry left to check, the original code just took
the end point as the origin, which might be wrong.
This was introduced in commit de66a97901 ("ARM: 7187/1: fix unwinding
for XIP kernels").
Reported-and-tested-by: Nick Bowler <nbowler@elliptictech.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch changes the kprobes implementation to use the generic ARM
instruction set condition code checks, rather than a dedicated
implementation.
Signed-off-by: Leif Lindholm <leif.lindholm@arm.com>
Acked-by: Jon Medhurst <tixy@yxit.co.uk>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch fixes two separate issues with the SWP emulation handler:
1: Certain processors implementing ARMv7-A can (legally) take an
undef exception even when the condition code would have meant that
the instruction should not have been executed.
2: Opcodes with all flags set (condition code = 0xf) have been reused
in recent, and not-so-recent, versions of the ARM architecture to
implement unconditional extensions to the instruction set. The
existing code would still have processed any undefs triggered by
executing an opcode with such a value.
This patch uses the new generic ARM instruction set condition code
checks to implement proper handling of these situations.
Signed-off-by: Leif Lindholm <leif.lindholm@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch breaks the ARM condition checking code out of nwfpe/fpopcode.{ch}
into a standalone file for opcode operations. It also modifies the code
somewhat for coding style adherence, and adds some temporary variables for
increased readability.
Signed-off-by: Leif Lindholm <leif.lindholm@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The Integrator AP/CP can have a varying set of core modules, some
(like ARM920T) are so old that trying to read the TCM status register
with CP15 will make them hang. So we need to make sure that we are
running on v5 or later in order to be able to activate this for
the Integrator. (The Integrator with CM926EJ-S has 32+32 kb of TCM
memory.)
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that there is a common way to reset the machine, let's use it
instead of reinventing the wheel in the kexec backend.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Sending IPI_CPU_STOP to a CPU causes it to execute a busy cpu_relax
loop forever. This makes it impossible to kexec successfully on an SMP
system since the secondary CPUs do not reset.
This patch adds a callback to platform_cpu_kill, defined when
CONFIG_HOTPLUG_CPU=y, from the ipi_cpu_stop handling code. This function
currently just returns 1 on all platforms that define it but allows them
to do something more sophisticated in the future.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tools such as kexec and CPU hotplug require a way to reset the processor
and branch to some code in physical space. This requires various bits of
jiggery pokery with the caches and MMU which, when it goes wrong, tends
to lock up the system.
This patch fleshes out the soft_restart implementation so that it
branches to the reset code using the identity mapping. This requires us
to change to a temporary stack, held within the kernel image as a static
array, to avoid conflicting with the new view of memory.
Signed-off-by: Will Deacon <will.deacon@arm.com>
arm_dma_zone_size is used by arm_bootmem_free() which is called by
paging_init(). Thus it needs to be set before calling it.
Signed-off-by: Arnaud Patard <arnaud.patard@rtp-net.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Those two APIs were provided to optimize the calls of
tick_nohz_idle_enter() and rcu_idle_enter() into a single
irq disabled section. This way no interrupt happening in-between would
needlessly process any RCU job.
Now we are talking about an optimization for which benefits
have yet to be measured. Let's start simple and completely decouple
idle rcu and dyntick idle logics to simplify.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
It is assumed that rcu won't be used once we switch to tickless
mode and until we restart the tick. However this is not always
true, as in x86-64 where we dereference the idle notifiers after
the tick is stopped.
To prepare for fixing this, add two new APIs:
tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().
If no use of RCU is made in the idle loop between
tick_nohz_enter_idle() and tick_nohz_exit_idle() calls, the arch
must instead call the new *_norcu() version such that the arch doesn't
need to call rcu_idle_enter() and rcu_idle_exit().
Otherwise the arch must call tick_nohz_enter_idle() and
tick_nohz_exit_idle() and also call explicitly:
- rcu_idle_enter() after its last use of RCU before the CPU is put
to sleep.
- rcu_idle_exit() before the first use of RCU after the CPU is woken
up.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The tick_nohz_stop_sched_tick() function, which tries to delay
the next timer tick as long as possible, can be called from two
places:
- From the idle loop to start the dyntick idle mode
- From interrupt exit if we have interrupted the dyntick
idle mode, so that we reprogram the next tick event in
case the irq changed some internal state that requires this
action.
There are only a few minor differences between the two callers that
are handled by that function, driven by the ts->inidle
cpu variable and the inidle parameter. Together these guarantee
that we only update the dyntick mode on irq exit if we actually
interrupted the dyntick idle mode, and that we enter the RCU extended
quiescent state from idle loop entry only.
Split this function into:
- tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
dynticks idle mode unconditionally if it can, and enters into RCU
extended quiescent state.
- tick_nohz_irq_exit() which only updates the dynticks idle mode
when ts->inidle is set (ie: if tick_nohz_idle_enter() has been called).
To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
into tick_nohz_idle_exit().
This simplifies the code and micro-optimizes the irq exit path (no need
for local_irq_save there). This also prepares for the split between
dynticks and rcu extended quiescent state logics. We'll need this split to
further fix illegal uses of RCU in extended quiescent states in the idle
loop.
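The resulting call pattern is, schematically (not the exact scheduler code):

    /* idle loop */
    tick_nohz_idle_enter();         /* set ts->inidle, stop the tick if possible */
    while (!need_resched())
            arch_cpu_sleep();       /* illustrative sleep primitive              */
    tick_nohz_idle_exit();

    /* interrupt exit (irq_exit()) */
    tick_nohz_irq_exit();           /* only re-evaluates the tick if ts->inidle  */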
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
24aa07882b (memblock, x86: Replace memblock_x86_reserve/free_range()
with generic ones) removed arch/x86/include/asm/memblock.h and dropped
its inclusion from include/linux/memblock.h which breaks other
architectures which depended on the generic memblock.h pulling in the
arch specific one.
However, the proper fix isn't adding back the asm inclusion. memblock
doesn't have any arch dependent part and doesn't need arch specific
header file and asm/memblock.h files are either practically empty or
contain mostly unrelated arch specific stuff.
* In microblaze, sh, powerpc, sparc and openrisc, asm/memblock.h is
either empty or just contains unused MEMBLOCK_DBG() macro. Remove
them.
* In arm and unicore32, asm/memblock.h contains arch specific stuff.
Include it directly from its users. It might be a good idea to
rename the header file to avoid confusion.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "H. Peter Anvin" <hpa@zytor.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
The DFSR and IFSR register format is different when LPAE is enabled. In
addition, DFSR and IFSR have similar definitions for the fault type.
This modifies the fault code to correctly handle the new format.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds the MMU initialisation for the LPAE page table format.
The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new
proc-v7-3level.S file contains the TTB initialisation, context switch
and PTE setting code with the LPAE. The TTBRx split is based on the
PAGE_OFFSET with TTBR1 used for the kernel mappings. The 36-bit mappings
(supersections) and a few other memory types in mmu.c are conditionally
compiled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Before we enable the MMU, we must ensure that the TTBR registers contain
sane values. After the MMU has been enabled, we jump to the *virtual*
address of the following function, so we also need to ensure that the
SCTLR write has taken effect.
This patch adds ISB instructions around the SCTLR write to ensure the
visibility of the above.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The ARM SMP booting code allocates a temporary set of page tables
containing an identity mapping of the kernel image and provides this
to secondary CPUs for initial booting.
In reality, we only need to include the __turn_mmu_on function in the
identity mapping since the rest of the kernel is executing from virtual
addresses after this point.
This patch adds __turn_mmu_on to the .idmap.text section, allowing the
SMP booting code to use the idmap_pgd directly and not have to populate
its own set of page tables.
As a result of this patch, we can make the identity_mapping_add function
static (since it is only used within mm/idmap.c) and also remove the
identity_mapping_del function. The identity map population is moved to
an early initcall so that it is setup in time for secondary CPU bringup.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
__create_page_tables identity maps the region of memory from
__enable_mmu to the end of __turn_mmu_on.
In preparation for including __turn_mmu_on in the .idmap.text section,
this patch modifies the identity mapping so that it only includes the
__turn_mmu_on code.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM CPU suspend code requires cpu_resume_mmu to be identity mapped
in order to re-enable the MMU when coming out of suspend. Currently,
this is accomplished by maintaining a suspend_pgd with the relevant
mapping put in place at init time.
This patch replaces the use of suspend_pgd with the new idmap_pgd.
cpu_resume_mmu is placed in the .idmap.text section so that it is
included in the identity map.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Dave Martin <dave.martin@linaro.org>
Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When disabling and re-enabling the MMU, it is necessary to take out an
identity mapping for the code that manipulates the SCTLR in order to
avoid it disappearing from under our feet. This is useful when soft
rebooting and returning from CPU suspend.
This patch allocates a set of page tables during boot and populates them
with an identity mapping for the .idmap.text section. This means that
users of the identity map do not need to manage their own pgd and can
instead annotate their functions with __idmap or, in the case of assembly
code, place them in the correct section.
Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In the unlikely case that a platform registers a PMU platform_device
when running on a CPU that is unsupported by perf, we will encounter a
NULL dereference when trying to assign the platform_device to the
cpu_pmu structure.
This patch checks that the CPU is supported by perf before assigning
the platform_device.
Reported-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The linker places the unwind tables in readonly sections. So when using
an XIP kernel these are located in ROM and cannot be modified.
For that reason the current approach to convert the relative offsets in
the unwind index to absolute addresses early in the boot process doesn't
work with XIP.
The offsets in the unwind index section are signed 31 bit numbers and
the structs are sorted by this offset. So it first has offsets between
0x40000000 and 0x7fffffff (i.e. the negative offsets) and then offsets
between 0x00000000 and 0x3fffffff. When separating these two blocks the
numbers are sorted even when interpreting the offsets as unsigned longs.
So determine the first non-negative entry once and track that using the
new origin pointer. The actual bisection can then use a plain unsigned
long comparison. The only thing that makes the new bisection more
complicated is that the offsets are relative to their position in the
index section, so the key to search needs to be adapted accordingly in
each step.
Moreover, several consts are added to catch future writes, and the
member "addr" of struct unwind_idx is renamed to "addr_offset" to better
match the new semantics. (This has the additional benefit of breaking
eventual users at compile time to make them aware of the change.)
In my tests the new algorithm was a tad faster than the original; it
also has the upside of not needing the initial conversion, which saves
some boot time and makes it possible to unwind even earlier.
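For reference, resolving one of these self-relative entries during the search
looks roughly like this (a sketch of the prel31 handling described above):

    /* offsets are 31-bit, signed, and relative to their own location */
    static unsigned long prel31_to_addr(const unsigned long *ptr)
    {
            /* sign-extend bit 30 into bit 31 */
            long offset = (((long)*ptr) << 1) >> 1;

            return (unsigned long)ptr + offset;
    }

    /* the bisection compares the search address against
     * prel31_to_addr(&mid->addr_offset) instead of a precomputed
     * absolute address */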
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf: Fix loss of notification with multi-event
perf, x86: Force IBS LVT offset assignment for family 10h
perf, x86: Disable PEBS on SandyBridge chips
trace_events_filter: Use rcu_assign_pointer() when setting ftrace_event_call->filter
perf session: Fix crash with invalid CPU list
perf python: Fix undefined symbol problem
perf/x86: Enable raw event access to Intel offcore events
perf: Don't use -ENOSPC for out of PMU resources
perf: Do not set task_ctx pointer in cpuctx if there are no events in the context
perf/x86: Fix PEBS instruction unwind
oprofile, x86: Fix crash when unloading module (nmi timer mode)
oprofile: Fix crash when unloading module (hr timer mode)
This patch introduces .enable_irq and .disable_irq into
struct arm_pmu_platdata, so platform specific irq enablement
can be handled after request_irq, and platform specific irq
disablement can be handled before free_irq.
This patch is for support of pmu irq routed from CTI on omap4.
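A plausible shape for the platform data, per the description above (the exact
callback signatures in the driver may differ):

    struct arm_pmu_platdata {
            irqreturn_t (*handle_irq)(int irq, void *dev,
                                      irq_handler_t pmu_handler);
            void (*enable_irq)(int irq);    /* called after request_irq()  */
            void (*disable_irq)(int irq);   /* called before free_irq()    */
    };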
Acked-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
armpmu_get_max_events is only called from perf_num_counters, so we can
inline it there. It existed as a separate entity as a hangover from
the original perf-based oprofile implementation.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 8f622422 ("perf events: Add generic front-end and back-end
stalled cycle event definitions") added two new ABI events for counting
stalled cycles.
This patch adds support for these new events to the ARM perf
implementation.
Cc: Jamie Iles <jamie@jamieiles.com>
Cc: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch updates the ARMv7 perf event numbers so that:
(1) A consistent naming scheme is used between different CPUs.
(2) Only events actually used by Linux are described.
(3) Where possible, architected events are used in preference to
CPU-specific events.
This results in the removal of a load of unused, hardcoded data and
makes it more clear as to which events are supported on each PMU.
Cc: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
kernel/sched.c:7354:2: warning: initialization from incompatible pointer type
Align the cpu_coregroup_mask() prototype with the sched_domain_mask_f
typedef by using int cpu instead of unsigned int cpu.
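That is, the ARM prototype becomes (matching the scheduler's
sched_domain_mask_f typedef, which takes an int):

    const struct cpumask *cpu_coregroup_mask(int cpu);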
Cc: <stable@vger.kernel.org>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The SWP instruction is deprecated on ARMv6 and with ARMv7 it will be
UNDEFINED when CONFIG_SWP_EMULATE is selected. In this case, probing a
SWP instruction will cause an oops when the kprobes emulation code
executes an undefined instruction.
As the SWP instruction should be rare or non-existent in kernels for
ARMv6 and later, we can simply avoid these problems by not allowing
probing of these.
Reported-by: Leif Lindholm <leif.lindholm@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@arm.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There is a kprobes testcase for the instruction "strd r2, [r3], r4".
This has unpredictable behaviour as it uses r3 for register writeback
addressing and also stores it to memory.
On a cortex A9, this testcase would fail because the instruction writes
the updated value of r3 to memory, whereas the kprobes emulation code
writes the original value.
Fix this by changing the testcase to use r5 instead of r3.
Reported-by: Leif Lindholm <leif.lindholm@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@arm.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When compiling kprobes-test-thumb.c an error like below may occur:
/tmp/ccKcuJcG.s:19179: Error: offset out of range
This is caused by the compiler underestimating the size of the inline
assembler instructions containing ".space 0x1000" and failing to spill
the literal pool in time to prevent the generation of PC relative load
instruction with invalid offsets.
The fix implemented by this patch is to replace a single large .space
directive by a number of 4 byte .space's. This requires splitting the
macros which generate test cases for branch instructions into two forms:
one with, and one without support for inserting extra code between
branch and target.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Jon Medhurst <jon.medhurst@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Fix compilation failure, when Thumb support is not enabled:
arch/arm/kernel/entry-armv.S: Assembler messages:
arch/arm/kernel/entry-armv.S:501: Error: backward ref to unknown label "2:"
arch/arm/kernel/entry-armv.S:502: Error: backward ref to unknown label "3:"
make[2]: *** [arch/arm/kernel/entry-armv.o] Error 1
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Attempting to use a hardware counter on a platform with a supported PMU
but where the platform_device (defining the interrupts) has not been
registered results in a NULL pointer dereference.
This patch fixes the problem by checking that we actually have a platform
device registered before attempting to grab the interrupts.
Reported-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Conflicts:
arch/arm/mach-omap2/board-4430sdp.c
arch/arm/mach-omap2/board-omap4panda.c
arch/arm/mach-omap2/include/mach/omap4-common.h
arch/arm/plat-omap/include/plat/irqs.h
The changes to omap4-common.h were moved to arch/arm/mach-omap2/common.h
and the other trivial conflicts resolved. The now empty ifdef in irqs.h
was also eliminated.
This patch implements a workaround for PL310 erratum 769419. On
revisions of the PL310 prior to r3p2, the Store Buffer does not
automatically drain. This can cause normal, non-cacheable writes to be
retained when the memory system is idle, leading to suboptimal I/O
performance for drivers using coherent DMA.
This patch adds an optional wmb() call to the cpu_idle loop. On systems
with an outer cache, this causes an explicit flush of the store buffer.
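A sketch of the shape of the change; the config symbol guarding the
workaround is assumed here:
  while (!need_resched()) {
  #ifdef CONFIG_PL310_ERRATA_769419
      /* On affected PL310 revisions, force a drain of the store
       * buffer before idling; with an outer cache registered,
       * wmb() includes an outer-cache sync. */
      wmb();
  #endif
      cpu_do_idle();
  }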
Cc: stable@vger.kernel.org
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We only need to set the system up for a soft-restart if we're going to
be doing a soft-restart. Provide a new function (soft_restart()) which
does the setup and final call for this, and make platforms use it.
Eliminate the call to setup_restart() from the default handler.
This means that a platform's arch_reset() function is no longer called
with the page tables prepared for a soft-restart, and the caches will
still be enabled.
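Roughly the shape soft_restart() takes (a sketch; the exact ordering of
the cache and MMU teardown steps may differ):
  void soft_restart(unsigned long addr)
  {
      /* Only now that we know a soft restart is wanted do we tear
       * the world down: interrupts off, caches cleaned and disabled,
       * identity mapping in place, then jump to the reset code. */
      local_irq_disable();
      local_fiq_disable();

      flush_cache_all();
      cpu_proc_fin();        /* turn off caching */
      flush_cache_all();     /* push out anything left behind */

      setup_mm_for_reboot();
      cpu_reset(addr);
  }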
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Krzysztof Hałasa <khc@pm.waw.pl>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Acked-by: Wan ZongShun <mcuos.com@gmail.com>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The meminfo array has to be sorted before sanity_check_meminfo() in
arch/arm/mm/mmu.c is called for it to work properly. This also allows
for a simpler find_limits() in arch/arm/mm/init.c.
The sort is moved to arch/arm/kernel/setup.c because that's where the
meminfo array is populated. Eventually this should be improved upon
to make the memory bank parser a bit more robust against problems
such as overlapping memory ranges.
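A sketch of the sort, using the generic kernel sort() and comparing
banks by start pfn (helper names assumed from the ARM setup code):
  #include <linux/sort.h>

  static int __init meminfo_cmp(const void *_a, const void *_b)
  {
      const struct membank *a = _a, *b = _b;
      long cmp = bank_pfn_start(a) - bank_pfn_start(b);

      return cmp < 0 ? -1 : cmp > 0 ? 1 : 0;
  }

  /* in setup_arch(), once the meminfo array has been populated */
  sort(&meminfo.bank, meminfo.nr_banks, sizeof(meminfo.bank[0]),
       meminfo_cmp, NULL);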
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
These two syscalls were introduced during the last merge window.
Add the entries into the ARM call tables for them.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When validating an event group, we call pmu->get_event_idx for each
group member in order to check that the group can be scheduled as a
unit on an empty PMU.
As a result of 3fc2c830 ("ARM: perf: remove event limit from
pmu_hw_events"), the used_mask member of struct cpu_hw_events must be
setup explicitly, something which we don't do for the fake cpu_hw_events
used for validation.
This patch sets up an empty used_mask for the fake validation
cpu_hw_events, preventing NULL deferences when trying to get the event
index.
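A sketch of the fix in validate_group(), with the surrounding helper
(validate_event()) assumed from the existing code:
  static int validate_group(struct perf_event *event)
  {
      struct perf_event *sibling, *leader = event->group_leader;
      struct cpu_hw_events fake_pmu;

      /* fake_pmu lives on the stack: clear used_mask explicitly so
       * get_event_idx() sees a completely empty PMU. */
      memset(&fake_pmu.used_mask, 0, sizeof(fake_pmu.used_mask));

      if (!validate_event(&fake_pmu, leader))
          return -EINVAL;

      list_for_each_entry(sibling, &leader->sibling_list, group_entry)
          if (!validate_event(&fake_pmu, sibling))
              return -EINVAL;

      if (!validate_event(&fake_pmu, event))
          return -EINVAL;

      return 0;
  }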
Reported-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit b0e89590 ("ARM: PMU: move CPU PMU platform device handling and
init into perf") inadvertently removed the EXPORT_SYMBOL_GPL on
release_pmu, so out-of-tree modules can no longer play nice with perf,
even if they tried in the first place.
This patch re-exports the symbol.
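The change itself is just the export annotation, restored after the
function definition in the PMU reservation code; a sketch:
  /* re-export so out-of-tree PMU users can release the hardware again */
  EXPORT_SYMBOL_GPL(release_pmu);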
Reported-by: Jon Medhurst (Tixy) <jon.medhurst@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Even when CONFIG_MULTI_IRQ_HANDLER is selected, the core code
requires the arch_irq_handler_default macro to be defined as
a fallback.
It turns out nobody is using that particular feature as both PXA
and shmobile have all their machine descriptors populated with
the interrupt handler, leaving unused code (or empty macros) in
their entry-macro.S file just to be able to compile entry-armv.S.
Make CONFIG_MULTI_IRQ_HANDLER mutually exclusive with
arch_irq_handler_default, which allows one test to be removed from the
hot path. Also clean up both the PXA and shmobile entry-macro.S files.
Cc: Paul Mundt <lethal@linux-sh.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
People (Linus) objected to using -ENOSPC to signal not having enough
resources on the PMU to satisfy the request. Use -EINVAL.
Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-xv8geaz2zpbjhlx0svmpp28n@git.kernel.org
[ merged to newer kernel, fixed up MIPS impact ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
setup_processor copies the arch_name and elf_name fields out of the
selected proc_info_list into two fixed size buffers.
Since the proc_info_list structure is defined in a proc_*.S assembly
file, this can lead to subtle errors if the strings defined there are
too long (for example, corrupting the machine ID).
This patch uses snprintf instead of sprintf to ensure that these buffers
are not overrun.
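A sketch of the bounded copies; the variable and buffer-size macro
names here are assumptions:
  /* an over-long name in a proc_*.S file is now truncated instead of
   * overrunning the buffer (and e.g. corrupting the machine ID) */
  snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",
           list->arch_name, ENDIANNESS);
  snprintf(elf_platform, ELF_PLATFORM_SIZE, "%s%c",
           list->elf_name, ENDIANNESS);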
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
setup_mm_for_reboot() doesn't make use of its argument, so remove it.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the handling of a failed reboot into machine_restart() so this
condition is always caught, even if a platform decides to hook the
restart via arm_pm_restart().
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Change 'soft_reboot' into a more generic 'restart_mode' variable,
allowing the default restart mode to be specified.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a restart hook to the machine_desc record so we don't have to
populate all platforms with init_early methods to initialize the
arm_pm_restart function pointer.
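A sketch of how a (hypothetical) board would use the new hook instead
of an init_early method:
  static void myboard_restart(char mode, const char *cmd)
  {
      /* poke the board's reset controller here */
  }

  MACHINE_START(MYBOARD, "Hypothetical board")
      /* ... the usual fields ... */
      .restart    = myboard_restart,
  MACHINE_END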
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Recent gcc versions generate unaligned accesses by default on ARMv6 and
later processors. This patch ensures that the SCTLR.A bit is always
cleared on such processors to avoid the kernel trapping before
alignment_init() is called.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: John Linn <John.Linn@xilinx.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This reverts commit 2b034922af.
Will Deacon reports:
This is causing kexec to fail.
The symptoms are that the .init.text section is not loaded as part of the
new kernel image, so when we try to do the SMP/UP fixups we hit a whole sea
of poison left there by the previous kernel.
So my guess is that machine_kexec_prepare *is* too early for preparing the
reboot_code_buffer and, unless anybody has a good reason not to, I'd like to
revert the patch causing these problems.
Reported-by: Will Deacon <will.deacon@arm.com>
These files will fail to compile if we don't clean them up in advance
and have them include the appropriate headers they need.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Many of the core ARM kernel files are not modules, but just
including module.h for exporting symbols. Now these files can
use the lighter footprint export.h for this role.
There are probably lots more, but ARM files of mach-* and plat-*
don't get coverage via a simple yesconfig build. They will have
to be cleaned up and tested via using their respective configs.
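For a file that only exports symbols, the change amounts to swapping
the include (the exported function here is hypothetical):
  /* was: #include <linux/module.h> */
  #include <linux/export.h>

  int arm_example_helper(void)        /* hypothetical symbol */
  {
      return 0;
  }
  EXPORT_SYMBOL_GPL(arm_example_helper);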
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Building these files does not reveal a hidden need for
any of these. Since module.h brings in the whole kitchen
sink, it just needlessly adds 30k+ lines to the cpp burden.
There are probably lots more, but ARM files of mach-* and plat-*
don't get coverage via a simple yesconfig build. They will have
to be cleaned up and tested via using their respective configs.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
To fix things like this:
arch/arm/mach-omap2/usb-tusb6010.c:58: error: implicit declaration of function 'memset'
arch/arm/kernel/leds.c:40: error: implicit declaration of function 'strcspn'
arch/arm/kernel/leds.c:40: warning: incompatible implicit declaration of built-in function 'strcspn'
arch/arm/kernel/leds.c:45: error: implicit declaration of function 'strncmp'
arch/arm/kernel/leds.c:55: error: implicit declaration of function 'strlen'
arch/arm/kernel/leds.c:55: warning: incompatible implicit declaration of built-in function 'strlen'
arch/arm/mach-omap2/clockdomain.c:52: error: implicit declaration of function 'strcmp'
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
It was getting this implicitly via the presence of module.h, but when
we clean that up, we'll get a bunch of errors like this:
arch/arm/kernel/ptrace.c:764: error: 'NT_PRSTATUS' undeclared here (not in a function)
arch/arm/kernel/ptrace.c:765: error: 'ELF_NGREG' undeclared here (not in a function)
arch/arm/kernel/ptrace.c:776: error: 'NT_PRFPREG' undeclared here (not in a function)
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
rtmutex: Add missing rcu_read_unlock() in debug_rt_mutex_print_deadlock()
lockdep: Comment all warnings
lib: atomic64: Change the type of local lock to raw_spinlock_t
locking, lib/atomic64: Annotate atomic64_lock::lock as raw
locking, x86, iommu: Annotate qi->q_lock as raw
locking, x86, iommu: Annotate irq_2_ir_lock as raw
locking, x86, iommu: Annotate iommu->register_lock as raw
locking, dma, ipu: Annotate bank_lock as raw
locking, ARM: Annotate low level hw locks as raw
locking, drivers/dca: Annotate dca_lock as raw
locking, powerpc: Annotate uic->lock as raw
locking, x86: mce: Annotate cmci_discover_lock as raw
locking, ACPI: Annotate c3_lock as raw
locking, oprofile: Annotate oprofilefs lock as raw
locking, video: Annotate vga console lock as raw
locking, latencytop: Annotate latency_lock as raw
locking, timer_stats: Annotate table_lock as raw
locking, rwsem: Annotate inner lock as raw
locking, semaphores: Annotate inner lock as raw
locking, sched: Annotate thread_group_cputimer as raw
...
Fix up conflicts in kernel/posix-cpu-timers.c manually: making
cputimer->cputime a raw lock conflicted with the ABBA fix in commit
bcd5cff721 ("cputimer: Cure lock inversion").
The problem is related to the early enabling of interrupts and the
per cpu timer setup before the cpu is marked online. This doesn't
need to be done in order to call calibrate_delay().
calibrate_delay() monitors jiffies, which are updated from the CPU
which is waiting for the new CPU to set the online bit.
So calibrate_delay() can simply be called on the new CPU from within
the interrupt-disabled region, and the local timer setup can be moved
to after the cpu data has been stored and before interrupts are enabled.
This solves both the cpu_online vs. cpu_active problem and the
affinity setting of the per cpu timers.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the hardcoded link between local timers and PPIs,
and converts the PPI users (the TWD, MCT and MSM timers) to the new
*_percpu_irq interface. It also includes some collateral cleanup
(local_timer_ack() is gone, and the interrupt handler is strictly
private to each driver).
PPIs are now usable for more than just the local timers.
Additional testing by David Brown (msm8250 and msm8660) and
Shawn Guo (imx6q).
Cc: David Brown <davidb@codeaurora.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David Brown <davidb@codeaurora.org>
Tested-by: David Brown <davidb@codeaurora.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
PPI handling is a bit of an odd beast. It uses its own low level
handling code and is hardwired to the local timers (hence lacking
a registration interface).
Instead, switch the low level handling to the normal SPI handling code.
PPIs are handled by the handle_percpu_devid_irq flow.
This also allows the removal of some duplicated code.
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: David Brown <davidb@codeaurora.org>
Cc: Bryan Huntsman <bryanh@codeaurora.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David Brown <davidb@codeaurora.org>
Tested-by: David Brown <davidb@codeaurora.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
When adding LL debug support for a new board, the extra code can push
the PC-relative offset of "hexbuf" from the "adr r2, hexbuf" (+2)
instruction beyond what is representable as a shifted 8-bit value
(which indirectly places a higher alignment requirement on larger
offsets), causing the following error:
arch/arm/kernel/debug.S: Assembler messages:
arch/arm/kernel/debug.S:138: Error: invalid constant (428) after fixup
Fix it by moving "hexbuf" closer so that "adr" can encode the offset.
Signed-off-by: Afzal Mohammed <afzal@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This copy really doesn't need to be done at the very last second before
the kernel crashes.
Signed-off-by: Lei Wen <leiwen@marvell.com>
Acked-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The Cache Type Register L1Ip field identifies I-caches with a PIPT
policy using the encoding 11b.
This patch extends the cache policy parsing to identify PIPT I-caches
correctly and prevent them from being treated as VIPT aliasing in cases
where they are sufficiently large.
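A sketch of the decode; the CACHEID flag name is an assumption:
  unsigned int ctr = read_cpuid_cachetype();
  unsigned int l1ip = (ctr >> 14) & 3;     /* CTR.L1Ip, bits [15:14] */

  if (l1ip == 3)                           /* 0b11: PIPT I-cache */
      cacheid |= CACHEID_PIPT;             /* never aliases, any size */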
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Get rid of the mdesc pointer in the fixup function call. No one uses
the mdesc pointer, it shouldn't be modified anyway, and we can't wrap
it, so let's remove it.
Platform files found by:
$ regexp=$(git grep -h '\.fixup.*=' arch/arm |
sed 's!.*= *\([^,]*\),* *!\1!' | sort -u |
tr '\n' '|' | sed 's,|$,,;s,|,\\|,g')
$ git grep $regexp arch/arm
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM uses its own BUG() handler which makes its output slightly different
from that of other architectures.
One of the problems is that the ARM implementation doesn't report the function
with the BUG() in it, but always reports the PC being in __bug(). The generic
implementation doesn't have this problem.
Currently we get something like:
kernel BUG at fs/proc/breakme.c:35!
Unable to handle kernel NULL pointer dereference at virtual address 00000000
...
PC is at __bug+0x20/0x2c
With this patch it displays:
kernel BUG at fs/proc/breakme.c:35!
Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
...
PC is at write_breakme+0xd0/0x1b4
This implementation uses an undefined instruction to implement BUG, and sets up
a bug table containing the relevant information. Many versions of gcc do not
support %c properly for ARM (inserting a # when they shouldn't) so we work
around this using distasteful macro magic.
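A much-simplified sketch of the mechanism (not the kernel's actual
macros; file-name recording and Thumb support are omitted): plant an
undefined instruction at the BUG site and record its address and line
number in a __bug_table entry, which the undefined-instruction handler
can then look up by faulting PC.
  #include <linux/stringify.h>

  #define SKETCH_BUG()                                            \
  do {                                                            \
      asm volatile("1:  .word  0xe7f001f2\n"                      \
                   "    .pushsection __bug_table, \"aw\"\n"       \
                   "    .word  1b\n"                              \
                   "    .hword " __stringify(__LINE__) "\n"       \
                   "    .popsection");                            \
  } while (0)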
v1: Initial version to replace existing ARM BUG() implementation with something
more similar to other architectures.
v2: Add Thumb support, remove backtrace whitespace output changes. Change to
use macros instead of requiring the asm %d flag to work (thanks to
Dave Martin <dave.martin@linaro.org>)
v3: Remove old BUG() implementation in favor of this one.
Remove the Backtrace: message (will submit this separately).
Use ARM_EXIT_KEEP() so that some architectures can dump exit text at link time
thanks to Stephen Boyd <sboyd@codeaurora.org> (although since we always
define GENERIC_BUG this might be academic.)
Rebase to linux-2.6.git master.
v4: Allow BUGS in modules (these were not reported correctly in v3)
(thanks to Stephen Boyd <sboyd@codeaurora.org> for suggesting that.)
Remove __bug() as this is no longer needed.
v5: Add %progbits as the section flags.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, show_regs calls __backtrace which does
nothing if CONFIG_FRAME_POINTER is not set. Switch to
dump_stack which handles both CONFIG_FRAME_POINTER and
CONFIG_ARM_UNWIND correctly.
__backtrace is now superseded by dump_stack in general
and show_regs was the last caller so remove __backtrace
as well.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When v6 and >=v7 boards are supported in the same kernel, the
__und_usr code currently makes a build-time assumption that Thumb-2
instructions occurring in userspace don't need to be supported.
Strictly speaking this is incorrect.
This patch fixes the above case by doing a run-time check on the
CPU architecture in these cases. This only affects kernels which
support v6 and >=v7 CPUs together: plain v6 and plain v7 kernels
are unaffected.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When testing whether a Thumb-2 instruction is 32 bits long or not,
the masking done in order to test bits 11-15 of the first
instruction halfword won't affect the result of the comparison, so
remove it.
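The underlying encoding fact, expressed as an illustrative C helper
(not code from the patch, which touches the entry assembly):
  /* 32-bit Thumb-2 encodings have bits [15:11] of the first halfword
   * equal to 0b11101, 0b11110 or 0b11111, i.e. the halfword is
   * >= 0xe800, so the comparison works without masking bits [10:0]. */
  static inline bool thumb_insn_is_32bit(u16 first_halfword)
  {
      return first_halfword >= 0xe800;
  }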
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The CPU architecture really should not be changing at runtime, so
make it a global variable instead of a function.
The cpu_architecture() function declared in <asm/system.h> remains
the correct way to read this variable from C code.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We save the l2x0 registers at first initialization so that platform
code can use them to restore the l2x0 state after wakeup.
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The definition of __exception_irq_entry for
CONFIG_FUNCTION_GRAPH_TRACER=y needs linux/ftrace.h, but this creates a
circular dependency with its current home in asm/system.h. Create
asm/exception.h and update all current users.
v4: - rebase to rmk/for-next
v3: - remove redundant includes of linux/ftrace.h
v2: - document the usage restrictions of __exception*
Cc: Zoltan Devai <zdevai@gmail.com>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In order to be able to handle localtimer directly from C code instead of
assembly code, introduce handle_local_timer(), which is modeled after
handle_IRQ().
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In order to be able to handle IPI directly from C code instead of
assembly code, introduce handle_IPI(), which is modeled after handle_IRQ().
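A minimal sketch of the shape such an entry point takes, modeled on
handle_IRQ(); the IPI dispatch function named here is hypothetical:
  asmlinkage void __exception_irq_entry handle_IPI(int ipinr,
                                                   struct pt_regs *regs)
  {
      struct pt_regs *old_regs = set_irq_regs(regs);

      irq_enter();
      dispatch_ipi(ipinr);        /* hypothetical: act on the IPI number */
      irq_exit();

      set_irq_regs(old_regs);
  }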
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When Cortex-A9 MPCore resumes from Dormant or Shutdown modes,
SCU needs to be re-enabled. This patch removes __init annotation
from function scu_enable(), so that platform resume procedure can
call it to re-enable SCU.
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
To allow booting Linux on a CPU with physical ID != 0, we need to
provide a mapping from the logical CPU number to the physical CPU
number.
This patch adds such a mapping and populates it during boot.
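A sketch of the mapping, with the array and accessor names assumed:
  extern int __cpu_logical_map[];
  #define cpu_logical_map(cpu)    __cpu_logical_map[cpu]

  /* Populated during boot: logical CPU 0 is whichever core we booted
   * on, which need not be physical CPU 0. Platform SMP code then uses
   * cpu_logical_map(cpu) to wake the correct physical core. */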
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>