Commit Graph

33 Commits

Author SHA1 Message Date
Scott Wood
beb2dc0a7a powerpc: Convert some mftb/mftbu into mfspr
Some CPUs (such as e500v1/v2) don't implement mftb and will take a
trap.  mfspr should work on everything that has a timebase, and is the
preferred instruction according to ISA v2.06.

Currently we get away with mftb on 85xx because the assembler converts
it to mfspr due to -Wa,-me500.  However, that flag has other effects
that are undesirable for certain targets (e.g.  lwsync is converted to
sync), and is hostile to multiplatform kernels.  Thus we would like to
stop setting it for all e500-family builds.

mftb/mftbu instances which are in 85xx code or common code are
converted.  Instances which will never run on 85xx are left alone.
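
To illustrate the conversion, here is a minimal sketch (not the kernel's
actual timebase macro) of reading the lower timebase via mfspr; SPR 268 is
the read-only TBL register:

/* Works on any CPU with a timebase, including e500v1/v2 where a raw
 * mftb instruction would trap. */
static inline unsigned long read_timebase_lower(void)
{
        unsigned long tbl;

        asm volatile("mfspr %0, 268" : "=r" (tbl));
        return tbl;
}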

Signed-off-by: Scott Wood <scottwood@freescale.com>
2013-08-20 19:33:12 -05:00
Scott Wood
d52459ca30 powerpc/fsl-booke: Work around erratum A-006958
Erratum A-006958 says that 64-bit mftb is not atomic -- it's subject
to a similar race condition as doing mftbu/mftbl on 32-bit.  The lower
half of timebase is updated before the upper half; thus, we can share
the workaround for a similar bug on Cell.  This workaround involves
looping if the lower half of timebase is zero, thus avoiding the need
for a scratch register (other than CR0).  This workaround must be
avoided when the timebase is frozen, such as during the timebase sync
code.

This deals with kernel and vdso accesses, but other userspace accesses
will of course need to be fixed elsewhere.
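
As a rough sketch of the shared workaround (illustrative code for a 64-bit
kernel, not the actual implementation): if the low half of the value just
read is zero, the two halves may have been sampled across a carry, so read
again:

static inline unsigned long long read_timebase_a006958(void)
{
        unsigned long long tb;

        do {
                asm volatile("mfspr %0, 268" : "=r" (tb));  /* 64-bit timebase */
        } while (!(tb & 0xffffffffULL));

        return tb;
}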

Signed-off-by: Scott Wood <scottwood@freescale.com>
2013-08-20 15:45:49 -05:00
Andy Fleming
39fd40274d powerpc: Convert platforms to smp_generic_cpu_bootable
T4, Cell, powernv, and pseries had the same implementation, so switch
them to use a generic version.  A2 apparently had its own version at one
point but has since removed it, so we remove the now-stale declaration, too.
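
The shared helper looks roughly like this (a hedged reconstruction; see
arch/powerpc/kernel/smp.c for the authoritative version) -- secondary SMT
threads are held back during boot when SMT was disabled on the command line:

int smp_generic_cpu_bootable(unsigned int nr)
{
        /* Inhibit secondary-thread startup during boot if the user
         * requested it. */
        if (system_state == SYSTEM_BOOTING && cpu_has_feature(CPU_FTR_SMT) &&
            !smt_enabled_at_boot && cpu_thread_in_core(nr) != 0)
                return 0;

        return 1;
}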

Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-14 14:56:57 +10:00
Paul Gortmaker
061d19f279 powerpc: Delete __cpuinit usage from all users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications.  For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out.  Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.

This removes all the powerpc uses of the __cpuinit macros.  There
are no __CPUINIT users in assembly files in powerpc.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Josh Boyer <jwboyer@gmail.com>
Cc: Matt Porter <mporter@kernel.crashing.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-07-01 11:10:36 +10:00
Chen-Hui Zhao
ddb487dca3 powerpc/85xx: fix a bug with the parameter of mpic_reset_core()
mpic_reset_core() needs a logical cpu number instead of a physical one.

Signed-off-by: Zhao Chenhui <chenhui.zhao@freescale.com>
Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2013-04-03 11:43:02 -05:00
York Sun
bc15236fbe powerpc/mpc85xx: Change spin table to cached memory
ePAPR v1.1 requires the spin table to be in cached memory. So we need
to change the call argument of ioremap to enable cache and coherence.
We also flush the cache after writing to the spin table to keep it
compatible with the previous cache-inhibited spin table.  Flushing before
and after accessing the spin table is recommended by ePAPR.
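
A minimal sketch of the idea (argument and field names are illustrative,
and the flags are not the literal patch): map the table cacheable and
coherent, and flush the affected lines around the write so a core still
reading it cache-inhibited sees the update:

static void release_secondary(phys_addr_t table_phys, u32 entry_addr)
{
        struct epapr_spin_table __iomem *tbl;

        tbl = ioremap_prot(table_phys, sizeof(*tbl), _PAGE_COHERENT);

        flush_dcache_range((unsigned long)tbl,
                           (unsigned long)tbl + sizeof(*tbl));
        out_be32(&tbl->addr_l, entry_addr);     /* release the secondary */
        flush_dcache_range((unsigned long)tbl,
                           (unsigned long)tbl + sizeof(*tbl));

        iounmap(tbl);
}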

Signed-off-by: York Sun <yorksun@freescale.com>
Acked-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2012-11-25 07:00:31 -06:00
Zhao Chenhui
d0832a7507 powerpc/85xx: add HOTPLUG_CPU support
Add support to disable and re-enable individual cores at runtime on
MPC85xx/QorIQ SMP machines. Currently e500v1/e500v2 cores are supported.

MPC85xx machines use an ePAPR spin table in the boot page for CPU kick-off.
This patch uses the boot page from the bootloader to boot a core at runtime.
It supports 32-bit and 36-bit physical addresses.

Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Jin Qing <b24347@freescale.com>
Signed-off-by: Zhao Chenhui <chenhui.zhao@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2012-09-12 14:57:08 -05:00
Zhao Chenhui
bf34526374 powerpc/85xx: implement hardware timebase sync
Do the hardware timebase sync: first stop all timebases, then transfer the
timebase value of the boot core to the other core, and finally start all
timebases.

This only applies to dual-core chips, such as MPC8572, P2020, etc.
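
A hedged sketch of the boot-core side of that protocol (the freeze/unfreeze
helpers stand in for the FSL guts register writes, and the handshake
variables are illustrative):

static volatile u64 tb_value;
static volatile int tb_valid, tb_taken;

static void boot_core_timebase_sync(void)
{
        freeze_all_timebases();         /* hypothetical guts helper */
        tb_value = get_tb();            /* stable while frozen */
        smp_wmb();
        tb_valid = 1;
        while (!tb_taken)               /* other core does set_tb() and acks */
                barrier();
        unfreeze_all_timebases();       /* hypothetical guts helper */
}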

Signed-off-by: Zhao Chenhui <chenhui.zhao@freescale.com>
Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2012-09-12 14:57:08 -05:00
Zhao Chenhui
15f34eb123 powerpc/85xx: Replace epapr spin table macros/defines with a struct
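
For reference, the ePAPR spin-table layout the new struct models (field
order per ePAPR v1.1; see the patch for the kernel's exact definition):

struct epapr_spin_table {
        u32     addr_h;
        u32     addr_l;
        u32     r3_h;
        u32     r3_l;
        u32     reserved;
        u32     pir;
};
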
Signed-off-by: Zhao Chenhui <chenhui.zhao@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2012-09-12 14:57:08 -05:00
Kyle Moffett
582d3e0994 powerpc/85xx: Move mpc85xx_smp_init() decl to a new "smp.h"
This removes a bunch of "extern" declarations and CONFIG_SMP ifdefs.

Signed-off-by: Kyle Moffett <Kyle.D.Moffett@boeing.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-12-07 13:43:06 +11:00
Matthew McClintock
43a327b79c powerpc/85xx: Make kexec iterate over online cpus
This is not strictly required, because this iterates over logical
cpus and they are not (currently) discontiguous.  But it's cleaner
code and makes it more obvious what is going on.
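
A hedged sketch of the resulting loop shape (the wrapper name is
illustrative):

static void kexec_signal_secondaries(void)
{
        int i;

        for_each_online_cpu(i) {
                if (i == smp_processor_id())
                        continue;
                /* ... bring CPU i down for kexec ... */
        }
}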

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2011-11-03 13:12:29 -05:00
Kumar Gala
4511680613 powerpc/85xx: Setup secondary cores PIR with hard SMP id
Normally logical and hard cpu IDs are the same, however in some cases, such
as on the P3060, they may differ.  While the logical IDs are 0..5, the hard
IDs go 0,1,4..7.  This can cause issues for places where we utilize PIR to
index into arrays, like in the debug exception handlers for finding the
exception stack.

Moving to setting up PIR with hard_smp_processor_id() fixes the issue.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2011-10-13 10:19:22 -05:00
Matthew McClintock
de423ff5b0 powerpc/85xx: Fix support for enabling doorbells for IPIs
Commit 765342526246c97600e5344c0949824d94bb51c3 made some small changes to
IPI handling: message_pass in smp_ops was initialized to NULL for other
platforms but not for 85xx, which causes us to always use the MPIC for
IPIs even if we support doorbells in hardware.

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2011-10-11 23:26:11 -05:00
Laurentiu TUDOR
2647aa19fb powerpc/85xx: Remove stale BUG_ON in mpc85xx_smp_init
Under the FSL Hypervisor we triggered a BUG_ON in mpc85xx_smp_init that
expected smp_ops.message_pass to be explicitly set.  However, recent
changes allow smp_ops.message_pass to be NULL and handled by default
code.  Thus the BUG_ON isn't relevant anymore.

Signed-off-by: Laurentiu TUDOR <Laurentiu.Tudor@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2011-07-08 00:21:33 -05:00
Scott Wood
dc2c9c52b6 powerpc/85xx: Set up doorbells even with no mpic
In cases such as when the platform is used under a hypervisor we will NOT
have an MPIC controller but still want doorbells set up.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2011-06-22 21:44:54 -05:00
Paul Mackerras
9ca980dce5 powerpc: Avoid extra indirect function call in sending IPIs
On many platforms (including pSeries), smp_ops->message_pass is always
smp_muxed_ipi_message_pass.  This changes arch/powerpc/kernel/smp.c so
that if smp_ops->message_pass is NULL, it calls smp_muxed_ipi_message_pass
directly.

This means that a platform doesn't need to set both .message_pass and
.cause_ipi, only one of them.  It is a slight performance improvement
in that it gets rid of an indirect function call at the expense of a
predictable conditional branch.
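
In other words, the dispatch becomes roughly (a hedged sketch, not the
literal patch):

static inline void do_message_pass(int cpu, int msg)
{
        if (smp_ops->message_pass)
                smp_ops->message_pass(cpu, msg);
        else
                smp_muxed_ipi_message_pass(cpu, msg);
}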

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-06-20 11:21:32 +10:00
Milton Miller
23d72bfd8f powerpc: Consolidate ipi message mux and demux
Consolidate the mux and demux of ipi messages into smp.c and call
a new smp_ops callback to actually trigger the ipi.

The powerpc architecture code is optimised for having 4 distinct
ipi triggers, which are mapped to 4 distinct messages (ipi many, ipi
single, scheduler ipi, and enter debugger).  However, several interrupt
controllers only provide a single software triggered interrupt that
can be delivered to each cpu.  To resolve this limitation, each smp_ops
implementation created a per-cpu variable that is manipulated with atomic
bitops.  Since these lines will be contended they are optimally marked as
shared_aligned and take a full cache line for each cpu.  Distro kernels
may have 2 or 3 of these in their config, each taking per-cpu space
even though at most one will be in use.

This consolidation removes smp_message_recv and replaces the single call
actions cases with direct calls from the common message recognition loop.
The complicated debugger ipi case with its muxed crash handling code is
moved to debug_ipi_action which is now called from the demux code (instead
of the multi-message action calling smp_message_recv).

I put a call to reschedule_action to increase the likelihood of correctly
merging the anticipated scheduler_ipi() hook coming from the scheduler
tree; that single required call can be inlined later.

The actual message decode is a copy of the old pseries xics code with its
memory barriers and cache line spacing, augmented with a per-cpu unsigned
long based on the book-e doorbell code.  The optional data is set via a
callback from the implementation and is passed to the new cause-ipi hook
along with the logical cpu number.  While currently only the doorbell
implementation uses this data it should be almost zero cost to retrieve and
pass it -- it adds a single register load for the argument from the same
cache line to which we just completed a store and the register is dead
on return from the call.  I extended the data element from unsigned int
to unsigned long in case some other code wanted to associate a pointer.

The doorbell check_self is replaced by a call to smp_muxed_ipi_resend,
conditioned on the CPU_DBELL feature.  The ifdef guard could be relaxed
to CONFIG_SMP but I left it with BOOKE for now.

Also, the doorbell interrupt vector for book-e was not calling irq_enter
and irq_exit, which throws off cpu accounting and causes code to not
realize it is running in interrupt context.  Add the missing calls.
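
Putting the pieces above together, the consolidated mux/demux looks roughly
like this (a hedged sketch using atomic bitops, not the literal patch):

struct cpu_messages {
        unsigned long messages;                 /* pending messages, one bit each */
        unsigned long data;                     /* optional data for cause_ipi */
};

static DEFINE_PER_CPU_SHARED_ALIGNED(struct cpu_messages, ipi_message);

void smp_muxed_ipi_message_pass(int cpu, int msg)
{
        struct cpu_messages *info = &per_cpu(ipi_message, cpu);

        set_bit(msg, &info->messages);          /* mux: mark the message */
        smp_ops->cause_ipi(cpu, info->data);    /* one hw interrupt per target */
}

irqreturn_t smp_ipi_demux(void)
{
        struct cpu_messages *info = &__get_cpu_var(ipi_message);
        unsigned long all;

        do {
                all = xchg(&info->messages, 0); /* demux: grab and clear */
                if (test_bit(PPC_MSG_RESCHEDULE, &all))
                        reschedule_action(0, NULL);
                if (test_bit(PPC_MSG_CALL_FUNCTION, &all))
                        generic_smp_call_function_interrupt();
                /* ... single-call and debugger messages handled similarly ... */
        } while (info->messages);

        return IRQ_HANDLED;
}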

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-05-19 15:31:03 +10:00
Michael Ellerman
de30097476 powerpc/smp: smp_ops->kick_cpu() should be able to fail
When we start a cpu we use smp_ops->kick_cpu(), which currently
returns void; it should be able to fail.  Convert it to return
int, and update all uses.

Convert all the current error cases to return -ENOENT, which is
what would eventually be returned by __cpu_up() currently when
it doesn't detect the cpu as coming up in time.
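
A hedged sketch of how a caller can now react to the result (the wrapper is
illustrative; the real change is in __cpu_up() and the kick_cpu
implementations):

static int try_kick_cpu(int cpu)
{
        int rc;

        rc = smp_ops->kick_cpu(cpu);    /* now returns int, e.g. -ENOENT */
        if (rc)
                pr_debug("smp: failed starting cpu %d (rc %d)\n", cpu, rc);

        return rc;
}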

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-04-20 17:01:18 +10:00
Kumar Gala
decbb280bb powerpc/85xx: Fix writing to spin table 'cpu-release-addr' on ppc64e
If the spin table is located in the linear mapping (which can happen if
we have 4G or more of memory) we need to access the spin table via a
cacheable coherent mapping like we do on ppc32 (and do explicit cache
flush).

See the following commit for the ppc32 version of this issue:

commit d1d47ec6e6
Author: Peter Tyser <ptyser@xes-inc.com>
Date:   Fri Dec 18 16:50:37 2009 -0600

    powerpc/85xx: Fix SMP when "cpu-release-addr" is in lowmem

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2011-03-15 09:29:56 -05:00
Matthew McClintock
677de42558 powerpc/85xx: flush dcache before resetting cores
When we do an mpic_reset_core we need to make sure the dcache is flushed.

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-10-14 00:52:52 -05:00
Matthew McClintock
5d69296163 powerpc/85xx: Minor fixups for kexec on 85xx
Make kexec_down_cpus atomic since it will be incremented by all cores as
they are coming down.

Remove duplicate calls to mpc85xx_smp_kexec_down; now it's called only once
by both the crash and the normal kexec path.

Increase the timeout to wait for other cores to shutdown.
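
The first of these changes, in sketch form (the hook signature follows the
generic kexec_cpu_down prototype; the body is illustrative):

static atomic_t kexec_down_cpus = ATOMIC_INIT(0);

static void mpc85xx_smp_kexec_cpu_down(int crash_shutdown, int secondary)
{
        local_irq_disable();

        if (secondary) {
                atomic_inc(&kexec_down_cpus);   /* safe when every core increments */
                while (1)                       /* spin until the new kernel takes over */
                        ;
        }
}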

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-10-14 00:52:50 -05:00
Matthew McClintock
edb8580010 powerpc/85xx: Remove call to mpic_teardown_this_cpu in kexec
We no longer need to call this explicitly as a generic version is called
by default.

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-10-14 00:52:48 -05:00
Kumar Gala
5b8544c38e powerpc/ppc64e: Fix link problem when building ppc64e_defconfig
arch/powerpc/platforms/built-in.o:(.toc1+0x18): undefined reference to `__early_start'

This is due to 85xx/smp.c not handling the 64-bit side properly.  We
need to set the entry point for secondary cores on ppc64e to
generic_secondary_smp_init instead of __early_start as we do on ppc32.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-10-08 10:55:29 -05:00
Matthew McClintock
f933a41e41 powerpc/85xx: kexec for SMP 85xx BookE systems
Adds support for kexec on 85xx machines for the BookE platform,
including support for SMP machines.

Based off work from Maxim Uvarov <muvarov@mvista.com>
Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-08-02 14:36:28 -05:00
Benjamin Herrenschmidt
b9f1cd71db powerpc/book3e: More doorbell cleanups. Sample the PIR register
The doorbells use the content of the PIR register to match messages
from other CPUs. This may or may not be the same as our linux CPU
number, so using that as the "target" is not right.

Instead, we sample the PIR register at boot on every processor
and use that value subsequently when sending IPIs.

We also use a per-cpu message mask rather than a global array which
should limit cache line contention.

Note: We could use the CPU number in the device-tree instead of
the PIR register, as they are supposed to be equivalent. This
might prove useful if doorbells are to be used to kick CPUs out
of FW at boot time, thus before we can sample the PIR. This is
however not the case now and using the PIR just works.
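
A hedged sketch of the two pieces (variable and helper names are
illustrative): sample PIR once on each CPU at boot, then use the cached
value as the msgsnd tag when sending an IPI:

static DEFINE_PER_CPU(unsigned long, dbell_tag);

void doorbell_setup_this_cpu(void)
{
        __get_cpu_var(dbell_tag) = mfspr(SPRN_PIR);
}

void doorbell_cause_ipi(int cpu)
{
        /* msgsnd matches on the destination's PIR value, not its
         * Linux cpu number */
        ppc_msgsnd(PPC_DBELL, 0, per_cpu(dbell_tag, cpu));
}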

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-07-09 15:29:53 +10:00
Peter Tyser
d1d47ec6e6 powerpc/85xx: Fix SMP when "cpu-release-addr" is in lowmem
Recent U-Boot commit 5ccd29c3679b3669b0bde5c501c1aa0f325a7acb caused
the "cpu-release-addr" device tree property to contain the physical RAM
location that secondary cores were spinning at.  Previously, the
"cpu-release-addr" property contained a value referencing the boot page
translation address range of 0xfffffxxx, which then indirectly accessed
RAM.

The "cpu-release-addr" is currently ioremapped and the secondary cores
kicked.  However, due to the recent change in "cpu-release-addr", it
sometimes points to a memory location in low memory that cannot be
ioremapped.  For example on a P2020-based board with 512MB of RAM the
following error occurs on bootup:

  <...>
  mpic: requesting IPIs ...
  __ioremap(): phys addr 0x1ffff000 is RAM lr c05df9a0
  Unable to handle kernel paging request for data at address 0x00000014
  Faulting instruction address: 0xc05df9b0
  Oops: Kernel access of bad area, sig: 11 [#1]
  SMP NR_CPUS=2 P2020 RDB
  Modules linked in:
  <... eventual kernel panic>

Adding logic to conditionally ioremap or access memory directly resolves
the issue.
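
A hedged sketch of that logic (names are illustrative; the lowmem branch
assumes the address is covered by the kernel's linear mapping):

static void __iomem *map_release_addr(phys_addr_t rel_addr, int *ioremappable)
{
        /* Lowmem is already mapped (and ioremap() refuses RAM), so only
         * addresses above the linear mapping get an ioremap(). */
        *ioremappable = rel_addr > virt_to_phys(high_memory);

        if (*ioremappable)
                return ioremap(rel_addr, PAGE_SIZE);

        return (void __iomem *)phys_to_virt(rel_addr);
}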

Signed-off-by: Peter Tyser <ptyser@xes-inc.com>
Signed-off-by: Nate Case <ncase@xes-inc.com>
Reported-by: Dipen Dudhat <B09055@freescale.com>
Tested-by: Dipen Dudhat <B09055@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-02-13 14:23:24 -06:00
Kumar Gala
757cbd46d1 powerpc/85xx: Fix SMP compile error and allow NULL for smp_ops
The following commit introduced a compile error since it removed
the implementation of smp_85xx_basic_setup:

commit 77c0a700c1
Author: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Date:   Fri Aug 28 14:25:04 2009 +1000

    powerpc: Properly start decrementer on BookE secondary CPUs

Make it so that smp_ops probe() and setup_cpu() can be set to NULL.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-11 11:27:57 +10:00
Benjamin Herrenschmidt
77c0a700c1 powerpc: Properly start decrementer on BookE secondary CPUs
This moves the code to start the decrementer on 40x and BookE into
a separate function which is now called from time_init() and
secondary_time_init(), before the respective clock sources are
registered. We also remove the 85xx specific code for doing it
from the platform code.
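
A hedged sketch of what such a helper does on 40x/BookE (register writes
reconstructed from the architecture, not copied from the patch):

static void start_cpu_decrementer(void)
{
        /* Clear any pending timer-status bits, then enable the
         * decrementer interrupt. */
        mtspr(SPRN_TSR, TSR_ENW | TSR_WIS | TSR_DIS | TSR_FIS);
        mtspr(SPRN_TCR, TCR_DIE);
}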

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-28 14:25:04 +10:00
Benjamin Herrenschmidt
cf54dc7cd4 powerpc: Move definitions of secondary CPU spinloop to header file
Those definitions are currently declared extern in the .c file where
they are used, move them to a header file instead.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-20 10:12:44 +10:00
Kumar Gala
cb1ffb6204 powerpc/85xx: Fix issue found by lockdep trace in smp_85xx_kick_cpu
lockdep trace found the following:

------------[ cut here ]------------
Badness at c007baf0 [verbose debug info unavailable]
NIP: c007baf0 LR: c007bad8 CTR: 00000000
REGS: ef855e00 TRAP: 0700   Tainted: G        W
(2.6.30-06736-g12a31df-dirty)
MSR: 00021000 <ME,CE>  CR: 24044022  XER: 20000000
TASK = ef858000[1] 'swapper' THREAD: ef854000 CPU: 0
GPR00: 00000000 ef855eb0 ef858000 00000001 000000d0 f1000000 ffbc8000 ffffffff
GPR08: 000000d0 c0760000 c0710000 00000007 2fffffff 1004a388 7ffd9400 00000000
GPR16: 00000000 7ffcd100 7ffcd100 7ffcd100 c059cd78 c075c498 c057da7c ffffffff
GPR24: ffbc8000 f1000000 00000001 c00bf8b0 c07595d4 000000d0 00021000 000000d0
NIP [c007baf0] lockdep_trace_alloc+0xc0/0xf0
LR [c007bad8] lockdep_trace_alloc+0xa8/0xf0
Call Trace:
[ef855eb0] [c007ba60] lockdep_trace_alloc+0x30/0xf0 (unreliable)
[ef855ec0] [c00cb3ac] kmem_cache_alloc+0x2c/0xf0
[ef855ee0] [c00bf8b0] __get_vm_area_node+0x80/0x1c0
[ef855f10] [c0017580] __ioremap_caller+0x1d0/0x1e0
[ef855f40] [c057da7c] smp_85xx_kick_cpu+0x64/0x124
[ef855f60] [c0599180] __cpu_up+0xd0/0x1a4
[ef855f80] [c05997c4] cpu_up+0x14c/0x1e0
[ef855fc0] [c05732a0] kernel_init+0x100/0x1c4
[ef855ff0] [c0011524] kernel_thread+0x4c/0x68
Instruction dump:
8009c174 2f800000 409e0048 73c08000 40820040 4818980d 2f830000 419effa0
3d20c076 8009c388 2f800000 409eff90 <0fe00000> 4bffff88 60000000 60000000

We were calling ioremap() with interrupts disabled, i.e. inside the
local_irq_save(flags)/local_irq_restore(flags) region.  A simple
reorder fixes the problem.
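
A hedged sketch of the reorder (names are illustrative): do the mapping,
which may sleep, before interrupts are disabled:

static void kick_cpu_reordered(phys_addr_t boot_page_phys)
{
        void __iomem *bptr_vaddr;
        unsigned long flags;

        /* ioremap() can allocate and must not run with interrupts disabled */
        bptr_vaddr = ioremap(boot_page_phys, PAGE_SIZE);

        local_irq_save(flags);
        /* ... write the release address into the boot page, kick the core ... */
        local_irq_restore(flags);

        iounmap(bptr_vaddr);
}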

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-06-23 08:09:57 -05:00
Kumar Gala
563fdd4a0a powerpc/85xx: Update smp support to handle doorbells and non-mpic init
Use the device tree to determine if we actually have an MPIC, and use a
CPU feature to decide if we should use doorbells for IPIs.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-03-11 06:44:56 -05:00
Julia Lawall
870029a682 powerpc/85xx: Add local_irq_restore in error handling code
There is a call to local_irq_restore in the normal exit case, so it would
seem that there should be one on an error return as well.

The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@@
expression l;
expression E,E1,E2;
@@

local_irq_save(l);
... when != local_irq_restore(l)
    when != spin_unlock_irqrestore(E,l)
    when any
    when strict
(
if (...) { ... when != local_irq_restore(l)
               when != spin_unlock_irqrestore(E1,l)
+   local_irq_restore(l);
    return ...;
}
|
if (...)
+   {local_irq_restore(l);
    return ...;
+   }
|
spin_unlock_irqrestore(E2,l);
|
local_irq_restore(l);
)
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-30 11:35:30 -06:00
Kumar Gala
d5b26db2cf powerpc/85xx: Add support for SMP initialization
Added an 85xx-specific smp_ops structure.  We use ePAPR style boot release
and the MPIC for IPIs at this point.

Additionally added routines for secondary cpu entry and initialization.
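
The shape of the added ops (a hedged sketch; the hook implementations are
in the patch):

struct smp_ops_t smp_85xx_ops = {
        .message_pass   = smp_mpic_message_pass,  /* MPIC-based IPIs */
        .probe          = smp_mpic_probe,
        .kick_cpu       = smp_85xx_kick_cpu,      /* ePAPR spin-table release */
        .setup_cpu      = smp_85xx_basic_setup,
};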

Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-03 08:19:20 -06:00