Commit Graph

101 Commits

David S. Miller
b6b7922fbd sparc64: Don't MAGIC_SYSRQ ifdef smp_fetch_global_regs and support code.
Based upon a report and initial patch by Friedrich Oslage.

The intention is to provide this facility for
__trigger_all_cpu_backtrace even if MAGIC_SYSRQ is not set.

The only part that should have MAGIC_SYSRQ ifdef protection is the
sparc_globalreg_op sysrq registration and its immediately related code.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-09 16:25:26 -07:00
David S. Miller
ae583885bf sparc64: Remove all cpumask_t local variables in xcall dispatch.
All of the xcall delivery implementations are cpumask agnostic, so
we can pass around pointers to const cpumask_t objects everywhere.

The sad remaining case is the argument to arch_send_call_function_ipi().

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 16:56:15 -07:00
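A minimal sketch of the pattern this commit moves toward (the
xcall_deliver() signature follows the surrounding commits; treat it as
illustrative): passing a pointer to a const cpumask_t keeps
sizeof(cpumask_t) bytes per call frame off the stack on large-NR_CPUS
builds.

    #include <linux/types.h>
    #include <linux/cpumask.h>

    /* Before: the whole mask is copied into the callee's frame. */
    void xcall_deliver_byval(u64 data0, u64 data1, u64 data2,
                             cpumask_t mask);

    /* After: only a pointer travels; nothing is copied. */
    void xcall_deliver(u64 data0, u64 data1, u64 data2,
                       const cpumask_t *mask);
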
David S. Miller
ed4d9c66eb sparc64: Kill error_mask from hypervisor_xcall_deliver().
It can eat up a lot of stack space when NR_CPUS is large.
We retain some of its functionality by reporting at least one
of the cpus seen in error state.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 16:47:57 -07:00
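To make the stack cost concrete (sizes assumed for illustration): a
cpumask_t is NR_CPUS bits, so with NR_CPUS=4096 a single local mask
costs 512 bytes of stack. Reporting one representative failing cpu
needs only an int:

    /* Instead of: cpumask_t error_mask;  -- NR_CPUS/8 bytes of stack */
    int first_error_cpu = -1;

    /* ...inside the delivery loop, when a target cpu reports an error: */
    if (first_error_cpu < 0)
            first_error_cpu = cpu;

    /* ...after the loop, report at least one offender: */
    if (first_error_cpu >= 0)
            printk(KERN_CRIT "CPU[%d]: mondo error, at least cpu %d\n",
                   smp_processor_id(), first_error_cpu);
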
David S. Miller
90f7ae8a55 sparc64: Build cpu list and mondo block at top-level xcall_deliver().
Then modify all of the xcall dispatch implementations to accept and
use this information.

Now none of the xcall dispatch implementations need to be mindful
of details such as "is the current cpu in the list?" and "is the cpu online?"

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 16:42:58 -07:00
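A hedged sketch of the resulting shape (helper and storage names
hypothetical): the generic entry point builds the cpu list once,
filters out offline cpus and the sender, and hands the ready-made list
to the platform implementation.

    static void xcall_deliver(u64 data0, u64 data1, u64 data2,
                              const cpumask_t *mask)
    {
            /* sketch: the real code keeps this in per-cpu mondo
             * block memory rather than a static array */
            static u16 cpu_list[NR_CPUS];
            int cnt = 0, cpu, this_cpu = smp_processor_id();

            for_each_cpu_mask_nr(cpu, *mask) {
                    if (!cpu_online(cpu) || cpu == this_cpu)
                            continue;   /* implementations never see these */
                    cpu_list[cnt++] = cpu;
            }

            xcall_deliver_impl(data0, data1, data2, cpu_list, cnt);
    }
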
David S. Miller
c02a5119e8 sparc64: Disable local interrupts around xcall_deliver_impl() invocation.
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 16:18:40 -07:00
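The change is small enough to sketch directly (function names as used
in the neighbouring commits; at this point the implementation still
takes the mask):

    static void xcall_deliver(u64 data0, u64 data1, u64 data2,
                              const cpumask_t *mask)
    {
            unsigned long flags;

            /* The dispatch path touches per-cpu mondo state, so keep
             * local interrupts off across the whole delivery. */
            local_irq_save(flags);
            xcall_deliver_impl(data0, data1, data2, mask);
            local_irq_restore(flags);
    }
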
David S. Miller
deb16999e4 sparc64: Make all xcall_deliver's go through common helper function.
This just facilitates the next changeset where we'll be building
the cpu list and mondo block in this helper function.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 16:16:20 -07:00
David S. Miller
91a4231cc2 sparc64: Make smp_cross_call_masked() take a cpumask_t pointer.
Ideally this could be simplified further such that we could pass
the pointer down directly into the xcall_deliver() implementation.

But if we do that we need to do the "cpu_online(cpu)" and
"cpu != self" checks down in those functions.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 13:51:40 -07:00
David S. Miller
24445a4ac9 sparc64: Directly call xcall_deliver() in smp_start_sync_tick_client.
We know the cpu is online and not the current cpu here.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 13:51:40 -07:00
David S. Miller
1992663053 sparc64: Call xcall_deliver() directly in some cases.
For these cases the callers make sure:

1) The cpus indicated are online.

2) The current cpu is not in the list of indicated cpus.

Therefore we can pass a pointer to the mask directly.

One of the motivations in this transformation is to make use of
"&cpumask_of_cpu(cpu)" which evaluates to a pointer to constant
data in the kernel and thus takes up no stack space.

Hopefully someone in the future will change the interface of
arch_send_call_function_ipi() such that it passes a const cpumask_t
pointer, so that this will optimize even further.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 13:51:39 -07:00
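The stack-saving trick the commit relies on, sketched:
cpumask_of_cpu(cpu) evaluates to constant kernel data, so taking its
address costs nothing, whereas materializing a local mask costs
NR_CPUS/8 bytes.

    /* Costs stack: the single-bit mask is built in a local variable. */
    cpumask_t mask = cpumask_of_cpu(cpu);
    xcall_deliver(data0, data1, data2, &mask);

    /* Free: points straight at constant data in the kernel image. */
    xcall_deliver(data0, data1, data2, &cpumask_of_cpu(cpu));
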
David S. Miller
cd5bc89deb sparc64: Use cpumask_t pointers and for_each_cpu_mask_nr() in xcall_deliver.
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 13:51:38 -07:00
David S. Miller
622824dbb5 sparc64: Use xcall_deliver() consistently.
Some spots still vectored to the appropriate
*_xcall_deliver() function manually.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 13:51:37 -07:00
David S. Miller
5e0797e5b8 sparc64: Use function pointer for cross-call sending.
Initialize it using the smp_setup_processor_id() hook.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-04 13:51:37 -07:00
David S. Miller
9c636e30a3 sparc64: Kill smp_report_regs().
All the call sites are #if 0'd out and we have a much more
useful global cpu dumping facility these days.  smp_report_regs()
is way too verbose to be usable.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-31 01:06:02 -07:00
David S. Miller
d172ad18f9 sparc64: Convert to generic helpers for IPI function calls.
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-17 23:44:50 -07:00
Jens Axboe
8691e5a8f6 smp_call_function: get rid of the unused nonatomic/retry argument
It's never used and the comments refer to nonatomic and retry
interchangeably. So get rid of it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:24:35 +02:00
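The visible interface change (signatures per this commit):

    /* before: the nonatomic/retry argument was never acted upon */
    int smp_call_function(void (*func)(void *), void *info,
                          int nonatomic, int wait);

    /* after */
    int smp_call_function(void (*func)(void *), void *info, int wait);
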
Jens Axboe
3d44223327 Add generic helpers for arch IPI function calls
This adds kernel/smp.c which contains helpers for IPI function calls. In
addition to supporting the existing smp_call_function() in a more efficient
manner, it also adds a more scalable variant called smp_call_function_single()
for calling a given function on a single CPU only.

The core of this is based on the x86-64 patch from Nick Piggin, with
many changes since then. "Alan D. Brunelle" <Alan.Brunelle@hp.com> has
contributed lots of fixes and suggestions as well. Also thanks to
Paul E. McKenney <paulmck@linux.vnet.ibm.com> for reviewing RCU usage
and getting rid of the data allocation fallback deadlock.

Acked-by: Ingo Molnar <mingo@elte.hu>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:21:34 +02:00
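A minimal usage sketch of the single-cpu variant (callback and target
cpu are made up; the signature shown is the one left after the
retry-argument removal above):

    #include <linux/smp.h>

    static void drain_remote_queue(void *info)
    {
            /* runs on the target cpu, in interrupt context */
    }

    static void kick_cpu3(void)
    {
            /* run drain_remote_queue() on cpu 3, wait for completion */
            smp_call_function_single(3, drain_remote_queue, NULL, 1);
    }
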
David S. Miller
93dae5b70e sparc64: Add global register dumping facility.
When a cpu really is stuck in the kernel, it can often be
impossible to figure out which cpu is stuck where.  The
worst case is when the stuck cpu has interrupts disabled.

Therefore, implement a global cpu state capture that uses
SMP message interrupts which are not disabled by the
normal IRQ enable/disable APIs of the kernel.

As long as we can get a sysrq 'y' to the kernel, we can
get a dump.  Even if the console interrupt cpu is wedged,
we can trigger it from userspace using /proc/sysrq-trigger.

The output is made compact so that this facility is more
useful on high cpu count systems, which is where it will
likely prove most useful :)

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-05-20 00:33:45 -07:00
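A hedged sketch of how such a sysrq hook is registered (handler body
elided; the structures follow the 2008-era sysrq API, and
sparc_globalreg_op is the name mentioned in the first commit of this
log):

    #include <linux/sysrq.h>

    static void sysrq_handle_globreg(int key, struct tty_struct *tty)
    {
            /* fire the register-capture cross call, then print one
             * compact line of state per cpu */
    }

    static struct sysrq_key_op sparc_globalreg_op = {
            .handler        = sysrq_handle_globreg,
            .help_msg       = "Globalregs",
            .action_msg     = "Show Global CPU Regs",
    };

    static int __init globalreg_init(void)
    {
            return register_sysrq_key('y', &sparc_globalreg_op);
    }

From userspace the same dump can then be requested by writing 'y' to
/proc/sysrq-trigger, as the commit notes.
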
David S. Miller
81d6ec6b36 Revert "[SPARC64]: Wrap SMP IPIs with irq_enter()/irq_exit()."
This reverts commit 2664ef44cf.

Ingo moved around where the softlockup dependency sits
so this change is no longer necessary.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-05-03 21:00:55 -07:00
Huang Weiyi
8cd0ae3acc sparc64: remove duplicated include
Remove dulicated include file <asm/timer.h> in arch/sparc64/kernel/smp.c.

Signed-off-by: Huang Weiyi <hwy@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-29 03:19:38 -07:00
David S. Miller
e2fdd7fd99 sparc: Add kgdb support.
Current limitations:

1) On SMP single stepping has some fundamental issues,
   shared with other sw single-step architectures such
   as mips and arm.

2) On 32-bit sparc we don't support SMP kgdb yet.  That
   requires some reworking of the IPI mechanisms and
   infrastructure on that platform.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-29 02:38:50 -07:00
David S. Miller
2664ef44cf [SPARC64]: Wrap SMP IPIs with irq_enter()/irq_exit().
Otherwise all sorts of bad things can happen, including
spurious softlockup reports.

Other platforms have this same bug, in one form or
another, just don't see the issue because they
don't sleep as long as sparc64 can in NOHZ.

Thanks to some brilliant debugging by Peter Zijlstra.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-25 03:11:37 -07:00
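The fix pattern, sketched against a generic IPI handler (handler name
hypothetical):

    #include <linux/hardirq.h>

    void smp_example_ipi(void)
    {
            irq_enter();    /* enter IRQ context: accounting, NOHZ wakeup */

            /* ... handle the cross call ... */

            irq_exit();     /* leave IRQ context; may run softirqs */
    }

Without the irq_enter()/irq_exit() pair the timekeeping and softlockup
machinery never learns the cpu woke up, which is one way the spurious
reports described above can arise.
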
David S. Miller
b97094560b [SPARC64]: Call real_setup_per_cpu_areas() earlier and use lmb_alloc().
We have to do it like this before we can move the PROM and MDESC device
tree code over to using lmb_alloc().

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-23 23:32:11 -07:00
David S. Miller
cf3d7c1ef4 [SPARC64]: Fix sparse warnings in arch/sparc64/kernel/time.c
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-26 01:11:55 -07:00
David S. Miller
64658743fd [SPARC64]: Remove most limitations to kernel image size.
Currently kernel images are limited to 8MB in size, and this causes
problems especially when enabling features that take up a lot of
kernel image space such as lockdep.

The code now will align the kernel image size up to 4MB and map that
many locked TLB entries.  So, the only practical limitation is the
number of available locked TLB entries which is 16 on Cheetah and 64
on pre-Cheetah sparc64 cpus.  Niagara cpus don't actually have hw
locked TLB entry support.  Rather, the hypervisor transparently
provides support for "locked" TLB entries since it runs with physical
addressing and does the initial TLB miss processing.

Fully utilizing this change requires some help from SILO, a patch for
which will be submitted to the maintainer.  Essentially, SILO currently
maps only up to 8MB for the kernel image, and that needs to be
increased.

Note that neither this patch nor the SILO bits will help with network
booting.  The openfirmware code will only map up to a certain amount
of kernel image during a network boot, and there isn't much we can do
about that other than to implement a layered network booting
facility.  Solaris has this, and calls it "wanboot", and we may
implement something similar at some point.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-21 17:01:38 -07:00
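The arithmetic, sketched from the numbers above: the image is rounded
up to a 4MB boundary and each 4MB span costs one locked TLB entry, so
Cheetah's 16 entries cap the image at 64MB.

    #define KTLB_SPAN       (4UL * 1024 * 1024)     /* 4MB per locked entry */

    static unsigned long locked_tlb_ents(unsigned long image_bytes)
    {
            return (image_bytes + KTLB_SPAN - 1) / KTLB_SPAN;
    }

    /* e.g. a 22MB lockdep-enabled image needs 6 entries; 16 entries
     * (Cheetah) allow up to 64MB, 64 entries (pre-Cheetah) up to 256MB. */
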
Sam Ravnborg
0f7f22d9a4 [SPARC64]: Fix cpu trampoline et al. mismatch warnings.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-20 22:22:16 -08:00
Adrian Bunk
6c81c32f96 calibrate_delay() must be __cpuinit
calibrate_delay() must be __cpuinit, not __{dev,}init.

I've verified that this is correct for all users.

While doing the latter, I also did the following cleanups:
- remove pointless additional prototypes in C files
- ensure all users #include <linux/delay.h>

This fixes the following section mismatches with CONFIG_HOTPLUG=n,
CONFIG_HOTPLUG_CPU=y:

WARNING: vmlinux.o(.text+0x1128d): Section mismatch: reference to .init.text.1:calibrate_delay (between 'check_cx686_slop' and 'set_cx86_reorder')
WARNING: vmlinux.o(.text+0x25102): Section mismatch: reference to .init.text.1:calibrate_delay (between 'smp_callin' and 'cpu_coregroup_map')

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Christian Zankel <chris@zankel.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-06 10:41:08 -08:00
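The annotation change itself is one word per symbol; a sketch of the
declaration:

    /* before: dropped after boot when CONFIG_HOTPLUG=n, yet still
     * reachable from smp_callin() when CONFIG_HOTPLUG_CPU=y */
    void __devinit calibrate_delay(void);

    /* after: discarded exactly when cpu hotplug is compiled out */
    void __cpuinit calibrate_delay(void);
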
David S. Miller
0de56d1ab8 [SPARC64]: Fix endless loop in cheetah_xcall_deliver().
We need to mask out the proper bits when testing the dispatch status
register else we can see unrelated NACK bits from previous cross call
sends.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-12-12 07:36:36 -08:00
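A sketch of the bug's shape (accessor and mask names hypothetical):
the dispatch status register carries NACK/BUSY bits per dispatch lane,
and bits from an earlier send must not be interpreted as failures of
this one.

    extern u64 read_dispatch_status(void);  /* hypothetical accessor */

    static int this_send_nacked(u64 lanes_used_this_send)
    {
            u64 status = read_dispatch_status();

            /* Testing the raw register (the old bug) also matches
             * stale NACK bits left over from previous cross-call
             * sends, so the retry loop could spin forever. */
            return (status & lanes_used_this_send) != 0;
    }
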
Joe Perches
519c4d2deb [SPARC64]: Add missing "space"
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-12-05 05:37:58 -08:00
David S. Miller
d979f1792d [SPARC64]: __inline__ --> inline
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-27 00:13:04 -07:00
Mike Travis
d5a7430ddc Convert cpu_sibling_map to be a per cpu variable
Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu
variable.  This saves sizeof(cpumask_t) * NR unused cpus.  Access is mostly
from startup and CPU HOTPLUG functions.

Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-16 09:42:50 -07:00
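The conversion pattern, as the commit describes it:

    /* before: NR_CPUS masks exist even for cpus that can never come up */
    cpumask_t cpu_sibling_map[NR_CPUS];

    /* after: one mask per possible cpu, living in the per-cpu area */
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);

    /* accesses change from cpu_sibling_map[cpu] to: */
    cpumask_t *sib = &per_cpu(cpu_sibling_map, cpu);
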
Akinobu Mita
1177bf9704 [SPARC64]: check fork_idle() error
Check the return value of fork_idle() to catch error.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-04 14:55:59 -07:00
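The check is the standard IS_ERR idiom; a sketch of the call site
(surrounding cpu-bringup code elided):

    #include <linux/err.h>

    struct task_struct *p = fork_idle(cpu);
    if (IS_ERR(p))
            return PTR_ERR(p);      /* propagate instead of oopsing later */
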
David S. Miller
b434e71933 [SPARC64]: Fix memory leak when cpu hotplugging.
Every time a cpu is added via hotplug, we allocate the per-cpu MONDO
queues but we never free them up.  Freeing isn't easy since the first
cpu gets this memory from bootmem.

Therefore, the simplest thing to do to fix this bug is to allocate the
queues for all possible cpus at boot time.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-08-08 17:33:52 -07:00
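A sketch of the fix's shape (allocator helper hypothetical): one
boot-time pass over every possible cpu, so the hotplug path never
allocates and therefore never leaks.

    static void __init init_mondo_queues(void)
    {
            int cpu;

            /* cpu_possible_map covers every cpu that could ever be
             * hot-added, including those not present at boot */
            for_each_possible_cpu(cpu)
                    alloc_one_mondo_queue(cpu);     /* hypothetical */
    }
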
David S. Miller
e0204409df [SPARC64]: dr-cpu unconfigure support.
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-16 04:05:32 -07:00
David S. Miller
39dd992aee [SPARC64]: Clear cpu_{core,sibling}_map[] in smp_fill_in_sib_core_maps()
When we hot-plug in new cpus, the core_id and proc_id of existing
cpus can change.  So in order to set the cpu groups correctly we
need to clear the maps out completely first.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-16 04:05:19 -07:00
David S. Miller
b37d40d175 [SPARC64]: Fix leak when DR added cpu does not bootup.
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-16 04:05:15 -07:00
David S. Miller
8b99cfb8cc [SPARC64]: More sensible udelay implementation.
Take a page from the powerpc folks and just calculate the
delay factor directly.

Since frequency scaling chips use a system-tick register,
the value is going to be the same system-wide.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-16 04:05:02 -07:00
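The direct calculation, sketched (accessor names illustrative): with a
system-tick register ticking at a fixed, scaling-independent rate, the
delay loop is just tick arithmetic.

    static unsigned long ticks_per_usec;    /* tick_freq / 1000000, set once */

    void __delay(unsigned long ticks)
    {
            unsigned long start = get_tick();   /* hypothetical accessor */

            while (get_tick() - start < ticks)
                    cpu_relax();
    }

    void udelay(unsigned long usecs)
    {
            __delay(usecs * ticks_per_usec);
    }
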
David S. Miller
27a2ef382c [SPARC64]: SMP build fixes.
With the move of ldom_startcpu_cpuid() into smp.c some other
things need to follow along:

1) smp.c is not a driver so we can't use "PFX" macro in the
   printk calls.

2) smp.c now needs asm/io.h and asm/hvtramp.h; ds.c no longer
   does.

3) kimage_addr_to_ra() also needs to move into smp.c

While we're here, update the copyright info and my email address
in smp.c.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-16 04:04:58 -07:00
David S. Miller
b14f5c100c [SPARC64]: Fix build regressions added by dr-cpu changes.
Do not select HOTPLUG_CPU from SUN_LDOMS; that causes
HOTPLUG_CPU to be selected even on non-SMP, which is
illegal.

Only build hvtramp.o when SMP, just like trampoline.o

Protect dr-cpu code in ds.c with HOTPLUG_CPU.

Likewise move ldom_startcpu_cpuid() to smp.c and protect
it and the call site with SUN_LDOMS && HOTPLUG_CPU.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-16 04:04:49 -07:00
David S. Miller
4f0234f4f9 [SPARC64]: Initial LDOM cpu hotplug support.
Only adding cpus is supported at the moment; removal
will come next.

When new cpus are configured, the machine description is
updated.  When we get the configure request we pass in a
cpu mask of to-be-added cpus to the mdesc CPU node parser
so it only fetches information for those cpus.  That code
also proceeds to update the SMT/multi-core scheduling bitmaps.

cpu_up() does all the work and we return the status back
over the DS channel.

CPUs via dr-cpu need to be booted straight out of the
hypervisor, and this requires:

1) A new trampoline mechanism.  CPUs are booted straight
   out of the hypervisor with MMU disabled and running in
   physical addresses with no mappings installed in the TLB.

   The new hvtramp.S code sets up the critical cpu state,
   installs the locked TLB mappings for the kernel, and
   turns the MMU on.  It then proceeds to follow the logic
   of the existing trampoline.S SMP cpu bringup code.

2) All calls into OBP have to be disallowed when domaining
   is enabled.  Since cpus boot straight into the kernel from
   the hypervisor, OBP has no state about that cpu and therefore
   cannot handle being invoked on that cpu.

   Luckily it's only a handful of interfaces which can be called
   after the OBP device tree is obtained.  For example, rebooting,
   halting, powering-off, and setting options node variables.

CPU removal support will require some infrastructure changes
here.  Namely we'll have to process the requests via a true
kernel thread instead of in a workqueue.  workqueues run on
a per-cpu thread, but when unconfiguring we might need to
force the thread to execute on another cpu if the current cpu
is the one being removed.  Removal of a cpu also causes the kernel
to destroy that cpu's workqueue running thread.

Another issue on removal is that we may have interrupts still
pointing to the cpu-to-be-removed.  So new code will be needed
to walk the active INO list and retarget those interrupts as needed.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-16 04:04:40 -07:00
Ingo Molnar
0437e109e1 sched: zap the migration init / cache-hot balancing code
the SMP load-balancer uses the boot-time migration-cost estimation
code to attempt to improve the quality of balancing. The reason for
this code is that the discrete priority queues do not preserve
the order of scheduling accurately, so the load-balancer skips
tasks that were running on a CPU 'recently'.

this code is fundamental fragile: the boot-time migration cost detector
doesnt really work on systems that had large L3 caches, it caused boot
delays on large systems and the whole cache-hot concept made the
balancing code pretty undeterministic as well.

(and hey, i wrote most of it, so i can say it out loud that it sucks ;-)

Under CFS the same cache-affinity goal is achieved without any
cache-hot special case: tasks are sorted in the 'timeline'
tree and the SMP balancer picks tasks from the left side of the
tree, so the most cache-cold task is balanced automatically.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-07-09 18:51:57 +02:00
David S. Miller
a2f9f6bbb3 [SPARC64]: Fix {mc,smt}_capable().
It's not just sun4v hypervisor platforms that should return true
for this; sun4u with UltraSPARC-IV should return true too.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-06-04 21:50:05 -07:00
David S. Miller
f78eae2e6f [SPARC64]: Proper multi-core scheduling support.
The scheduling domain hierarchy is:

   all cpus -->
      cpus that share an instruction cache -->
          cpus that share an integer execution unit

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-06-04 21:50:00 -07:00
David S. Miller
22adb358e8 [SPARC64]: Eliminate NR_CPUS limitations.
Cheetah systems can have cpuids as large as 1023, although physical
systems don't have that many cpus.

Only three limitations existed in the kernel preventing arbitrary
NR_CPUS values:

1) dcache dirty cpu state stored in page->flags on
   D-cache aliasing platforms.  With some build time
   calculations and some build-time BUG checks on
   page->flags layout, this one was easily solved.

2) The cheetah XCALL delivery code could only handle
   a cpumask with up to 32 cpus set.  Some simple looping
   logic clears that up too.

3) thread_info->cpu was a u8, easily changed to a u16.

There are a few spots in the kernel that still put NR_CPUS
sized arrays on the kernel stack, but that's not a sparc64
specific problem.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-05-29 02:49:49 -07:00
David S. Miller
5cbc307373 [SPARC64]: Use machine description and OBP properly for cpu probing.
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-05-29 02:49:41 -07:00
David S. Miller
17f34f0ec9 [SPARC64]: Add missing cpus_empty() check in hypervisor xcall handling.
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-05-14 02:01:52 -07:00
Randy Dunlap
e63340ae6b header cleaning: don't include smp_lock.h when not used
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.

Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-08 11:15:07 -07:00
Jeremy Fitzhardinge
b6e3590f81 [PATCH] x86: Allow percpu variables to be page-aligned
Let's allow page-alignment in general for per-cpu data (wanted by Xen, and
Ingo suggested KVM as well).

Because larger alignments can use more room, we increase the max per-cpu
memory to 64k rather than 32k: it's getting a little tight.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2007-05-02 19:27:12 +02:00
David S. Miller
112f48716d [SPARC64]: Add clocksource/clockevents support.
I'd like to thank John Stultz and others for helping
me along the way.

A lot of cleanups fell out of this.  For example, the get_compare()
tick_op was totally unused, so it was deleted.  And the most often used
tick_op members were grouped together for cache-friendliness.

The sparc64 TSC is given to the kernel as a one-shot timer.

tick_ops->init_timer() simply turns off the privileged bit in
the tick register (when possible), and disables the interrupt
by setting bit 63 in the compare register.  The ->disable_irq()
op also sets this bit.

tick_ops->add_compare() is changed to:

1) Add the given delta to "tick", not to "compare".
2) Return a boolean which, if true, means that the tick
   value read after writing the compare value was found
   to have incremented past the initial tick value.  This
   mirrors logic used in the HPET driver's ->next_event()
   method.

Each tick_ops implementation also now provides a name string.
And we feed this into the clocksource and clockevents layers.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-26 01:54:15 -07:00
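A sketch of the reworked ->add_compare() contract (register accessors
hypothetical):

    /* Returns nonzero if the deadline may already have passed, so the
     * caller can retry rather than sleep through a lost interrupt. */
    static int tick_add_compare(unsigned long delta)
    {
            unsigned long orig = read_tick();       /* hypothetical */

            write_compare(orig + delta);    /* arm relative to "tick"... */

            /* ...then, HPET-style, detect the race where the tick
             * crossed the deadline before the compare write landed */
            return read_tick() - orig >= delta;
    }
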
David S. Miller
777a447529 [SPARC64]: Unify timer interrupt handler.
Things were scattered all over the place, split between
SMP and non-SMP.

Unify it all so that dyntick support is easier to add.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-26 01:54:11 -07:00
Gautham R Shenoy
b282b6f8a8 [PATCH] Change cpu_up and co from __devinit to __cpuinit
Compiling the kernel with CONFIG_HOTPLUG = y and CONFIG_HOTPLUG_CPU = n
with CONFIG_RELOCATABLE = y generates the following modpost warnings

WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141b7d) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141b9c) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.text:__cpu_up
from .text between '_cpu_up' (at offset 0xc0141bd8) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141c05) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141c26) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141c37) and 'cpu_up'

This is because cpu_up, _cpu_up and __cpu_up (in some architectures) are
defined as __devinit
AND
__cpu_up calls some __cpuinit functions.

Since __cpuinit would map to __init with this kind of a configuration,
we get a .text referring to .init.data warning.

This patch solves the problem by converting all of __cpu_up, _cpu_up
and cpu_up from __devinit to __cpuinit. The approach is justified since
the callers of cpu_up are either dependent on CONFIG_HOTPLUG_CPU or
are of __init type.

Thus when CONFIG_HOTPLUG_CPU=y, all these cpu up functions would land up
in .text section, and when CONFIG_HOTPLUG_CPU=n, all these functions would
land up in .init section.

Tested on a i386 SMP machine running linux-2.6.20-rc3-mm1.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2007-01-11 18:18:20 -08:00