Commit Graph

25 Commits

Author SHA1 Message Date
Thomas Gleixner
00d1a39e69 preempt: Make PREEMPT_ACTIVE generic
No point in having this bit defined by architecture.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20130917183629.090698799@linutronix.de
2013-11-13 20:21:47 +01:00
Thomas Gleixner
ee761f629d arch: Consolidate tsk_is_polling()
Move it to a common place. Preparatory patch for implementing
set/clear for the idle need_resched poll implementation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Link: http://lkml.kernel.org/r/20130321215233.446034505@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2013-04-08 17:39:22 +02:00
Al Viro
16a8016372 sanitize tsk_is_polling()
Make the default just return 0.  The current default (checking
TIF_POLLING_NRFLAG) is moved to the architectures that need it;
ones that don't do polling in their idle threads don't need
to define TIF_POLLING_NRFLAG at all.

ia64 defined both TS_POLLING (used by its tsk_is_polling())
and TIF_POLLING_NRFLAG (not used at all).  Killed the latter...
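
A minimal sketch of the resulting pattern (illustrative only, not the
literal patch): the core falls back to 0 unless the architecture
overrides the macro.

    /* core fallback, used when the arch doesn't poll in idle: */
    #ifndef tsk_is_polling
    #define tsk_is_polling(t) 0
    #endif

    /* an architecture that does poll in its idle loop: */
    #define tsk_is_polling(t) \
            test_tsk_thread_flag(t, TIF_POLLING_NRFLAG)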

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-10-01 09:58:13 -04:00
Al Viro
edd63a2763 set_restore_sigmask() is never called without SIGPENDING (and never should be)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-06-01 12:58:50 -04:00
Al Viro
4ebefe3ec7 new helpers: {clear,test,test_and_clear}_restore_sigmask()
helpers parallel to set_restore_sigmask(), used in the next commits
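
Roughly what the TIF_RESTORE_SIGMASK-based variants look like (a
sketch, not the exact patch):

    static inline void clear_restore_sigmask(void)
    {
            clear_thread_flag(TIF_RESTORE_SIGMASK);
    }

    static inline bool test_restore_sigmask(void)
    {
            return test_thread_flag(TIF_RESTORE_SIGMASK);
    }

    static inline bool test_and_clear_restore_sigmask(void)
    {
            return test_and_clear_thread_flag(TIF_RESTORE_SIGMASK);
    }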

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-06-01 12:58:47 -04:00
Linus Torvalds
1d767cae4d SuperH updates for 3.5-rc1 merge window
- New CPUs: SH7734 (SH-4A), SH7264 and SH7269 (SH-2A)
 - New boards: RSK2+SH7264, RSK2+SH7269
 - Unbreaking kgdb for SMP
 - Consolidation of _32/_64 page fault handling.
 - watchdog and legacy DMA chainsawing, part 1
 - Conversion to evt2irq() hwirq lookup, to support relocation
   of vectored IRQs for irqdomains.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.11 (GNU/Linux)
 
 iEYEABECAAYFAk+7gb4ACgkQGkmNcg7/o7hoPQCgvdQGi9dk3ewIBX9LQ9mL6L81
 ls8An3PMKi9fHANnztVUAheP1U2DEanJ
 =v/VS
 -----END PGP SIGNATURE-----

Merge tag 'sh-for-linus' of git://github.com/pmundt/linux-sh

Pull SuperH updates from Paul Mundt:
 - New CPUs: SH7734 (SH-4A), SH7264 and SH7269 (SH-2A)
 - New boards: RSK2+SH7264, RSK2+SH7269
 - Unbreaking kgdb for SMP
 - Consolidation of _32/_64 page fault handling.
 - watchdog and legacy DMA chainsawing, part 1
 - Conversion to evt2irq() hwirq lookup, to support relocation of
   vectored IRQs for irqdomains.

* tag 'sh-for-linus' of git://github.com/pmundt/linux-sh: (98 commits)
  sh: intc: Kill off special reservation interface.
  sh: Enable PIO API for hp6xx and se770x.
  sh: Kill off machvec IRQ hinting.
  sh: dma: More legacy cpu dma chainsawing.
  sh: Kill off MAX_DMA_ADDRESS leftovers.
  sh: Tidy up some of the cpu legacy dma header mess.
  sh: Move sh4a dma header from cpu-sh4 to cpu-sh4a.
  sh64: Fix up vmalloc fault range check.
  Revert "sh: Ensure fixmap and store queue space can co-exist."
  serial: sh-sci: Fix for port types without BRI interrupts.
  sh: legacy PCI evt2irq migration.
  sh: cpu dma evt2irq migration.
  sh: sh7763rdp evt2irq migration.
  sh: sdk7780 evt2irq migration.
  sh: migor evt2irq migration.
  sh: landisk evt2irq migration.
  sh: kfr2r09 evt2irq migration.
  sh: ecovec24 evt2irq migration.
  sh: ap325rxa evt2irq migration.
  sh: urquell evt2irq migration.
  ...
2012-05-23 09:00:40 -07:00
Paul Mundt
5a1dc78a38 sh: Support thread fault code encoding.
This provides a simple interface modelled after sparc64/m32r to encode
the error code in the upper byte of thread_info for finer-grained
handling in the page fault path.
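
A sketch of the sparc64/m32r-style accessors this adds (the shift
value and field layout here are illustrative):

    #define TI_FAULT_CODE_SHIFT 24

    static inline void set_thread_fault_code(unsigned int val)
    {
            struct thread_info *ti = current_thread_info();

            ti->flags = (ti->flags & ((1UL << TI_FAULT_CODE_SHIFT) - 1))
                        | ((unsigned long)val << TI_FAULT_CODE_SHIFT);
    }

    static inline unsigned int get_thread_fault_code(void)
    {
            return current_thread_info()->flags >> TI_FAULT_CODE_SHIFT;
    }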

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2012-05-14 14:57:28 +09:00
Thomas Gleixner
df9a7b9b5d sh-use-common-threadinfo-allocator
The core now has a threadinfo allocator which uses a kmemcache when
THREAD_SIZE < PAGE_SIZE.

Deal with the xstate cleanup in the new arch_release_task_struct()
function.
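
The xstate cleanup then reduces to something along these lines
(sketch, assuming the existing free_thread_xstate() helper):

    void arch_release_task_struct(struct task_struct *tsk)
    {
            /* drop the dynamically allocated FPU/xstate area */
            free_thread_xstate(tsk);
    }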

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul Mundt <lethal@linux-sh.org>
Link: http://lkml.kernel.org/r/20120505150142.189348931@linutronix.de
2012-05-08 14:08:45 +02:00
Thomas Gleixner
6c0a9fa62f fork: Remove the weak insanity
We error out when compiling with gcc4.1.[01] as it miscompiles
__weak. The workaround with magic defines is no longer
necessary. Make it __weak again.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20120505150141.306358267@linutronix.de
2012-05-08 13:55:20 +02:00
Tejun Heo
d88e4cb671 freezer: remove now unused TIF_FREEZE
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arch@vger.kernel.org
2011-11-21 12:32:25 -08:00
Eric Dumazet
b6a84016bd mm: NUMA aware alloc_thread_info_node()
Add a node parameter to alloc_thread_info(), and change its name to
alloc_thread_info_node().

This change is needed to allow a NUMA-aware kthread_create_on_cpu().
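
Illustrative shape of the change (the gfp and order macro names here
are placeholders, not the real ones):

    static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
                                                      int node)
    {
            struct page *page = alloc_pages_node(node, THREAD_GFP_FLAGS,
                                                 THREAD_SIZE_ORDER);

            return page ? page_address(page) : NULL;
    }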

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tejun Heo <tj@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: David Howells <dhowells@redhat.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-03-22 17:44:01 -07:00
Andreas Dilger
0ddc9324b1 add descriptive comment for TIF_MEMDIE task flag declaration.
Signed-off-by: Andreas Dilger <adilger@dilger.ca>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2010-05-14 11:13:27 +02:00
Paul Mundt
0ea820cf9b sh: Move over to dynamically allocated FPU context.
This follows the x86 xstate changes and implements a task_xstate slab
cache that is dynamically sized to match one of hard FP/soft FP/FPU-less.

This also tidies up and consolidates some of the SH-2A/SH-4 FPU
fragmentation. Now fpu state restorers are commonly defined, with the
init_fpu()/fpu_init() mess reworked to follow the x86 convention.
The fpu_init() register initialization has been replaced by xstate setup
followed by writing out to hardware via the standard restore path.

As init_fpu() now performs a slab allocation, a secondary lighter-weight
restorer is also introduced for the context switch.

In the future the DSP state will be rolled in here, too.

More work remains for math emulation and the SH-5 FPU, which presently
uses its own special (UP-only) interfaces.
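
Conceptually the xstate area comes out of a dedicated kmem_cache sized
at boot, modelled on the x86 code this follows (sketch):

    struct kmem_cache *task_xstate_cachep;

    void __init arch_task_cache_init(void)
    {
            if (!xstate_size)
                    return;

            task_xstate_cachep = kmem_cache_create("task_xstate", xstate_size,
                                                   __alignof__(union thread_xstate),
                                                   SLAB_PANIC, NULL);
    }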

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-13 12:51:40 +09:00
Paul Mundt
cbf6b1ba7a sh: Always provide thread_info allocators.
Presently the thread_info allocators are special cased, depending on
THREAD_SHIFT < PAGE_SHIFT. This provides a sensible definition for them
regardless of configuration, in preparation for extended CPU state.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-12 19:01:11 +09:00
Stuart Menefy
d3ea9fa0a5 sh: Minor optimisations to FPU handling
A number of small optimisations to FPU handling, in particular:

 - move the task USEDFPU flag from the thread_info flags field (which
   is accessed asynchronously to the thread) to a new status field,
   which is only accessed by the thread itself. This allows locking to
   be removed in most cases, or can be reduced to a preempt_lock().
   This mimics the i386 behaviour.

 - move the modification of regs->sr and thread_info->status flags out
   of save_fpu() to __unlazy_fpu(). This gives the compiler a better
   chance to optimise things, as well as making save_fpu() symmetrical
   with restore_fpu() and init_fpu().

 - implement prepare_to_copy(), so that when creating a thread, we can
   unlazy the FPU prior to copying the thread data structures.

Also make sure that the FPU is disabled while in the kernel, in
particular while booting, and for newly created kernel threads.

In a very artificial benchmark, the execution time for 2500000
context switches was reduced from 50 to 45 seconds.
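
With USEDFPU living in the thread-local status field, __unlazy_fpu()
can test and clear it without any atomic ops, roughly (sketch):

    static inline void __unlazy_fpu(struct task_struct *tsk, struct pt_regs *regs)
    {
            if (task_thread_info(tsk)->status & TS_USEDFPU) {
                    task_thread_info(tsk)->status &= ~TS_USEDFPU;
                    save_fpu(tsk);
                    release_fpu(regs);
            }
    }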

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-24 17:45:38 +09:00
Paul Mundt
56bfc42f6c sh: TS_RESTORE_SIGMASK conversion.
Replace TIF_RESTORE_SIGMASK with TS_RESTORE_SIGMASK and define our own
set_restore_sigmask() function.  This saves the costly SMP-safe set_bit
operation, which we do not need for the sigmask flag since TIF_SIGPENDING
always has to be set too.

Based on the x86 and powerpc change.
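
The sh-specific set_restore_sigmask() then becomes a plain
read-modify-write on the thread-local status word instead of an atomic
set_bit() (sketch; the flag value is illustrative):

    #define TS_RESTORE_SIGMASK 0x0008

    static inline void set_restore_sigmask(void)
    {
            struct thread_info *ti = current_thread_info();

            ti->status |= TS_RESTORE_SIGMASK;
            /* TIF_SIGPENDING still needs the atomic set_bit, since
               other CPUs may test it */
            set_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags);
    }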

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 16:05:42 +09:00
Paul Mundt
a74f7e0410 sh: Wire up HAVE_SYSCALL_TRACEPOINTS.
This is necessary to get ftrace syscall tracing working again; a fairly
trivial and mechanical change. The one benefit is that this can also be
enabled on sh64, despite not having its own ftrace port.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-16 14:30:34 +09:00
Paul Mundt
f686d8c11c Merge branches 'sh/ftrace' and 'sh/stable-updates' 2009-07-11 10:08:33 +09:00
Peter Zijlstra
c99e6efe1b sched: INIT_PREEMPT_COUNT
Pull the initial preempt_count value into a single
definition site.

Maintainers for alpha, ia64 and m68k: please have a look,
your arch code is funny.

The header magic is a bit odd, but similar to the KERNEL_DS
one: CPP waits to expand these macros until the
INIT_THREAD_INFO macro itself is expanded, which happens in
arch/*/kernel/init_task.c, where we've already included
sched.h, so we're good.
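
The single definition site ends up being a one-liner along these lines
(as I recall it from that era; treat it as a sketch):

    /*
     * Fresh tasks start non-preemptible; PREEMPT_ACTIVE is included so
     * cond_resched() stays inert until the scheduler is up.
     */
    #define INIT_PREEMPT_COUNT      (1 + PREEMPT_ACTIVE)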

Cc: tony.luck@intel.com
Cc: rth@twiddle.net
Cc: geert@linux-m68k.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-07-10 14:24:05 -07:00
Matt Fleming
c652d780c9 sh: Add ftrace syscall tracing support
Now that I've added TIF_SYSCALL_FTRACE, the thread flags do not fit into
a single byte any more. Code testing them now needs to be aware of the
upper and lower bytes.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-06 20:16:33 +09:00
Paul Mundt
c15c5f8c2b sh: Support kernel stacks smaller than a page.
This follows the powerpc commit f6a616800e
'[POWERPC] Fix kernel stack allocation alignment'.

SH has traditionally forced the thread order to be relative to the page
size, so there were never any situations where the same bug was
triggered by slub. Regardless, the usage of > 8kB stacks for the larger
page sizes is overkill, so we switch to using slab allocations there,
as per the powerpc change.
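
When THREAD_SHIFT < PAGE_SHIFT the allocators switch to a dedicated
kmem_cache, roughly as follows (sketch modelled on the powerpc change):

    #if THREAD_SHIFT < PAGE_SHIFT
    static struct kmem_cache *thread_info_cache;

    struct thread_info *alloc_thread_info(struct task_struct *tsk)
    {
            return kmem_cache_alloc(thread_info_cache, GFP_KERNEL);
    }

    void free_thread_info(struct thread_info *ti)
    {
            kmem_cache_free(thread_info_cache, ti);
    }
    #else
    struct thread_info *alloc_thread_info(struct task_struct *tsk)
    {
            return (struct thread_info *)
                    __get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
    }

    void free_thread_info(struct thread_info *ti)
    {
            free_pages((unsigned long)ti, THREAD_SIZE_ORDER);
    }
    #endif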

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-20 20:21:33 +09:00
Paul Mundt
ab99c733ae sh: Make syscall tracer use tracehook notifiers, add TIF_NOTIFY_RESUME.
This follows the changes in commits:

7d6d637dac
4f72c4279e

on powerpc. Adding in TIF_NOTIFY_RESUME, and cleaning up the syscall
tracing to be more generic. This is an incremental step to turning
on tracehook, as well as unifying more of the ptrace and signal code
across the 32/64 split.
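
The TIF_NOTIFY_RESUME side boils down to a hook in the
return-to-userspace path, roughly (sketch; the real sh signature
carries more arguments):

    asmlinkage void do_notify_resume(struct pt_regs *regs,
                                     unsigned long thread_info_flags)
    {
            if (thread_info_flags & _TIF_SIGPENDING)
                    do_signal(regs, 0);

            if (thread_info_flags & _TIF_NOTIFY_RESUME) {
                    clear_thread_flag(TIF_NOTIFY_RESUME);
                    tracehook_notify_resume(regs);
            }
    }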

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-08-02 04:39:33 +09:00
Paul Mundt
c4637d4751 sh: seccomp support.
This hooks up the seccomp thread flag and associated callback from the
syscall tracer.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-08-02 04:39:32 +09:00
Paul Mundt
cec3fd3e2a sh: Tidy up the _TIF work masks, and fix syscall trace bug on singlestep.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-08-02 04:39:32 +09:00
Paul Mundt
f15cbe6f1a sh: migrate to arch/sh/include/
This follows the sparc changes a439fe51a1.

Most of the moving about was done with Sam's directions at:

http://marc.info/?l=linux-sh&m=121724823706062&w=2

with subsequent hacking and fixups entirely my fault.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-07-29 08:09:44 +09:00