Annotate __switch_to() so that the function graph tracer does not try to
trace it. Use __notrace_funcgraph, as opposed to notrace, so that other
tracers can continue to trace __switch_to().
We don't want to trace __switch_to() with the function graph tracer
because of how the return address stack in task_struct is implemented.
When we enter __switch_to() we store the real return
address on prev's ret_stack. When we return from __switch_to() we've
patched the return address on the kernel stack to be
return_to_handler. From return_to_handler we then call:

        -> ftrace_return_to_handler()
           -> ftrace_pop_return_trace()

which tries to pop the real return address from current->ret_stack. The
problem being that we stored the return address on prev->ret_stack, but
current now points to next, and next->ret_stack doesn't contain the
correct return address (and is possibly even empty).
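Concretely, the change amounts to the sketch below; the prototype is
shown only for illustration, and the body of __switch_to() is unchanged
in the real patch and elided here:

    #include <linux/ftrace.h>
    #include <linux/sched.h>

    /*
     * __notrace_funcgraph expands to notrace only when the function graph
     * tracer is built in, so other tracers can still see __switch_to().
     */
    __notrace_funcgraph struct task_struct *
    __switch_to(struct task_struct *prev, struct task_struct *next)
    {
            /* register save/restore is unchanged and elided in this sketch */
            return prev;
    }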
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6: (56 commits)
sh: Fix declaration of __kernel_sigreturn and __kernel_rt_sigreturn
sh: Enable soc-camera in ap325rxa/migor/se7724 defconfigs.
sh: remove stray markers.
sh: defconfig updates.
sh: pci: Initial PCI-Express support for SH7786 Urquell board.
sh: Generic HAVE_PERF_COUNTER support.
SH: convert migor to soc-camera as platform-device
SH: convert ap325rxa to soc-camera as platform-device
soc-camera: unify i2c camera device platform data
sh: add platform data for r8a66597-hcd in setup-sh7723
sh: add platform data for r8a66597-hcd in setup-sh7366
sh: x3proto: add platform data for r8a66597-hcd
sh: highlander: add platform data for r8a66597-hcd
sh: sh7785lcr: add platform data for r8a66597-hcd
sh: turn off irqs when disabling CMT/TMU timers
sh: use kzalloc() for cpg clocks
sh: unbreak WARN_ON()
sh: Use generic atomic64_t implementation.
sh: Revised clock function in highlander
sh: Update r7780mp defconfig
...
avr32, mn10300, parisc, s390, sh, xtensa:
They never set PT_DTRACE, but clear it after do_execve().
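For illustration, the dead pattern removed on these architectures looks
roughly like the sketch below (hypothetical, not the verbatim diff from
any one of them):

    #include <linux/ptrace.h>
    #include <linux/sched.h>

    /*
     * Sketch only: PT_DTRACE is cleared after a successful exec even
     * though nothing on these architectures ever sets it.
     */
    static void clear_pt_dtrace_after_exec_sketch(int error)
    {
            if (error == 0) {
                    task_lock(current);
                    current->ptrace &= ~PT_DTRACE;  /* dead: never set here */
                    task_unlock(current);
            }
    }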
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Matthew Wilcox <matthew@wil.cx>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Chris Zankel <chris@zankel.net>
Acked-by: Roland McGrath <roland@redhat.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/sh has a couple of stray markers, introduced in commit
3d58695edb, that have no users. Remove them in preparation for
removing the markers in favour of the TRACE_EVENT macro (and also
because we don't keep dead code around).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6: (23 commits)
sh: sh7785lcr: Map whole PCI address space.
sh: Fix up DSP context save/restore.
sh: Fix up number of on-chip DMA channels on SH7091.
sh: update defconfigs.
sh: Kill off broken direct-mapped cache mode.
sh: Wire up ARCH_HAS_DEFAULT_IDLE for cpuidle.
sh: Add a command line option for disabling I/O trapping.
sh: Select ARCH_HIBERNATION_POSSIBLE.
sh: migor: Fix up CEU use flags.
input: migor_ts: add wakeup support
rtc: rtc-sh: use set_irq_wake()
input: sh_keysc: use enable/disable_irq_wake()
sh: intc: set_irq_wake() support
sh: intc: install enable, disable and shutdown callbacks
clocksource: sh_cmt: use remove_irq() and remove clockevent workaround
sh: ap325 and Migo-R use new sh_mobile_ceu_info flags
sh: Fix up -Wformat-security whining.
sh: ap325rxa: Add ov772x support, again.
sh: Sanitize asm/mmu.h for assembly use.
sh: Tidy up sh7786 pinmux table.
...
There were a number of issues with the DSP context save/restore code,
mostly left-over relics from when it was introduced on SH3-DSP with
little follow-up testing, resulting in things like task_pt_dspregs()
referencing incorrect state on the stack.
This follows the MIPS convention of tracking the DSP state in the
thread_struct and handling the state save/restore in switch_to() and
finish_arch_switch() respectively. The regset interface is also updated,
which allows us to finally be rid of task_pt_dspregs() and the
special-cased task_pt_regs().
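Schematically the split looks like the sketch below; every name in it
is an illustrative stand-in rather than an actual sh symbol.

    /* The DSP block lives in thread_struct, not in the pt_regs area
     * on the kernel stack. */
    struct dsp_state_sketch {
            unsigned long regs[14];   /* a0/a1, x0/x1, y0/y1, m0/m1, dsr, ... */
    };

    /* Stubs for the inline-asm helpers that dump/reload the DSP register file. */
    static void dsp_save_sketch(struct dsp_state_sketch *dsp)    { (void)dsp; }
    static void dsp_restore_sketch(struct dsp_state_sketch *dsp) { (void)dsp; }

    /* switch_to() side: save the outgoing task's DSP context into its
     * thread_struct, but only if that task actually uses the DSP. */
    static void dsp_context_save(struct dsp_state_sketch *prev_dsp, int prev_uses_dsp)
    {
            if (prev_uses_dsp)
                    dsp_save_sketch(prev_dsp);
    }

    /* finish_arch_switch() side: restore the incoming task's DSP context. */
    static void dsp_context_restore(struct dsp_state_sketch *next_dsp, int next_uses_dsp)
    {
            if (next_uses_dsp)
                    dsp_restore_sketch(next_dsp);
    }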
Signed-off-by: Michael Trimarchi <michael@evidence.eu.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This can use the same implementation as sh64; the generated assembly
is the same for the new and old versions, so there is not much point
in leaving it open-coded in inline assembly.
This is preparatory work for future consolidation of the _32/_64
variants.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Description snipped from Steven Rostedt's PPC patch:
When idle is called, interrupts are blocked, but the idle
function will still wake up on an interrupt. The problem is
that the interrupts-disabled latency tracer will count this call
to idle as a latency.
This patch disables the latency tracing when going into idle.
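A minimal sketch of the pattern, assuming the generic
stop_critical_timings()/start_critical_timings() hooks from
<linux/irqflags.h>; the sleep wrapper below is a stand-in for the
arch-specific sleep instruction:

    #include <linux/irqflags.h>
    #include <linux/sched.h>

    /* Stand-in for the arch-specific "sleep"/halt instruction wrapper. */
    static inline void arch_sleep_sketch(void) { }

    /* Tell the irqs-off latency tracer to ignore the time spent asleep. */
    static void idle_sleep_sketch(void)
    {
            local_irq_disable();
            if (!need_resched()) {
                    stop_critical_timings();   /* latency tracing off while we sleep */
                    local_irq_enable();
                    arch_sleep_sketch();
                    start_critical_timings();  /* resume latency tracing on wakeup */
            } else {
                    local_irq_enable();
            }
    }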
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements a simple show_code() that is in turn plugged into
show_regs() to provide minimal code dumping at the end of the trace.
Built on top of a simple instruction disassembler derived from the
binutils opcode table.
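As a rough sketch of the plumbing (the opcode-table disassembly step
is omitted and access checking is glossed over; names and details are
illustrative):

    #include <linux/kernel.h>
    #include <linux/ptrace.h>
    #include <linux/uaccess.h>

    /* Dump a handful of 16-bit opcodes around the PC; the real show_code()
     * also feeds each opcode through the small disassembler. */
    static void show_code_sketch(struct pt_regs *regs)
    {
            unsigned short *pc = (unsigned short *)instruction_pointer(regs);
            int i;

            printk(KERN_DEBUG "Code:\n");
            for (i = -3; i < 6; i++) {
                    unsigned short insn;

                    if (__get_user(insn, pc + i))
                            break;
                    printk(KERN_DEBUG "%s %08lx:  %04x\n",
                           i ? " " : "*", (unsigned long)(pc + i), insn);
            }
    }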
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements a number of trace points across events that are
deemed interesting:
- The page fault handler / TLB miss
- IPC calls
- Kernel thread creation
The original LTTng patch had the slow-path instrumented, which
fails to account for the vast majority of events. In general
placing this in the fast-path is not a huge performance hit, as
we don't take page faults for kernel addresses.
The other bits of interest are some of the other trap handlers, as
well as the syscall entry/exit (which is better off being handled
through the tracehook API). Most of the other trap handlers are corner
cases where alternate means of notification exist, so there is little
value in placing extra trace points in these locations.
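For reference, a trace point of this kind can be written in
TRACE_EVENT style roughly as follows; the event name and fields are
purely illustrative, not the ones added by this patch. The fast path
then only needs a trace_sh_fault_sketch(address, write) call at the
point of interest.

    #undef TRACE_SYSTEM
    #define TRACE_SYSTEM sh

    #if !defined(_TRACE_SH_SKETCH_H) || defined(TRACE_HEADER_MULTI_READ)
    #define _TRACE_SH_SKETCH_H

    #include <linux/tracepoint.h>

    TRACE_EVENT(sh_fault_sketch,        /* hypothetical event name */
            TP_PROTO(unsigned long address, int write),
            TP_ARGS(address, write),
            TP_STRUCT__entry(
                    __field(unsigned long, address)
                    __field(int, write)
            ),
            TP_fast_assign(
                    __entry->address = address;
                    __entry->write = write;
            ),
            TP_printk("address=%08lx write=%d", __entry->address, __entry->write)
    );

    #endif /* _TRACE_SH_SKETCH_H */

    /* This part must be outside the include guard. */
    #include <trace/define_trace.h>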
This is based on the trace points provided both by the LTTng
instrumentation patch and by the patch shipping in the ST-Linux tree,
albeit in a stripped-down form.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch contains the following cleanups:
- make the following needlessly global code static:
  - cf-enabler.c: cf_init()
  - cpu/clock.c: __clk_enable()
  - cpu/clock.c: __clk_disable()
  - process_32.c: default_idle()
  - time_32.c: struct clocksource_sh
  - timers/timer-tmu.c: struct tmu_timer_ops
- remove the following unused functions (no CONFIG_BLK_DEV_FD on sh):
  - process_{32,64}.c: disable_hlt()
  - process_{32,64}.c: enable_hlt()
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Jack Ren and Eric Miao tracked down the following long standing
problem in the NOHZ code:
        scheduler switch to idle task
        enable interrupts

        Window starts here

                ----> interrupt happens (does not set NEED_RESCHED)
                        irq_exit() stops the tick

                ----> interrupt happens (does set NEED_RESCHED)

                return from schedule()

                cpu_idle(): preempt_disable();

        Window ends here
The interrupts can happen at any point inside the race window. The
first interrupt stops the tick, the second one causes the scheduler to
rerun and switch away from idle again and we end up with the tick
disabled.
The fact that it needs two interrupts, where the first one does not
set NEED_RESCHED and the second one does, made the bug obscure and
extremely hard to reproduce and analyse. Kudos to Jack and Eric.
Solution: Limit the NOHZ functionality to the idle loop to make sure
that we cannot run into such a situation ever again.
        cpu_idle()
        {
                preempt_disable();

                while (1) {
                        tick_nohz_stop_sched_tick(1);   <- tell NOHZ code that
                                                           we are in the idle loop

                        while (!need_resched())
                                halt();

                        tick_nohz_restart_sched_tick(); <- disables NOHZ mode
                        preempt_enable_no_resched();
                        schedule();
                        preempt_disable();
                }
        }
In hindsight we should have done this forever, but ...
/me grabs a large brown paperbag.
Debugged-by: Jack Ren <jack.ren@marvell.com>,
Debugged-by: eric miao <eric.y.miao@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Presently, with preemption enabled, it is possible to be preempted
after the TIF_USEDFPU test and the register save, leading to bogus
state post-__switch_to(). Use an explicit preempt_disable()/enable()
pair around unlazy_fpu()/clear_fpu() to avoid this. This follows the
x86 change.
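A sketch of the resulting shape, following the x86-style wrappers; the
double-underscore helpers below are stand-ins for the existing sh
save/clear routines, with illustrative signatures:

    #include <linux/preempt.h>
    #include <linux/sched.h>

    struct pt_regs;

    /* Stand-ins: test TIF_USEDFPU and dump/drop the FPU registers. */
    static inline void __unlazy_fpu_sketch(struct task_struct *tsk,
                                           struct pt_regs *regs) { }
    static inline void __clear_fpu_sketch(struct task_struct *tsk,
                                          struct pt_regs *regs) { }

    /* The fix wraps each helper so the flag test and the register save
     * cannot be separated from each other by preemption. */
    #define unlazy_fpu_sketch(tsk, regs) do {   \
            preempt_disable();                  \
            __unlazy_fpu_sketch(tsk, regs);     \
            preempt_enable();                   \
    } while (0)

    #define clear_fpu_sketch(tsk, regs) do {    \
            preempt_disable();                  \
            __clear_fpu_sketch(tsk, regs);      \
            preempt_enable();                   \
    } while (0)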
Reported-by: Takuo Koguchi <takuo.koguchi.sw@hitachi.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements kernel-level atomic rollback built on top of gUSA as
an alternative, non-IRQ-based atomicity method. It is generally
faster for platforms lacking the LL/SC pairs that SH-4A and later
use, and is only supportable on legacy cores.
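For flavour, a gUSA-style atomic add looks roughly like the sketch
below (register usage and constraints are illustrative, patterned
after the atomic-grb.h approach): r15 is loaded with a small negative
value for the duration of the critical section, so if an exception
hits, the fixup path can roll the PC back to the start instead of
relying on IRQ masking or LL/SC.

    static inline void atomic_add_grb_sketch(int i, int *v)
    {
            int tmp;

            __asm__ __volatile__ (
                    "   .align 2              \n\t"
                    "   mova    1f,   r0      \n\t" /* r0 = rollback target */
                    "   mov     r15,  r1      \n\t" /* save the real stack pointer */
                    "   mov     #-6,  r15     \n\t" /* enter gUSA: 6-byte critical section */
                    "   mov.l   @%1,  %0      \n\t" /* load the old value */
                    "   add     %2,   %0      \n\t" /* add */
                    "   mov.l   %0,   @%1     \n\t" /* store the new value */
                    "1: mov     r1,   r15     \n\t" /* leave gUSA: restore the stack pointer */
                    : "=&r" (tmp)
                    : "r" (v), "r" (i)
                    : "memory", "r0", "r1");
    }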
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>