In hooking up the perf counter overflow interrupt to the experimental
deprecated-real-soon-now /proc/perf interface last night, I had to
revisit arch/mips/mips-boards/generic/time.c, and discovered that
when the 2.6.9-based SMTC prototype was merged with the more
recent tree, it was missed that arch/mips/kernel/time.c had changed
so that even in SMP kernels, timer_interrupt() calls
local_timer_interrupt(), so there is no longer a need to invoke it
directly from mips_timer_interrupt() in those cases where
timer_interrupt() has been called. So I got rid of that, and added the
invocation of perf_irq() if Cause.PCI is set, more-or-less following the
same logic as in the non-SMTC case, with the modifications that (a) a
runtime check for Release 2 isn't done, because it's redundant in SMTC,
and (b) we check for a clock interrupt regardless of the value returned
by the perf counter service - I don't understand why we'd want to control
that with perf_irq(), but maybe one of you knows the story. I also got
rid of the stupid warning about the unused variable when compiled for
SMTC (another artifact of the merge). The result hasn't been beaten to
death, but boots, seems stable, and supports extended precision event
counting.
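The resulting flow is roughly the sketch below; the handler name, the
perf_irq()/timer_interrupt() argument lists and the Cause bit macros are
assumptions for illustration, not the exact code in
arch/mips/mips-boards/generic/time.c:

static void smtc_timer_dispatch_sketch(int irq, struct pt_regs *regs)
{
	/*
	 * Service a perf counter overflow whenever Cause.PCI is set;
	 * no runtime Release 2 check, as that is redundant under SMTC.
	 */
	if (read_c0_cause() & CAUSEF_PCI)
		perf_irq();

	/*
	 * Check for a clock interrupt regardless of the value returned
	 * by the perf counter service; the non-SMTC path skips the tick
	 * when perf_irq() reports the interrupt as handled.
	 */
	if (read_c0_cause() & CAUSEF_TI)
		timer_interrupt(irq, NULL, regs);	/* argument list assumed */
}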
Signed-off-by: Kevin D. Kissell <kevink@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If a thread becomes runnable between the need_resched() check and the WAIT
instruction, the switch to that thread is delayed until the next interrupt.
Some CPUs can execute the WAIT instruction with interrupts disabled, so
we can get rid of this race on them (at least in the UP case).
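A minimal sketch of the resulting race-free idle pattern (illustrative
only; the function name is made up and the real code lives in the MIPS
wait/idle routines):

static void wait_irqoff_sketch(void)
{
	local_irq_disable();
	if (!need_resched())
		/*
		 * On these CPUs WAIT still wakes up on a pending but
		 * masked interrupt, which is then taken by the enable
		 * below, so no wakeup can be lost.
		 */
		__asm__ __volatile__(
		"	.set	mips3	\n"
		"	wait		\n"
		"	.set	mips0	\n");
	local_irq_enable();
}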
Original patch by Atsushi, with fixes for MIPS Technologies' cores by
Ralf based on feedback from the RTL designers.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* export asm/sgidefs.h
* include asm/isadep.h only if in kernel
* do not export contents of asm/timex.h and asm/user.h
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
I've encountered a serious problem with PCI config space access on Au1x000
platforms with recent 2.6.x kernels. With 2.4.31 the same hardware works fine.
So I was looking for the differences:
Symptoms:
- no PCI device is seen on bootup though two or three cards are present
- lspci output is empty
- OR: lspci shows the same device 20 times
(- OR: in some slot configurations it worked anyhow)
System(s):
1. platform with Au1500 and three PCI devices (actually a mycable XXS1500
with a backplane for three PCI devices)
2. platform with Au1550 and two PCI devices (custom board)
Debugging:
I dug down to config_access() for the au1xxx processors in
arch/mips/pci/ops-au1000.c and switched on DEBUG.
The code of config_access() seems to be almost the same as in the
2.4.x kernel. But the pci_cfg_vm->addr returned once at boot by
get_vm_area(0x2000, 0) is different. That's of course not forbidden, but
the alignment seems to be wrong. In my case I got:
2.4.31: pci_cfg_vm->addr = c0000000
2.6.18-rc5: pci_cfg_vm->addr = c0101000
To make it short: with 2.6.x it fails on the first config access with:
"PCI ERR detected: status 83a00356".
Fixup:
My fix is to use the VM_IOREMAP flag in the get_vm_area() call. This flag
seems to have been introduced in mm/vmalloc.c a long time ago (in 2.6.7-bk13,
as far as I could tell from gitweb).
Now, the returned address is pci_cfg_vm->addr = c0104000 and everything works
fine.
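In code, the change boils down to something like the following sketch
(the function wrapper and error handling are illustrative; the real
allocation happens once in arch/mips/pci/ops-au1000.c):

#include <linux/vmalloc.h>

static struct vm_struct *pci_cfg_vm;

static int alloc_pci_cfg_window(void)
{
	/*
	 * VM_IOREMAP makes get_vm_area() hand back a suitably aligned
	 * window for the 0x2000-byte config space mapping.
	 */
	pci_cfg_vm = get_vm_area(0x2000, VM_IOREMAP);
	if (!pci_cfg_vm)
		return -ENOMEM;
	return 0;
}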
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Since lmo commit 323a380bf9e1a1679a774a2b053e3c1f2aa3f179 ("Simplify
dump_stack()") made prepare_frametrace() always inlined, using $2 (v0)
in __asm__ is not safe anymore. We can use $1 (at) instead. Also we
should use "dla" instead of "la" for 64-bit kernels.
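The pattern now looks roughly like the sketch below (treat the exact
operand constraints as an assumption; the point is the $1/at scratch
register under .set noat, and dla on 64-bit kernels):

static __always_inline void prepare_frametrace(struct pt_regs *regs)
{
	__asm__ __volatile__(
		".set push\n\t"
		".set noat\n\t"			/* we clobber $1 (at) ourselves */
#ifdef CONFIG_64BIT
		"1: dla $1, 1b\n\t"		/* dla, not la, for 64-bit */
		"sd $1, %0\n\t"
		"sd $29, %1\n\t"
		"sd $31, %2\n\t"
#else
		"1: la $1, 1b\n\t"
		"sw $1, %0\n\t"
		"sw $29, %1\n\t"
		"sw $31, %2\n\t"
#endif
		".set pop"
		: "=m" (regs->cp0_epc), "=m" (regs->regs[29]),
		  "=m" (regs->regs[31])
		: : "memory");
}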
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
c-r4k.c and c-sb1.c use drop_mmu_context() to flush virtually tagged
I-caches, but this does not work for flushing another task's icache. This
is, for example, triggered by copy_to_user_page() called from ptrace(2).
Use indexed flush for such cases.
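The idea, as a rough sketch (the flush helpers below are modelled on the
blast_icache*() routines in c-r4k.c, but their names and prototypes here
are assumptions):

static void flush_icache_user_page_sketch(struct vm_area_struct *vma,
					  unsigned long vaddr,
					  unsigned long paddr)
{
	if (vma->vm_mm == current->active_mm) {
		/* Our own context: the virtual tags are valid here. */
		blast_icache_page(vaddr);
	} else {
		/*
		 * Another task's page, e.g. copy_to_user_page() from
		 * ptrace(2): its virtual tags are not reachable from
		 * this context, so flush the lines by cache index.
		 */
		blast_icache_page_indexed(paddr);
	}
}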
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
XTC can only be set if VPA is clear, which it may not be. There is
also the possibility of a back-to-back c0 register access hazard to
take care of.
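Roughly, the required sequence looks like this (accessors and field names
are modelled on asm/mipsmtregs.h; treat the exact names, and the
pre-shifted xtc_field argument, as assumptions):

static void set_exclusive_tc_sketch(unsigned int xtc_field)
{
	unsigned int conf0 = read_vpe_c0_vpeconf0();

	/* XTC can only be written while VPA is clear. */
	write_vpe_c0_vpeconf0(conf0 & ~VPECONF0_VPA);
	ehb();	/* settle the back-to-back c0 register access hazard */

	write_vpe_c0_vpeconf0((read_vpe_c0_vpeconf0() & ~VPECONF0_XTC) |
			      xtc_field);
	ehb();

	if (conf0 & VPECONF0_VPA)	/* re-activate only if it was active */
		write_vpe_c0_vpeconf0(read_vpe_c0_vpeconf0() | VPECONF0_VPA);
}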
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
dclz() expects its 64-bit argument to be passed in a single register,
but on 32-bit kernels it will actually be passed in a register pair.
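One way to sidestep this on 32-bit kernels is to work on the two halves
separately; a sketch using the compiler builtin rather than the kernel's
own helpers:

static inline int count_leading_zeros64(unsigned long long x)
{
	unsigned int hi = x >> 32;
	unsigned int lo = (unsigned int)x;

	if (hi)
		return __builtin_clz(hi);	/* answer is in the high word */
	if (lo)
		return 32 + __builtin_clz(lo);
	return 64;			/* clz builtins are undefined for 0 */
}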
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In entry.S, resume_userspace ... jal do_notify_resume forms a loop through
which the kernel will iterate as long as work is pending. If we
iterate through this loop more than once with no signal pending in at
least one iteration other than the last, we will do the syscall restarting
multiple times, resulting in a syscall return prior to the syscall
instruction in userspace. This may happen when debugging a multithreaded
program.
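The underlying idea can be sketched as follows (illustrative C, not the
entry.S change itself; on MIPS the saved regs[0] slot conventionally flags
"we came here from a syscall" and holds the original syscall number):

static void maybe_restart_syscall_sketch(struct pt_regs *regs)
{
	if (!regs->regs[0])		/* not (or no longer) in a syscall */
		return;

	if (regs->regs[2] == ERESTARTNOHAND ||
	    regs->regs[2] == ERESTARTSYS ||
	    regs->regs[2] == ERESTARTNOINTR) {
		regs->regs[2] = regs->regs[0];	/* restore the syscall number */
		regs->cp0_epc -= 4;		/* re-execute the syscall insn */
	}

	/*
	 * Consume the flag so another pass through the work-pending loop
	 * can never rewind the EPC a second time.
	 */
	regs->regs[0] = 0;
}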
Debugging and original fix by Maciej; extended to other ABIs by me.
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Recent 34Ks come out of reset with WP enabled on VPE 1, so we take an
immediate exception when starting the second VPE.
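The workaround amounts to clearing the bit on the target VPE before
releasing it; a sketch using the asm/mipsmtregs.h style accessors (the
assumption here is that WP refers to the deferred-watch bit in the VPE's
Cause register, and the exact call site is not shown):

static void clear_vpe1_wp_sketch(void)
{
	/*
	 * Runs on VPE 0 with the second VPE selected as the target;
	 * clear WP so no deferred watch exception fires the moment the
	 * VPE starts executing.
	 */
	write_vpe_c0_cause(read_vpe_c0_cause() & ~CAUSEF_WP);
}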
Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch allows unwind_stack() to return ra for leaf functions,
but it tries to detect cases where get_frame_info() wrongly
considers a nested function to be a leaf one.
It also passes 'unsigned long *sp' instead of 'unsigned long **sp'
as the second parameter. The code looks cleaner.
Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Now get_frame_info() wants to detect the 'move sp' instruction first. It
assumes that the instruction which saves ra on the stack cannot appear
before the one that allocates the frame space on the stack.
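The ordering assumption looks like this in sketch form (the helpers and
struct fields are modelled on arch/mips/kernel/process.c, but treat their
exact prototypes as assumptions):

static int scan_prologue_sketch(union mips_instruction *ip, int max_insns,
				struct mips_frame_info *info)
{
	int i, frame_found = 0;

	for (i = 0; i < max_insns; i++, ip++) {
		if (!frame_found) {
			/* Look for "addiu sp,sp,-framesize" first... */
			if (is_sp_move_ins(ip)) {
				info->frame_size = -ip->i_format.simmediate;
				frame_found = 1;
			}
			/* ...ignoring any apparent ra save before it. */
			continue;
		}
		if (is_ra_save_ins(ip)) {
			info->pc_offset =
				ip->i_format.simmediate / sizeof(long);
			return 0;
		}
	}
	return -1;	/* no ra save found: treat as a leaf function */
}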
Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Make the dump_stack() code not depend on CONFIG_KALLSYMS.
It also makes prepare_frametrace() always inlined to get
fewer false entries reported by show_raw_backtrace().
Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
We usually use the term 'backtrace' for dumping a call tree during
debugging. Therefore this patch renames show_frametrace() to
show_backtrace() and show_trace() to show_raw_backtrace().
It also uses the new print_ip_sym() function.
Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Instead of dumping every possible address found on the stack, unwind the
stack frame based on prologue code analysis, just like get_wchan() does.
Since the code analysis might fail for some reason, there is a new kernel
option "raw_show_trace" to disable this feature.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Print the call trace in show_stack() (like on other archs). Also make
show_trace() static and simplify its argument list.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Disable IRQs in flush_cache_4096 during the cache purge. Under certain
workloads we would get an IRQ in the middle of a purge operation,
and the cachelines would remain in an inconsistent state, leading
to occasional stack corruption.
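The fix reduces to wrapping the purge in an IRQ-disabled section, roughly
as in the sketch below (the inner purge helper is a placeholder, not the
exact SH-4 routine):

static void flush_cache_4096_sketch(unsigned long start, unsigned long phys)
{
	unsigned long flags;

	local_irq_save(flags);
	/* Purge the 4 kB worth of cache lines atomically w.r.t. IRQs. */
	__flush_cache_4096_lines(start, phys);	/* hypothetical helper */
	local_irq_restore(flags);
}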
Signed-off-by: Takeo Takahashi <takahashi.takeo@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Set the SHM alignment at runtime, based on the probed cache descriptor.
Optimize get_unmapped_area() to colour-align only shared mappings.
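The colour-alignment decision in get_unmapped_area() then reduces to
something like the sketch below (shm_align_mask is assumed to hold the
runtime value derived from the probed cache descriptor; the macro mirrors
the usual COLOUR_ALIGN definition):

extern unsigned long shm_align_mask;	/* set at boot from the cache probe */

#define COLOUR_ALIGN(addr, pgoff)				\
	((((addr) + shm_align_mask) & ~shm_align_mask) +	\
	 (((pgoff) << PAGE_SHIFT) & shm_align_mask))

static inline unsigned long colour_align_sketch(unsigned long addr,
						unsigned long pgoff,
						unsigned long flags)
{
	/*
	 * Only shared mappings can alias in the cache, so only they
	 * need to be pushed to a matching cache colour.
	 */
	if (flags & MAP_SHARED)
		return COLOUR_ALIGN(addr, pgoff);
	return addr;
}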
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements initial support for the vsyscall page on SH.
At the moment we leave it configurable, since nommu has to be
supported from the same code base. We hook it up for the
signal trampoline return at present, with more to be added
later, once uClibc catches up.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We want to be able to use PAGE_SIZE all over the place;
this is the same approach adopted by other architectures.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
flush_cache_mm() wraps into flush_cache_all(), which is rather
excessive given that the number of PTEs within the specified context
is generally quite low. Optimize by walking the mm's VMA list and
selectively flushing the VMA ranges from the dcache. Invalidate the
icache only if a VMA sets VM_EXEC.
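Roughly, the new walk looks like the sketch below (the range-flush
helpers are illustrative stand-ins for the SH cache routines):

static void flush_cache_mm_sketch(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		/* Write back/invalidate only this VMA's dcache range. */
		sketch_flush_dcache_range(vma->vm_start, vma->vm_end);

		/* The icache only matters for executable mappings. */
		if (vma->vm_flags & VM_EXEC)
			sketch_flush_icache_range(vma->vm_start,
						  vma->vm_end);
	}
}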
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Figure out the cache desc entry_mask at runtime, and remove the
hard-coded assumption about the cacheline size.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>