Since the system reset interrupt began to use its own stack, and
machine check interrupts have done so for some time, r1 can be
changed without clearing MSR[RI], provided no other interrupts
(including SLB misses) are taken.
MSR[RI] does have to be cleared when using SCRATCH0, however.
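As an illustrative sketch (not the actual handler code), the asymmetry
is that a scratch SPR holds the only copy of a register, while the
stack switch is now recoverable:

    SET_SCRATCH0(r13)  /* only copy of r13 lives in SCRATCH0: a nested
                          interrupt entry would clobber it, so MSR[RI]
                          must be 0 while it is in use */

After r1 is moved, the only interrupts that can still arrive (system
reset, machine check) run on their own stacks, so the changed r1 no
longer makes the state unrecoverable.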
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Although the 0x1500 interrupt only applies to bare metal, it is better
to just use the standard macro for scratch save.
Runtime code path remains unchanged (due to instruction patching).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Restore all SPRs and CR up-front, as these are longer-latency
instructions. Move the register restores around to maximise pairs of
adjacent loads (e.g., restore r0 next to r1).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Duplicate the hmi windup code for both cases, rather than putting a
special-case branch in the middle of it. Remove an unused label. This
helps with later code consolidation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Move the in_mce decrement earlier, before registers are restored (but
still after RI=0). This helps with later consolidation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
All supported 64s CPUs support the mtmsrd L=1 instruction, so the
sreset and mce handlers can be cleaned up to use it.
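For context, mtmsrd with L=1 updates only MSR[EE] and MSR[RI], and is
cheaper than a fully context-synchronizing MSR update. A minimal
sketch of the idiom:

    li      r9,0
    mtmsrd  r9,1    /* L=1: clear MSR[EE] and MSR[RI] only */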
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Move SPR reads ahead of writes. Real mode entry that is not a KVM
guest is rare these days, but bad practice propagates.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
syscall / hcall entry unnecessarily differs between KVM and non-KVM
builds. Move the SMT priority instruction to the same location
(after INTERRUPT_TO_KERNEL).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
No generated code change. Final vmlinux is changed only due to change
in bug table line numbers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Generally, macros that expand to instructions are indented by a tab,
and those that expand to no instructions have no indent. Fix the
obvious cases that go contrary to this style.
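A short illustration of the convention (real macro names; the excerpt
is illustrative, not verbatim):

    TRAMP_REAL_BEGIN(machine_check_fwnmi)  /* no instructions emitted: no indent */
            SET_SCRATCH0(r13)              /* expands to an instruction: indented */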
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
These are only called in one place each.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
After the previous cleanup, it becomes possible to consolidate some
common code outside the runtime alternate patching. Also remove
unused labels.
This results in some code change, but the runtime instruction
sequence is unchanged.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Many of these macros specify just 1-4 instructions and are used at
most a few times each, often just once. Remove this indirection.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
More cases of code insertion via macros that do not add a great
deal. All the additions have to be specified in the macro arguments,
so they can just as well go after the macro.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The aim is to reduce the amount of indirection it takes to get through
the exception handler macros, particularly where it provides little
code sharing.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Move the KVM trap HSRR bit into the KVM handler, where it can be
conditionally applied when the hsrr parameter is set.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Conditionally expand the skip case if it is specified.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Conditionally expand the soft-masking test if a mask is passed in.
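A minimal sketch of the technique (macro name hypothetical; gas's
.ifnb directive emits its body only when the argument is non-blank):

    .macro EXAMPLE_SOFTEN_TEST mask
    .ifnb \mask
    lbz     r10,PACAIRQSOFTMASK(r13)
    andi.   r10,r10,\mask
    bne     masked_interrupt
    .endif
    .endm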
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Rather than passing the soft-masking and KVM tests in via a macro
that is itself passed to another macro to expand, switch to using gas
macros and conditionally expand the soft-masking and KVM tests.
The system reset handler's idle test is open-coded, as it is a one-off.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The sreset handler KVM test theoretically should not depend on P7.
In practice KVM now only supports P7 and up, so this is not a real bug
fix, but the change is made now so that the quirk is not propagated
through the cleanup patches.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- Rename the macros to use _REAL and _VIRT suffixes, rather than no
suffix and a _RELON suffix.
- Move the macro definitions together in the file.
- Move the RELOCATABLE ifdef inside the _VIRT macro.
Further consolidation between variants does not buy much here.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Switch to a gas macro that conditionally expands the RI clearing
instruction.
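A minimal sketch of the shape of such a macro (name and parameter
hypothetical):

    .macro EXAMPLE_ENTRY clear_ri
    .if \clear_ri
    li      r10,0
    mtmsrd  r10,1   /* clears MSR[RI] (and EE) */
    .endif
    .endm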
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Replace all instances of this with gas macros that test the hsrr
parameter and use the appropriate register names / labels.
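A minimal sketch of the pattern (macro name hypothetical; with hsrr
passed as 0 or 1, gas selects the register at assembly time):

    .macro EXAMPLE_GET_SRR1 hsrr
    .if \hsrr
    mfspr   r12,SPRN_HSRR1
    .else
    mfspr   r12,SPRN_SRR1
    .endif
    .endm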
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Remove extraneous 2nd check for 0xea0 in SOFTEN_TEST]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Remove SOFTEN_VALUE_0x980; it has been unused since commit
dabe859ec6 ("powerpc: Give hypervisor decrementer interrupts their
own handler") (Sep 2012).
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
By convention, all lines should be separated by semicolons. The last
line should have neither a semicolon nor a line continuation.
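A hypothetical macro following the convention, with every line ending
in "; \" except the last, which has neither:

    #define EXAMPLE_SAVE_REGS(area)                 \
            std     r9,area+EX_R9(r13);             \
            std     r10,area+EX_R10(r13);           \
            std     r11,area+EX_R11(r13)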
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
These two functions have never been used anywhere in the kernel tree
since they were added to the kernel.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
None of these routines were ever used anywhere in the kernel tree
since they were added to the kernel.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
These have never been used anywhere in the kernel tree since they
were added to the kernel.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This function has never been used anywhere in the kernel tree since it
was added to the tree. We also now have proper PCIe P2P APIs in the core
kernel, and any new P2P support should be using those.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Commit ed49f7fd64 ("powerpc/xmon: Disable tracing when entering
xmon") added code to disable recording trace entries while in xmon. The
commit introduced a variable 'tracing_enabled' to record if tracing was
enabled on xmon entry, and used this to conditionally enable tracing
during exit from xmon.
However, we are not checking the value of the 'fromipi' variable in
xmon_core() when setting 'tracing_enabled'. Due to this, when secondary
CPUs enter xmon, they see tracing as already disabled and tracing is
not re-enabled on exit. Fix this.
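A simplified sketch of the fix (not the verbatim diff; tracing_is_on()
and tracing_off() are the kernel's ftrace toggles): only a primary
entry to xmon samples and disables the tracing state, while
secondaries arriving via IPI leave it alone:

    if (!fromipi) {
            tracing_enabled = tracing_is_on();
            tracing_off();
    }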
Fixes: ed49f7fd64 ("powerpc/xmon: Disable tracing when entering xmon")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The guard macro __PPC_ASM_H in the header ppc_asm.h
doesn't match the #ifndef macro _PPC_ASM_H. This patch
makes them match.
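For illustration, the intended pattern once the two agree (assuming
the #ifndef spelling is the one kept):

    #ifndef _PPC_ASM_H
    #define _PPC_ASM_H
    /* ... header body ... */
    #endif /* _PPC_ASM_H */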
Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The powerpc flush_cache_vmap() is defined as a macro that doesn't use
either of its arguments, so it generates a compilation warning:
lib/ioremap.c: In function 'ioremap_page_range':
lib/ioremap.c:203:16: warning: variable 'start' set but not used
[-Wunused-but-set-variable]
Fix it by making it an inline function.
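A sketch of the shape of the fix: an empty static inline consumes its
arguments, so callers no longer trigger -Wunused-but-set-variable:

    static inline void flush_cache_vmap(unsigned long start,
                                        unsigned long end) { }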
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The opening comment mark "/**" is reserved for kernel-doc comments, so
it will generate a warning with "make W=1".
arch/powerpc/kernel/eeh_cache.c:37: warning: cannot understand function
prototype: 'struct pci_io_addr_range
Since this is not a kernel-doc comment for the struct below, but
rather an overview of the source file eeh_cache.c, just use the
free-form kernel-doc comment syntax instead.
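For reference, the free-form kernel-doc syntax looks like this
(section title illustrative):

    /**
     * DOC: I/O address cache
     *
     * Overview of the address-range cache kept by eeh_cache.c ...
     */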
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Select HAVE_C_RECORDMCOUNT to use the C version of recordmcount
instead of the old Perl version. This should improve build time. The
old Perl version also seems to miss some calls to _mcount that the C
version finds.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
An impending change to enable HAVE_C_RECORDMCOUNT on powerpc leads to
warnings such as the following:
# modprobe kprobe_example
ftrace-powerpc: Not expected bl: opcode is 3c4c0001
WARNING: CPU: 0 PID: 227 at kernel/trace/ftrace.c:2001 ftrace_bug+0x90/0x318
Modules linked in:
CPU: 0 PID: 227 Comm: modprobe Not tainted 5.2.0-rc6-00678-g1c329100b942 #2
NIP: c000000000264318 LR: c00000000025d694 CTR: c000000000f5cd30
REGS: c000000001f2b7b0 TRAP: 0700 Not tainted (5.2.0-rc6-00678-g1c329100b942)
MSR: 900000010282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]> CR: 28228222 XER: 00000000
CFAR: c0000000002642fc IRQMASK: 0
<snip>
NIP [c000000000264318] ftrace_bug+0x90/0x318
LR [c00000000025d694] ftrace_process_locs+0x4f4/0x5e0
Call Trace:
[c000000001f2ba40] [0000000000000004] 0x4 (unreliable)
[c000000001f2bad0] [c00000000025d694] ftrace_process_locs+0x4f4/0x5e0
[c000000001f2bb90] [c00000000020ff10] load_module+0x25b0/0x30c0
[c000000001f2bd00] [c000000000210cb0] sys_finit_module+0xc0/0x130
[c000000001f2be20] [c00000000000bda4] system_call+0x5c/0x70
Instruction dump:
419e0018 2f83ffff 419e00bc 2f83ffea 409e00cc 4800001c 0fe00000 3c62ff96
39000001 39400000 386386d0 480000c4 <0fe00000> 3ce20003 39000001 3c62ff96
---[ end trace 4c438d5cebf78381 ]---
ftrace failed to modify
[<c0080000012a0008>] 0xc0080000012a0008
actual: 01:00:4c:3c
Initializing ftrace call sites
ftrace record flags: 2000000
(0)
expected tramp: c00000000006af4c
Looking at the relocation records in __mcount_loc shows a few spurious
entries:
RELOCATION RECORDS FOR [__mcount_loc]:
OFFSET TYPE VALUE
0000000000000000 R_PPC64_ADDR64 .text.unlikely+0x0000000000000008
0000000000000008 R_PPC64_ADDR64 .text.unlikely+0x0000000000000014
0000000000000010 R_PPC64_ADDR64 .text.unlikely+0x0000000000000060
0000000000000018 R_PPC64_ADDR64 .text.unlikely+0x00000000000000b4
0000000000000020 R_PPC64_ADDR64 .init.text+0x0000000000000008
0000000000000028 R_PPC64_ADDR64 .init.text+0x0000000000000014
The first entry in each section is incorrect. Looking at the
relocation records, the spurious entries correspond to the
R_PPC64_ENTRY records:
RELOCATION RECORDS FOR [.text.unlikely]:
OFFSET TYPE VALUE
0000000000000000 R_PPC64_REL64 .TOC.-0x0000000000000008
0000000000000008 R_PPC64_ENTRY *ABS*
0000000000000014 R_PPC64_REL24 _mcount
<snip>
The problem is that we are not validating the return value from
get_mcountsym() in sift_rel_mcount(). With this entry, mcountsym is 0,
but Elf_r_sym(relp) also ends up being 0. Fix this by ensuring
mcountsym is valid before processing the entry.
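A simplified sketch of the check in sift_rel_mcount() (surrounding
code elided; identifiers are those used in scripts/recordmcount.h):

    mcountsym = get_mcountsym(sym0, relp, str0);

    if (mcountsym && mcountsym == Elf_r_sym(relp)
        && !is_fake_mcount(relp)) {
            /* a valid call site: record it in __mcount_loc */
            ...
    }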
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Tested-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>