/*
powerpc: Rework lazy-interrupt handling
The current implementation of lazy interrupt handling has some
issues that this patch tries to address.
We don't do the various workarounds we need to do when re-enabling
interrupts in some cases such as when returning from an interrupt
and thus we may still lose or get delayed decrementer or doorbell
interrupts.
The current scheme also makes it much harder to handle the external
"edge" interrupts provided by some BookE processors when using the
EPR facility (External Proxy) and the Freescale Hypervisor.
Additionally, we tend to keep interrupts hard disabled in a number
of cases, such as decrementer interrupts, external interrupts, or
when a masked decrementer interrupt is pending. This is sub-optimal.
This is an attempt at fixing it all in one go by reworking the way
we do the lazy interrupt disabling from the ground up.
The base idea is to replace the "hard_enabled" field with a
"irq_happened" field in which we store a bit mask of what interrupt
occurred while soft-disabled.
When re-enabling, either via arch_local_irq_restore() or when returning
from an interrupt, we can now decide what to do by testing bits in that
field.
We then implement replaying of the missed interrupts either by
re-using the existing exception frame (in exception exit case) or via
the creation of a new one from an assembly trampoline (in the
arch_local_irq_enable case).
This removes the need to play with the decrementer to try to create
fake interrupts, among others.
In addition, this adds a few refinements:
- We no longer hard disable decrementer interrupts that occur
while soft-disabled. We now simply bump the decrementer back to max
(on BookS) or leave it stopped (on BookE) and continue with hard interrupts
enabled, which means that we'll potentially get better sample quality from
performance monitor interrupts.
- Timer, decrementer and doorbell interrupts now hard-enable
shortly after removing the source of the interrupt, which means
they no longer run entirely hard disabled. Again, this will improve
perf sample quality.
- On Book3E 64-bit, we now make the performance monitor interrupt
act as an NMI like Book3S (the necessary C code for that to work
appears to already be present in the FSL perf code, notably calling
nmi_enter instead of irq_enter). (This also fixes a bug where BookE
perfmon interrupts could clobber r14 ... oops)
- We could make "masked" decrementer interrupts act as NMIs when doing
timer-based perf sampling to improve the sample quality.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2:
- Add hard-enable to decrementer, timer and doorbells
- Fix CR clobber in masked irq handling on BookE
- Make embedded perf interrupt act as an NMI
- Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
to retrigger an interrupt without preventing hard-enable
v3:
- Fix or vs. ori bug on Book3E
- Fix enabling of interrupts for some exceptions on Book3E
v4:
- Fix resend of doorbells on return from interrupt on Book3E
v5:
- Rebased on top of my latest series, which involves some significant
rework of some aspects of the patch.
v6:
- 32-bit compile fix
- more compile fixes with various .config combos
- factor out the asm code to soft-disable interrupts
- remove the C wrapper around preempt_schedule_irq
v7:
- Fix a bug with hard irq state tracking on native power7
 * This file contains the power_save function for Power7 CPUs.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 */

#include <linux/threads.h>
#include <asm/processor.h>
#include <asm/page.h>
#include <asm/cputable.h>
#include <asm/thread_info.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/ppc-opcode.h>
#include <asm/hw_irq.h>
#include <asm/kvm_book3s_asm.h>
#include <asm/opal.h>

#undef DEBUG

/* Idle state entry routines */

#define	IDLE_STATE_ENTER_SEQ(IDLE_INST)				\
	/* Magic NAP/SLEEP/WINKLE mode enter sequence */	\
	std	r0,0(r1);					\
	ptesync;						\
	ld	r0,0(r1);					\
1:	cmp	cr0,r0,r0;					\
	bne	1b;						\
	IDLE_INST;						\
	b	.

	.text

/*
 * Pass requested state in r3:
 *	0 - nap
 *	1 - sleep
 *
 * To check IRQ_HAPPENED in r4
 *	0 - don't check
 *	1 - check
 */
_GLOBAL(power7_powersave_common)
	/* Use r3 to pass state nap/sleep/winkle */
	/* NAP is a state loss, we create a regs frame on the
	 * stack, fill it up with the state we care about and
	 * stick a pointer to it in PACAR1. We really only
	 * need to save PC, some CR bits and the NV GPRs,
	 * but for now an interrupt frame will do.
	 */
	mflr	r0
	std	r0,16(r1)
	stdu	r1,-INT_FRAME_SIZE(r1)
	std	r0,_LINK(r1)
	std	r0,_NIP(r1)

#ifndef CONFIG_SMP
	/* Make sure FPU, VSX etc... are flushed as we may lose
	 * state when going to nap mode
	 */
	bl	discard_lazy_cpu_state
#endif /* CONFIG_SMP */

	/* Hard disable interrupts */
	mfmsr	r9
	rldicl	r9,r9,48,1
	rotldi	r9,r9,16
	mtmsrd	r9,1			/* hard-disable interrupts */
	/* Check if something happened while soft-disabled */
	lbz	r0,PACAIRQHAPPENED(r13)
powerpc/powernv: Don't call generic code on offline cpus
On PowerNV platforms, when a CPU is offline, we put it into nap mode.
It's possible that the CPU wakes up from nap mode while it is still
offline due to a stray IPI. A misdirected device interrupt could also
potentially cause it to wake up. In that circumstance, we need to clear
the interrupt so that the CPU can go back to nap mode.
In the past the clearing of the interrupt was accomplished by briefly
enabling interrupts and allowing the normal interrupt handling code
(do_IRQ() etc.) to handle the interrupt. This has the problem that
this code calls irq_enter() and irq_exit(), which call functions such
as account_system_vtime() which use RCU internally. Use of RCU is not
permitted on offline CPUs and will trigger errors if RCU checking is
enabled.
To avoid calling into any generic code which might use RCU, we adopt
a different method of clearing interrupts on offline CPUs. Since we
are on the PowerNV platform, we know that the system interrupt
controller is a XICS being driven directly (i.e. not via hcalls) by
the kernel. Hence this adds a new icp_native_flush_interrupt()
function to the native-mode XICS driver and arranges to call that
when an offline CPU is woken from nap. This new function reads the
interrupt from the XICS. If it is an IPI, it clears the IPI; if it
is a device interrupt, it prints a warning and disables the source.
Then it does the end-of-interrupt processing for the interrupt.
The other thing that briefly enabling interrupts did was to check and
clear the irq_happened flag in this CPU's PACA. Therefore, after
flushing the interrupt from the XICS, we also clear all bits except
the PACA_IRQ_HARD_DIS (interrupts are hard disabled) bit from the
irq_happened flag. The PACA_IRQ_HARD_DIS flag is set by power7_nap()
and is left set to indicate that interrupts are hard disabled. This
means we then have to ignore that flag in power7_nap(), which is
reasonable since it doesn't indicate that any interrupt event needs
servicing.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
andi. r0,r0,~PACA_IRQ_HARD_DIS@l
	beq	1f
	cmpwi	cr0,r4,0
	beq	1f
	addi	r1,r1,INT_FRAME_SIZE
	ld	r0,16(r1)
	mtlr	r0
	blr

1:	/* We mark irqs hard disabled as this is the state we'll
	 * be in when returning and we need to tell arch_local_irq_restore()
	 * about it
	 */
	li	r0,PACA_IRQ_HARD_DIS
	stb	r0,PACAIRQHAPPENED(r13)

	/* We haven't lost state ... yet */
	li	r0,0
	stb	r0,PACA_NAPSTATELOST(r13)

	/* Continue saving state */
	SAVE_GPR(2, r1)
	SAVE_NVGPRS(r1)
	mfcr	r4
	std	r4,_CCR(r1)
	std	r9,_MSR(r1)
	std	r1,PACAR1(r13)
powerpc/powernv: Switch off MMU before entering nap/sleep/rvwinkle mode
Currently, when going idle, we set the flag indicating that we are in
nap mode (paca->kvm_hstate.hwthread_state) and then execute the nap
(or sleep or rvwinkle) instruction, all with the MMU on. This is bad
for two reasons: (a) the architecture specifies that those instructions
must be executed with the MMU off, and in fact with only the SF, HV, ME
and possibly RI bits set, and (b) this introduces a race, because as
soon as we set the flag, another thread can switch the MMU to a guest
context. If the race is lost, this thread will typically start looping
on relocation-on ISIs at 0xc...4400.
This fixes it by setting the MSR as required by the architecture before
setting the flag or executing the nap/sleep/rvwinkle instruction.
Cc: stable@vger.kernel.org
[ shreyas@linux.vnet.ibm.com: Edited to handle LE ]
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
	/*
	 * Go to real mode to do the nap, as required by the architecture.
	 * Also, we need to be in real mode before setting hwthread_state,
	 * because as soon as we do that, another thread can switch
	 * the MMU context to the guest.
	 */
	LOAD_REG_IMMEDIATE(r5, MSR_IDLE)
	li	r6, MSR_RI
	andc	r6, r9, r6
	LOAD_REG_ADDR(r7, power7_enter_nap_mode)
	mtmsrd	r6, 1		/* clear RI before setting SRR0/1 */
	mtspr	SPRN_SRR0, r7
	mtspr	SPRN_SRR1, r5
	rfid

	.globl power7_enter_nap_mode
power7_enter_nap_mode:
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
	/* Tell KVM we're napping */
	li	r4,KVM_HWTHREAD_IN_NAP
	stb	r4,HSTATE_HWTHREAD_STATE(r13)
#endif
	cmpwi	cr0,r3,1
	beq	2f
	IDLE_STATE_ENTER_SEQ(PPC_NAP)
	/* No return */
2:	IDLE_STATE_ENTER_SEQ(PPC_SLEEP)
	/* No return */

_GLOBAL(power7_idle)
	/* Now check if user or arch enabled NAP mode */
	LOAD_REG_ADDRBASE(r3,powersave_nap)
	lwz	r4,ADDROFF(powersave_nap)(r3)
	cmpwi	0,r4,0
	beqlr
	li	r3, 1
	/* fall through */

_GLOBAL(power7_nap)
	mr	r4,r3
	li	r3,0
	b	power7_powersave_common
	/* No return */

_GLOBAL(power7_sleep)
	li	r3,1
	li	r4,1
	b	power7_powersave_common
	/* No return */
/*
 * Make opal call in realmode. This is a generic function to be called
 * from realmode from reset vector. It handles endianness.
 *
 * r13 - paca pointer
 * r1 - stack pointer
 * r3 - opal token
 */
opal_call_realmode:
	mflr	r12
	std	r12,_LINK(r1)
	ld	r2,PACATOC(r13)
	/* Set opal return address */
	LOAD_REG_ADDR(r0,return_from_opal_call)
	mtlr	r0
	/* Handle endianness */
	li	r0,MSR_LE
	mfmsr	r12
	andc	r12,r12,r0
	mtspr	SPRN_HSRR1,r12
	mr	r0,r3			/* Move opal token to r0 */
	LOAD_REG_ADDR(r11,opal)
	ld	r12,8(r11)
	ld	r2,0(r11)
	mtspr	SPRN_HSRR0,r12
	hrfid

return_from_opal_call:
	FIXUP_ENDIAN
	ld	r0,_LINK(r1)
	mtlr	r0
	blr
#define CHECK_HMI_INTERRUPT						\
	mfspr	r0,SPRN_SRR1;						\
BEGIN_FTR_SECTION_NESTED(66);						\
	rlwinm	r0,r0,45-31,0xf;  /* extract wake reason field (P8) */	\
FTR_SECTION_ELSE_NESTED(66);						\
	rlwinm	r0,r0,45-31,0xe;  /* P7 wake reason field is 3 bits */	\
ALT_FTR_SECTION_END_NESTED_IFSET(CPU_FTR_ARCH_207S, 66);		\
	cmpwi	r0,0xa;			/* Hypervisor maintenance ? */	\
	bne	20f;							\
	/* Invoke opal call to handle hmi */				\
	ld	r2,PACATOC(r13);					\
	ld	r1,PACAR1(r13);						\
	std	r3,ORIG_GPR3(r1);	/* Save original r3 */		\
	li	r3,OPAL_HANDLE_HMI;	/* Pass opal token argument*/	\
	bl	opal_call_realmode;					\
	ld	r3,ORIG_GPR3(r1);	/* Restore original r3 */	\
20:	nop;
_GLOBAL(power7_wakeup_tb_loss)
	ld	r2,PACATOC(r13);
	ld	r1,PACAR1(r13)

BEGIN_FTR_SECTION
	CHECK_HMI_INTERRUPT
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)

	/* Time base re-sync */
	li	r3,OPAL_RESYNC_TIMEBASE
	bl	opal_call_realmode;

	/* TODO: Check r3 for failure */

	REST_NVGPRS(r1)
	REST_GPR(2, r1)
	ld	r3,_CCR(r1)
	ld	r4,_MSR(r1)
	ld	r5,_NIP(r1)
	addi	r1,r1,INT_FRAME_SIZE
	mtcr	r3
	mfspr	r3,SPRN_SRR1		/* Return SRR1 */
	mtspr	SPRN_SRR1,r4
	mtspr	SPRN_SRR0,r5
	rfid
powerpc/powernv: Return to cpu offline loop when finished in KVM guest
When a secondary hardware thread has finished running a KVM guest, we
currently put that thread into nap mode using a nap instruction in
the KVM code. This changes the code so that instead of doing a nap
instruction directly, we instead cause the call to power7_nap() that
put the thread into nap mode to return. The reason for doing this is
to avoid having the KVM code having to know what low-power mode to
put the thread into.
In the case of a secondary thread used to run a KVM guest, the thread
will be offline from the point of view of the host kernel, and the
relevant power7_nap() call is the one in pnv_smp_cpu_disable().
In this case we don't want to clear pending IPIs in the offline loop
in that function, since that might cause us to miss the wakeup for
the next time the thread needs to run a guest. To tell whether or
not to clear the interrupt, we use the SRR1 value returned from
power7_nap(), and check if it indicates an external interrupt. We
arrange that the return from power7_nap() when we have finished running
a guest returns 0, so pending interrupts don't get flushed in that
case.
Note that it is important that a secondary thread that has finished
executing in the guest, or that didn't have a guest to run, should
not return to power7_nap's caller while the kvm_hstate.hwthread_req
flag in the PACA is non-zero, because the return from power7_nap
will reenable the MMU, and the MMU might still be in guest context.
In this situation we spin at low priority in real mode waiting for
hwthread_req to become zero.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
	/*
	 * R3 here contains the value that will be returned to the caller
	 * of power7_nap.
	 */
_GLOBAL(power7_wakeup_loss)
	ld	r1,PACAR1(r13)
BEGIN_FTR_SECTION
	CHECK_HMI_INTERRUPT
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
	REST_NVGPRS(r1)
	REST_GPR(2, r1)
/*
 * powerpc/powernv: Return to cpu offline loop when finished in KVM guest
 *
 * When a secondary hardware thread has finished running a KVM guest, we
 * currently put that thread into nap mode using a nap instruction in
 * the KVM code. This changes the code so that instead of doing a nap
 * instruction directly, we instead cause the call to power7_nap() that
 * put the thread into nap mode to return. The reason for doing this is
 * to avoid having the KVM code having to know what low-power mode to
 * put the thread into.
 *
 * In the case of a secondary thread used to run a KVM guest, the thread
 * will be offline from the point of view of the host kernel, and the
 * relevant power7_nap() call is the one in pnv_smp_cpu_disable().
 * In this case we don't want to clear pending IPIs in the offline loop
 * in that function, since that might cause us to miss the wakeup for
 * the next time the thread needs to run a guest. To tell whether or
 * not to clear the interrupt, we use the SRR1 value returned from
 * power7_nap(), and check if it indicates an external interrupt. We
 * arrange that the return from power7_nap() when we have finished running
 * a guest returns 0, so pending interrupts don't get flushed in that
 * case.
 *
 * Note that it is important that a secondary thread that has finished
 * executing in the guest, or that didn't have a guest to run, should
 * not return to power7_nap's caller while the kvm_hstate.hwthread_req
 * flag in the PACA is non-zero, because the return from power7_nap
 * will reenable the MMU, and the MMU might still be in guest context.
 * In this situation we spin at low priority in real mode waiting for
 * hwthread_req to become zero.
 *
 * Signed-off-by: Paul Mackerras <paulus@samba.org>
 * Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
 * 2014-12-03 03:48:40 +00:00
 */
	ld	r6,_CCR(r1)
	ld	r4,_MSR(r1)
	ld	r5,_NIP(r1)
	addi	r1,r1,INT_FRAME_SIZE
	mtcr	r6
	mtspr	SPRN_SRR1,r4
	mtspr	SPRN_SRR0,r5
	rfid

/*
 * R3 here contains the value that will be returned to the caller
 * of power7_nap.
 */
_GLOBAL(power7_wakeup_noloss)
	lbz	r0,PACA_NAPSTATELOST(r13)
	cmpwi	r0,0
	bne	power7_wakeup_loss
BEGIN_FTR_SECTION
	CHECK_HMI_INTERRUPT
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
	ld	r1,PACAR1(r13)
	ld	r4,_MSR(r1)
	ld	r5,_NIP(r1)
	addi	r1,r1,INT_FRAME_SIZE
	mtspr	SPRN_SRR1,r4
	mtspr	SPRN_SRR0,r5
	rfid