Patch queue for ppc - 2014-08-01
Highlights in this release include:
 
   - BookE: Rework instruction fetch so it is no longer racy
   - BookE HV: Fix ONE_REG accessors for some in-hardware registers
   - Book3S: Good number of LE host fixes, enable HV on LE
   - Book3S: Some misc bug fixes
   - Book3S HV: Add in-guest debug support
   - Book3S HV: Preload cache lines on context switch
   - Remove 440 support
 
 Alexander Graf (31):
       KVM: PPC: Book3s PR: Disable AIL mode with OPAL
       KVM: PPC: Book3s HV: Fix tlbie compile error
       KVM: PPC: Book3S PR: Handle hyp doorbell exits
       KVM: PPC: Book3S PR: Fix ABIv2 on LE
       KVM: PPC: Book3S PR: Fix sparse endian checks
       PPC: Add asm helpers for BE 32bit load/store
       KVM: PPC: Book3S HV: Make HTAB code LE host aware
       KVM: PPC: Book3S HV: Access guest VPA in BE
       KVM: PPC: Book3S HV: Access host lppaca and shadow slb in BE
       KVM: PPC: Book3S HV: Access XICS in BE
       KVM: PPC: Book3S HV: Fix ABIv2 on LE
       KVM: PPC: Book3S HV: Enable for little endian hosts
       KVM: PPC: Book3S: Move vcore definition to end of kvm_arch struct
       KVM: PPC: Deflect page write faults properly in kvmppc_st
       KVM: PPC: Book3S: Stop PTE lookup on write errors
       KVM: PPC: Book3S: Add hack for split real mode
       KVM: PPC: Book3S: Make magic page properly 4k mappable
       KVM: PPC: Remove 440 support
       KVM: Rename and add argument to check_extension
       KVM: Allow KVM_CHECK_EXTENSION on the vm fd
       KVM: PPC: Book3S: Provide different CAPs based on HV or PR mode
       KVM: PPC: Implement kvmppc_xlate for all targets
       KVM: PPC: Move kvmppc_ld/st to common code
       KVM: PPC: Remove kvmppc_bad_hva()
       KVM: PPC: Use kvm_read_guest in kvmppc_ld
       KVM: PPC: Handle magic page in kvmppc_ld/st
       KVM: PPC: Separate loadstore emulation from priv emulation
       KVM: PPC: Expose helper functions for data/inst faults
       KVM: PPC: Remove DCR handling
       KVM: PPC: HV: Remove generic instruction emulation
       KVM: PPC: PR: Handle FSCR feature deselects
 
 Alexey Kardashevskiy (1):
       KVM: PPC: Book3S: Fix LPCR one_reg interface
 
 Aneesh Kumar K.V (4):
       KVM: PPC: BOOK3S: PR: Fix PURR and SPURR emulation
       KVM: PPC: BOOK3S: PR: Emulate virtual timebase register
       KVM: PPC: BOOK3S: PR: Emulate instruction counter
       KVM: PPC: BOOK3S: HV: Update compute_tlbie_rb to handle 16MB base page
 
 Anton Blanchard (2):
       KVM: PPC: Book3S HV: Fix ABIv2 indirect branch issue
       KVM: PPC: Assembly functions exported to modules need _GLOBAL_TOC()
 
 Bharat Bhushan (10):
       kvm: ppc: bookehv: Added wrapper macros for shadow registers
       kvm: ppc: booke: Use the shared struct helpers of SRR0 and SRR1
       kvm: ppc: booke: Use the shared struct helpers of SPRN_DEAR
       kvm: ppc: booke: Add shared struct helpers of SPRN_ESR
       kvm: ppc: booke: Use the shared struct helpers for SPRN_SPRG0-7
       kvm: ppc: Add SPRN_EPR get helper function
       kvm: ppc: bookehv: Save restore SPRN_SPRG9 on guest entry exit
       KVM: PPC: Booke-hv: Add one reg interface for SPRG9
       KVM: PPC: Remove comment saying SPRG1 is used for vcpu pointer
       KVM: PPC: BOOKEHV: rename e500hv_spr to bookehv_spr
 
 Michael Neuling (1):
       KVM: PPC: Book3S HV: Add H_SET_MODE hcall handling
 
 Mihai Caraman (8):
       KVM: PPC: e500mc: Enhance tlb invalidation condition on vcpu schedule
       KVM: PPC: e500: Fix default tlb for victim hint
       KVM: PPC: e500: Emulate power management control SPR
       KVM: PPC: e500mc: Revert "add load inst fixup"
       KVM: PPC: Book3e: Add TLBSEL/TSIZE defines for MAS0/1
       KVM: PPC: Book3s: Remove kvmppc_read_inst() function
       KVM: PPC: Allow kvmppc_get_last_inst() to fail
       KVM: PPC: Bookehv: Get vcpu's last instruction for emulation
 
 Paul Mackerras (4):
       KVM: PPC: Book3S: Controls for in-kernel sPAPR hypercall handling
       KVM: PPC: Book3S: Allow only implemented hcalls to be enabled or disabled
       KVM: PPC: Book3S PR: Take SRCU read lock around RTAS kvm_read_guest() call
       KVM: PPC: Book3S: Make kvmppc_ld return a more accurate error indication
 
 Stewart Smith (2):
       Split out struct kvmppc_vcore creation to separate function
       Use the POWER8 Micro Partition Prefetch Engine in KVM HV on POWER8
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.19 (GNU/Linux)
 
 iQIcBAABAgAGBQJT21skAAoJECszeR4D/txgeFEP/AzJopN7s//W33CfyBqURHXp
 XALCyAw+S67gtcaTZbxomcG1xuT8Lj9WEw28iz3rCtAnJwIxsY63xrI1nXMzTaI2
 p1rC0ai5Qy+nlEbd6L78spZy/Nzh8DFYGWx78iUSO1mYD8xywJwtoiBA539pwp8j
 8N+mgn61Hwhv31bKtsZlmzXymVr/jbTp5LVuxsBLJwD2lgT49g+4uBnX2cG/iXkg
 Rzbh7LxoNNXrSPI8sYmTWu/81aeXteeX70ja6DHuV5dWLNTuAXJrh5EUfeAZqBrV
 aYcLWUYmIyB87txNmt6ZGVar2p3jr2Xhb9mKx+EN4dbehblanLc1PUqlHd0q3dKc
 Nt60ByqpZn+qDAK86dShSZLEe+GT3lovvE76CqVXD4Er+OUEkc9JoxhN1cof/Gb0
 o6uwZ2isXHRdGoZx5vb4s3UTOlwZGtoL/CyY/HD/ujYDSURkCGbxLj3kkecSY8ut
 QdDAWsC15BwsHtKLr5Zwjp2w+0eGq2QJgfvO0zqWFiz9k33SCBCUpwluFeqh27Hi
 aR5Wir3j+MIw9G8XlYlDJWYfi0h/SZ4G7hh7jSu26NBNBzQsDa8ow/cLzdMhdUwH
 OYSaeqVk5wiRb9to1uq1NQWPA0uRAx3BSjjvr9MCGRqmvn+FV5nj637YWUT+53Hi
 aSvg/U2npghLPPG2cihu
 =JuLr
 -----END PGP SIGNATURE-----
Merge tag 'signed-kvm-ppc-next' of git://github.com/agraf/linux-2.6 into kvm
Conflicts:
	Documentation/virtual/kvm/api.txt
			
			
commit cc568ead3c
@@ -17,8 +17,6 @@ firmware-assisted-dump.txt
	- Documentation on the firmware assisted dump mechanism "fadump".
hvcs.txt
	- IBM "Hypervisor Virtual Console Server" Installation Guide
kvm_440.txt
	- Various notes on the implementation of KVM for PowerPC 440.
mpc52xx.txt
	- Linux 2.6.x on MPC52xx family
pmu-ebb.txt

@@ -1,41 +0,0 @@
Hollis Blanchard <hollisb@us.ibm.com>
15 Apr 2008

Various notes on the implementation of KVM for PowerPC 440:

To enforce isolation, host userspace, guest kernel, and guest userspace all
run at user privilege level. Only the host kernel runs in supervisor mode.
Executing privileged instructions in the guest traps into KVM (in the host
kernel), where we decode and emulate them. Through this technique, unmodified
440 Linux kernels can be run (slowly) as guests. Future performance work will
focus on reducing the overhead and frequency of these traps.

The usual code flow is started from userspace invoking an "run" ioctl, which
causes KVM to switch into guest context. We use IVPR to hijack the host
interrupt vectors while running the guest, which allows us to direct all
interrupts to kvmppc_handle_interrupt(). At this point, we could either
- handle the interrupt completely (e.g. emulate "mtspr SPRG0"), or
- let the host interrupt handler run (e.g. when the decrementer fires), or
- return to host userspace (e.g. when the guest performs device MMIO)

Address spaces: We take advantage of the fact that Linux doesn't use the AS=1
address space (in host or guest), which gives us virtual address space to use
for guest mappings. While the guest is running, the host kernel remains mapped
in AS=0, but the guest can only use AS=1 mappings.

TLB entries: The TLB entries covering the host linear mapping remain
present while running the guest. This reduces the overhead of lightweight
exits, which are handled by KVM running in the host kernel. We keep three
copies of the TLB:
 - guest TLB: contents of the TLB as the guest sees it
 - shadow TLB: the TLB that is actually in hardware while guest is running
 - host TLB: to restore TLB state when context switching guest -> host
When a TLB miss occurs because a mapping was not present in the shadow TLB,
but was present in the guest TLB, KVM handles the fault without invoking the
guest. Large guest pages are backed by multiple 4KB shadow pages through this
mechanism.

IO: MMIO and DCR accesses are emulated by userspace. We use virtio for network
and block IO, so those drivers must be enabled in the guest. It's possible
that some qemu device emulation (e.g. e1000 or rtl8139) may also work with
little effort.

@@ -148,9 +148,9 @@ of banks, as set via the KVM_X86_SETUP_MCE ioctl.

4.4 KVM_CHECK_EXTENSION

Capability: basic
Capability: basic, KVM_CAP_CHECK_EXTENSION_VM for vm ioctl
Architectures: all
Type: system ioctl
Type: system ioctl, vm ioctl
Parameters: extension identifier (KVM_CAP_*)
Returns: 0 if unsupported; 1 (or some other positive integer) if supported

@@ -160,6 +160,9 @@ receives an integer that describes the extension availability.
Generally 0 means no and 1 means yes, but some extensions may report
additional information in the integer return value.

Based on their initialization different VMs may have different capabilities.
It is thus encouraged to use the vm ioctl to query for capabilities (available
with KVM_CAP_CHECK_EXTENSION_VM on the vm fd)

4.5 KVM_GET_VCPU_MMAP_SIZE

@@ -1892,7 +1895,8 @@ registers, find a list below:
  PPC   | KVM_REG_PPC_PID               | 64
  PPC   | KVM_REG_PPC_ACOP              | 64
  PPC   | KVM_REG_PPC_VRSAVE            | 32
  PPC   | KVM_REG_PPC_LPCR              | 64
  PPC   | KVM_REG_PPC_LPCR              | 32
  PPC   | KVM_REG_PPC_LPCR_64           | 64
  PPC   | KVM_REG_PPC_PPR               | 64
  PPC   | KVM_REG_PPC_ARCH_COMPAT       | 32
  PPC   | KVM_REG_PPC_DABRX             | 32
@@ -2677,8 +2681,8 @@ The 'data' member contains, in its first 'len' bytes, the value as it would
appear if the VCPU performed a load or store of the appropriate width directly
to the byte array.

NOTE: For KVM_EXIT_IO, KVM_EXIT_MMIO, KVM_EXIT_OSI, KVM_EXIT_DCR,
      KVM_EXIT_PAPR and KVM_EXIT_EPR the corresponding
NOTE: For KVM_EXIT_IO, KVM_EXIT_MMIO, KVM_EXIT_OSI, KVM_EXIT_PAPR and
      KVM_EXIT_EPR the corresponding
operations are complete (and guest state is consistent) only after userspace
has re-entered the kernel with KVM_RUN.  The kernel side will first finish
incomplete operations and then check for pending signals.  Userspace
@@ -2749,7 +2753,7 @@ Principles of Operation Book in the Chapter for Dynamic Address Translation
			__u8  is_write;
		} dcr;

powerpc specific.
Deprecated - was used for 440 KVM.

		/* KVM_EXIT_OSI */
		struct {
@@ -2931,8 +2935,8 @@ The fields in each entry are defined as follows:
         this function/index combination


6. Capabilities that can be enabled
-----------------------------------
6. Capabilities that can be enabled on vCPUs
--------------------------------------------

There are certain capabilities that change the behavior of the virtual CPU or
the virtual machine when enabled. To enable them, please see section 4.37.
@@ -3091,3 +3095,43 @@ Parameters: none

This capability enables the in-kernel irqchip for s390. Please refer to
"4.24 KVM_CREATE_IRQCHIP" for details.

7. Capabilities that can be enabled on VMs
------------------------------------------

There are certain capabilities that change the behavior of the virtual
machine when enabled. To enable them, please see section 4.37. Below
you can find a list of capabilities and what their effect on the VM
is when enabling them.

The following information is provided along with the description:

  Architectures: which instruction set architectures provide this ioctl.
      x86 includes both i386 and x86_64.

  Parameters: what parameters are accepted by the capability.

  Returns: the return value.  General error numbers (EBADF, ENOMEM, EINVAL)
      are not detailed, but errors with specific meanings are.


7.1 KVM_CAP_PPC_ENABLE_HCALL

Architectures: ppc
Parameters: args[0] is the sPAPR hcall number
	    args[1] is 0 to disable, 1 to enable in-kernel handling

This capability controls whether individual sPAPR hypercalls (hcalls)
get handled by the kernel or not.  Enabling or disabling in-kernel
handling of an hcall is effective across the VM.  On creation, an
initial set of hcalls are enabled for in-kernel handling, which
consists of those hcalls for which in-kernel handlers were implemented
before this capability was implemented.  If disabled, the kernel will
not to attempt to handle the hcall, but will always exit to userspace
to handle it.  Note that it may not make sense to enable some and
disable others of a group of related hcalls, but KVM does not prevent
userspace from doing that.

If the hcall number specified is not one that has an in-kernel
implementation, the KVM_ENABLE_CAP ioctl will fail with an EINVAL
error.

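The two api.txt additions above work as a pair in practice: userspace first
asks the vm fd whether a capability is present, then toggles per-hcall
handling with KVM_ENABLE_CAP. A minimal userspace sketch (assuming a PPC
host, headers new enough to carry KVM_CAP_PPC_ENABLE_HCALL, and H_RTAS
(0xf000) as the example hcall; error handling trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* KVM_CHECK_EXTENSION on the vm fd, as introduced above; gate it on
	 * KVM_CAP_CHECK_EXTENSION_VM being reported by the system fd. */
	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_CHECK_EXTENSION_VM) > 0 &&
	    ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_PPC_ENABLE_HCALL) > 0) {
		struct kvm_enable_cap cap;

		memset(&cap, 0, sizeof(cap));
		cap.cap = KVM_CAP_PPC_ENABLE_HCALL;
		cap.args[0] = 0xf000;	/* H_RTAS */
		cap.args[1] = 0;	/* 0: always exit to userspace */
		if (ioctl(vm, KVM_ENABLE_CAP, &cap))
			perror("KVM_ENABLE_CAP");
	}
	return 0;
}
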
@@ -174,7 +174,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
	}
}

int kvm_dev_ioctl_check_extension(long ext)
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
	int r;
	switch (ext) {

@@ -190,7 +190,7 @@ void kvm_arch_check_processor_compat(void *rtn)
	*(int *)rtn = 0;
}

int kvm_dev_ioctl_check_extension(long ext)
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{

	int r;

@@ -886,7 +886,7 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
	return VM_FAULT_SIGBUS;
}

int kvm_dev_ioctl_check_extension(long ext)
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
	int r;

@@ -202,9 +202,7 @@ config PPC_EARLY_DEBUG_BEAT

config PPC_EARLY_DEBUG_44x
	bool "Early serial debugging for IBM/AMCC 44x CPUs"
	# PPC_EARLY_DEBUG on 440 leaves AS=1 mappings above the TLB high water
	# mark, which doesn't work with current 440 KVM.
	depends on 44x && !KVM
	depends on 44x
	help
	  Select this to enable early debugging for IBM 44x chips via the
	  inbuilt serial port.  If you enable this, ensure you set

@@ -127,4 +127,3 @@ CONFIG_CRYPTO_PCBC=y
# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_HW is not set
CONFIG_VIRTUALIZATION=y
CONFIG_KVM_440=y

@@ -34,10 +34,14 @@
#define PPC_MIN_STKFRM	112

#ifdef __BIG_ENDIAN__
#define LWZX_BE	stringify_in_c(lwzx)
#define LDX_BE	stringify_in_c(ldx)
#define STWX_BE	stringify_in_c(stwx)
#define STDX_BE	stringify_in_c(stdx)
#else
#define LWZX_BE	stringify_in_c(lwbrx)
#define LDX_BE	stringify_in_c(ldbrx)
#define STWX_BE	stringify_in_c(stwbrx)
#define STDX_BE	stringify_in_c(stdbrx)
#endif

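A hedged illustration of how C code might use these helpers; read_be32() is
an invented name for this sketch, not a function from the series, and it
assumes the helpers live in asm-compat.h as the hunk context suggests:

#include <linux/types.h>
#include <asm/asm-compat.h>

/* Load a big-endian 32-bit value regardless of host endianness:
 * LWZX_BE expands to "lwzx" on BE hosts and "lwbrx" on LE hosts. */
static inline u32 read_be32(const __be32 *p)
{
	u32 val;

	asm volatile(LWZX_BE " %0,0,%1" : "=r" (val) : "r" (p));
	return val;
}
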
@@ -3,6 +3,7 @@

#ifdef __KERNEL__

#include <asm/reg.h>

/* bytes per L1 cache line */
#if defined(CONFIG_8xx) || defined(CONFIG_403GCX)
@@ -39,6 +40,12 @@ struct ppc64_caches {
};

extern struct ppc64_caches ppc64_caches;

static inline void logmpp(u64 x)
{
	asm volatile(PPC_LOGMPP(R1) : : "r" (x));
}

#endif /* __powerpc64__ && ! __ASSEMBLY__ */

#if defined(__ASSEMBLY__)

@@ -279,6 +279,12 @@
#define H_GET_24X7_DATA		0xF07C
#define H_GET_PERF_COUNTER_INFO	0xF080

/* Values for 2nd argument to H_SET_MODE */
#define H_SET_MODE_RESOURCE_SET_CIABR		1
#define H_SET_MODE_RESOURCE_SET_DAWR		2
#define H_SET_MODE_RESOURCE_ADDR_TRANS_MODE	3
#define H_SET_MODE_RESOURCE_LE			4

#ifndef __ASSEMBLY__

/**

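Michael Neuling's H_SET_MODE patch dispatches on these resource values. A
trimmed, hedged sketch of that dispatch shape (the real handler in
book3s_hv.c does more validation and also implements the DAWR case; details
here are abbreviated):

static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
			     unsigned long resource, unsigned long value1,
			     unsigned long value2)
{
	switch (resource) {
	case H_SET_MODE_RESOURCE_SET_CIABR:
		if (value2)
			return H_P4;	/* value2 is reserved here */
		/* Guests must not breakpoint the hypervisor */
		if ((value1 & CIABR_PRIV) == CIABR_PRIV_HYPER)
			return H_P3;
		vcpu->arch.ciabr = value1;
		return H_SUCCESS;
	case H_SET_MODE_RESOURCE_SET_DAWR:
		/* analogous validation, then set DAWR/DAWRX */
	default:
		return H_TODO;
	}
}
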
@@ -1,67 +0,0 @@
/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, version 2, as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
 *
 * Copyright IBM Corp. 2008
 *
 * Authors: Hollis Blanchard <hollisb@us.ibm.com>
 */

#ifndef __ASM_44X_H__
#define __ASM_44X_H__

#include <linux/kvm_host.h>

#define PPC44x_TLB_SIZE 64

/* If the guest is expecting it, this can be as large as we like; we'd just
 * need to find some way of advertising it. */
#define KVM44x_GUEST_TLB_SIZE 64

struct kvmppc_44x_tlbe {
	u32 tid; /* Only the low 8 bits are used. */
	u32 word0;
	u32 word1;
	u32 word2;
};

struct kvmppc_44x_shadow_ref {
	struct page *page;
	u16 gtlb_index;
	u8 writeable;
	u8 tid;
};

struct kvmppc_vcpu_44x {
	/* Unmodified copy of the guest's TLB. */
	struct kvmppc_44x_tlbe guest_tlb[KVM44x_GUEST_TLB_SIZE];

	/* References to guest pages in the hardware TLB. */
	struct kvmppc_44x_shadow_ref shadow_refs[PPC44x_TLB_SIZE];

	/* State of the shadow TLB at guest context switch time. */
	struct kvmppc_44x_tlbe shadow_tlb[PPC44x_TLB_SIZE];
	u8 shadow_tlb_mod[PPC44x_TLB_SIZE];

	struct kvm_vcpu vcpu;
};

static inline struct kvmppc_vcpu_44x *to_44x(struct kvm_vcpu *vcpu)
{
	return container_of(vcpu, struct kvmppc_vcpu_44x, vcpu);
}

void kvmppc_44x_tlb_put(struct kvm_vcpu *vcpu);
void kvmppc_44x_tlb_load(struct kvm_vcpu *vcpu);

#endif /* __ASM_44X_H__ */

@@ -33,7 +33,6 @@
/* IVPR must be 64KiB-aligned. */
#define VCPU_SIZE_ORDER 4
#define VCPU_SIZE_LOG   (VCPU_SIZE_ORDER + 12)
#define VCPU_TLB_PGSZ   PPC44x_TLB_64K
#define VCPU_SIZE_BYTES (1<<VCPU_SIZE_LOG)

#define BOOKE_INTERRUPT_CRITICAL 0
@@ -131,6 +130,7 @@
#define BOOK3S_HFLAG_NATIVE_PS			0x8
#define BOOK3S_HFLAG_MULTI_PGSIZE		0x10
#define BOOK3S_HFLAG_NEW_TLBIE			0x20
#define BOOK3S_HFLAG_SPLIT_HACK			0x40

#define RESUME_FLAG_NV          (1<<0)  /* Reload guest nonvolatile state? */
#define RESUME_FLAG_HOST        (1<<1)  /* Resume host? */

|  | ||||
| @ -83,8 +83,6 @@ struct kvmppc_vcpu_book3s { | ||||
| 	u64 sdr1; | ||||
| 	u64 hior; | ||||
| 	u64 msr_mask; | ||||
| 	u64 purr_offset; | ||||
| 	u64 spurr_offset; | ||||
| #ifdef CONFIG_PPC_BOOK3S_32 | ||||
| 	u32 vsid_pool[VSID_POOL_SIZE]; | ||||
| 	u32 vsid_next; | ||||
| @ -148,9 +146,10 @@ extern void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache * | ||||
| extern int kvmppc_mmu_hpte_sysinit(void); | ||||
| extern void kvmppc_mmu_hpte_sysexit(void); | ||||
| extern int kvmppc_mmu_hv_init(void); | ||||
| extern int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hc); | ||||
| 
 | ||||
| /* XXX remove this export when load_last_inst() is generic */ | ||||
| extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data); | ||||
| extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data); | ||||
| extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec); | ||||
| extern void kvmppc_book3s_dequeue_irqprio(struct kvm_vcpu *vcpu, | ||||
| 					  unsigned int vec); | ||||
| @ -159,13 +158,13 @@ extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, | ||||
| 			   bool upper, u32 val); | ||||
| extern void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr); | ||||
| extern int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu); | ||||
| extern pfn_t kvmppc_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, bool writing, | ||||
| extern pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa, bool writing, | ||||
| 			bool *writable); | ||||
| extern void kvmppc_add_revmap_chain(struct kvm *kvm, struct revmap_entry *rev, | ||||
| 			unsigned long *rmap, long pte_index, int realmode); | ||||
| extern void kvmppc_invalidate_hpte(struct kvm *kvm, unsigned long *hptep, | ||||
| extern void kvmppc_invalidate_hpte(struct kvm *kvm, __be64 *hptep, | ||||
| 			unsigned long pte_index); | ||||
| void kvmppc_clear_ref_hpte(struct kvm *kvm, unsigned long *hptep, | ||||
| void kvmppc_clear_ref_hpte(struct kvm *kvm, __be64 *hptep, | ||||
| 			unsigned long pte_index); | ||||
| extern void *kvmppc_pin_guest_page(struct kvm *kvm, unsigned long addr, | ||||
| 			unsigned long *nb_ret); | ||||
| @ -183,12 +182,16 @@ extern long kvmppc_hv_get_dirty_log(struct kvm *kvm, | ||||
| 			struct kvm_memory_slot *memslot, unsigned long *map); | ||||
| extern void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr, | ||||
| 			unsigned long mask); | ||||
| extern void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr); | ||||
| 
 | ||||
| extern void kvmppc_entry_trampoline(void); | ||||
| extern void kvmppc_hv_entry_trampoline(void); | ||||
| extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst); | ||||
| extern ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst); | ||||
| extern int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd); | ||||
| extern void kvmppc_pr_init_default_hcalls(struct kvm *kvm); | ||||
| extern int kvmppc_hcall_impl_pr(unsigned long cmd); | ||||
| extern int kvmppc_hcall_impl_hv_realmode(unsigned long cmd); | ||||
| extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu, | ||||
| 				 struct kvm_vcpu *vcpu); | ||||
| extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu, | ||||
| @ -274,32 +277,6 @@ static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu) | ||||
| 	return (kvmppc_get_msr(vcpu) & MSR_LE) != (MSR_KERNEL & MSR_LE); | ||||
| } | ||||
| 
 | ||||
| static inline u32 kvmppc_get_last_inst_internal(struct kvm_vcpu *vcpu, ulong pc) | ||||
| { | ||||
| 	/* Load the instruction manually if it failed to do so in the
 | ||||
| 	 * exit path */ | ||||
| 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) | ||||
| 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false); | ||||
| 
 | ||||
| 	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) : | ||||
| 		vcpu->arch.last_inst; | ||||
| } | ||||
| 
 | ||||
| static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| 	return kvmppc_get_last_inst_internal(vcpu, kvmppc_get_pc(vcpu)); | ||||
| } | ||||
| 
 | ||||
| /*
 | ||||
|  * Like kvmppc_get_last_inst(), but for fetching a sc instruction. | ||||
|  * Because the sc instruction sets SRR0 to point to the following | ||||
|  * instruction, we have to fetch from pc - 4. | ||||
|  */ | ||||
| static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| 	return kvmppc_get_last_inst_internal(vcpu, kvmppc_get_pc(vcpu) - 4); | ||||
| } | ||||
| 
 | ||||
| static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| 	return vcpu->arch.fault_dar; | ||||
| @ -310,6 +287,13 @@ static inline bool is_kvmppc_resume_guest(int r) | ||||
| 	return (r == RESUME_GUEST || r == RESUME_GUEST_NV); | ||||
| } | ||||
| 
 | ||||
| static inline bool is_kvmppc_hv_enabled(struct kvm *kvm); | ||||
| static inline bool kvmppc_supports_magic_page(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| 	/* Only PR KVM supports the magic page */ | ||||
| 	return !is_kvmppc_hv_enabled(vcpu->kvm); | ||||
| } | ||||
| 
 | ||||
| /* Magic register values loaded into r3 and r4 before the 'sc' assembly
 | ||||
|  * instruction for the OSI hypercalls */ | ||||
| #define OSI_SC_MAGIC_R3			0x113724FA | ||||
| @ -322,4 +306,7 @@ static inline bool is_kvmppc_resume_guest(int r) | ||||
| /* LPIDs we support with this build -- runtime limit may be lower */ | ||||
| #define KVMPPC_NR_LPIDS			(LPID_RSVD + 1) | ||||
| 
 | ||||
| #define SPLIT_HACK_MASK			0xff000000 | ||||
| #define SPLIT_HACK_OFFS			0xfb000000 | ||||
| 
 | ||||
| #endif /* __ASM_KVM_BOOK3S_H__ */ | ||||
|  | ||||
| @ -59,20 +59,29 @@ extern unsigned long kvm_rma_pages; | ||||
| /* These bits are reserved in the guest view of the HPTE */ | ||||
| #define HPTE_GR_RESERVED	HPTE_GR_MODIFIED | ||||
| 
 | ||||
| static inline long try_lock_hpte(unsigned long *hpte, unsigned long bits) | ||||
| static inline long try_lock_hpte(__be64 *hpte, unsigned long bits) | ||||
| { | ||||
| 	unsigned long tmp, old; | ||||
| 	__be64 be_lockbit, be_bits; | ||||
| 
 | ||||
| 	/*
 | ||||
| 	 * We load/store in native endian, but the HTAB is in big endian. If | ||||
| 	 * we byte swap all data we apply on the PTE we're implicitly correct | ||||
| 	 * again. | ||||
| 	 */ | ||||
| 	be_lockbit = cpu_to_be64(HPTE_V_HVLOCK); | ||||
| 	be_bits = cpu_to_be64(bits); | ||||
| 
 | ||||
| 	asm volatile("	ldarx	%0,0,%2\n" | ||||
| 		     "	and.	%1,%0,%3\n" | ||||
| 		     "	bne	2f\n" | ||||
| 		     "	ori	%0,%0,%4\n" | ||||
| 		     "	or	%0,%0,%4\n" | ||||
| 		     "  stdcx.	%0,0,%2\n" | ||||
| 		     "	beq+	2f\n" | ||||
| 		     "	mr	%1,%3\n" | ||||
| 		     "2:	isync" | ||||
| 		     : "=&r" (tmp), "=&r" (old) | ||||
| 		     : "r" (hpte), "r" (bits), "i" (HPTE_V_HVLOCK) | ||||
| 		     : "r" (hpte), "r" (be_bits), "r" (be_lockbit) | ||||
| 		     : "cc", "memory"); | ||||
| 	return old == 0; | ||||
| } | ||||
| @ -110,16 +119,12 @@ static inline int __hpte_actual_psize(unsigned int lp, int psize) | ||||
| static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r, | ||||
| 					     unsigned long pte_index) | ||||
| { | ||||
| 	int b_psize, a_psize; | ||||
| 	int b_psize = MMU_PAGE_4K, a_psize = MMU_PAGE_4K; | ||||
| 	unsigned int penc; | ||||
| 	unsigned long rb = 0, va_low, sllp; | ||||
| 	unsigned int lp = (r >> LP_SHIFT) & ((1 << LP_BITS) - 1); | ||||
| 
 | ||||
| 	if (!(v & HPTE_V_LARGE)) { | ||||
| 		/* both base and actual psize is 4k */ | ||||
| 		b_psize = MMU_PAGE_4K; | ||||
| 		a_psize = MMU_PAGE_4K; | ||||
| 	} else { | ||||
| 	if (v & HPTE_V_LARGE) { | ||||
| 		for (b_psize = 0; b_psize < MMU_PAGE_COUNT; b_psize++) { | ||||
| 
 | ||||
| 			/* valid entries have a shift value */ | ||||
| @ -142,6 +147,8 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r, | ||||
| 	 */ | ||||
| 	/* This covers 14..54 bits of va*/ | ||||
| 	rb = (v & ~0x7fUL) << 16;		/* AVA field */ | ||||
| 
 | ||||
| 	rb |= v >> (62 - 8);			/*  B field */ | ||||
| 	/*
 | ||||
| 	 * AVA in v had cleared lower 23 bits. We need to derive | ||||
| 	 * that from pteg index | ||||
| @ -172,10 +179,10 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r, | ||||
| 	{ | ||||
| 		int aval_shift; | ||||
| 		/*
 | ||||
| 		 * remaining 7bits of AVA/LP fields | ||||
| 		 * remaining bits of AVA/LP fields | ||||
| 		 * Also contain the rr bits of LP | ||||
| 		 */ | ||||
| 		rb |= (va_low & 0x7f) << 16; | ||||
| 		rb |= (va_low << mmu_psize_defs[b_psize].shift) & 0x7ff000; | ||||
| 		/*
 | ||||
| 		 * Now clear not needed LP bits based on actual psize | ||||
| 		 */ | ||||
|  | ||||
@@ -69,11 +69,6 @@ static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
	return false;
}

static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.last_inst;
}

static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
{
	vcpu->arch.ctr = val;
@@ -108,4 +103,14 @@ static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.fault_dear;
}

static inline bool kvmppc_supports_magic_page(struct kvm_vcpu *vcpu)
{
	/* Magic page is only supported on e500v2 */
#ifdef CONFIG_KVM_E500V2
	return true;
#else
	return false;
#endif
}
#endif /* __ASM_KVM_BOOKE_H__ */

@@ -34,6 +34,7 @@
#include <asm/processor.h>
#include <asm/page.h>
#include <asm/cacheflush.h>
#include <asm/hvcall.h>

#define KVM_MAX_VCPUS		NR_CPUS
#define KVM_MAX_VCORES		NR_CPUS
@@ -48,7 +49,6 @@
#define KVM_NR_IRQCHIPS          1
#define KVM_IRQCHIP_NUM_PINS     256

#if !defined(CONFIG_KVM_440)
#include <linux/mmu_notifier.h>

#define KVM_ARCH_WANT_MMU_NOTIFIER
@@ -61,8 +61,6 @@ extern int kvm_age_hva(struct kvm *kvm, unsigned long hva);
extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
extern void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);

#endif

#define HPTEG_CACHE_NUM			(1 << 15)
#define HPTEG_HASH_BITS_PTE		13
#define HPTEG_HASH_BITS_PTE_LONG	12
@@ -96,7 +94,6 @@ struct kvm_vm_stat {
struct kvm_vcpu_stat {
	u32 sum_exits;
	u32 mmio_exits;
	u32 dcr_exits;
	u32 signal_exits;
	u32 light_exits;
	/* Account for special types of light exits: */
@@ -113,22 +110,21 @@ struct kvm_vcpu_stat {
	u32 halt_wakeup;
	u32 dbell_exits;
	u32 gdbell_exits;
	u32 ld;
	u32 st;
#ifdef CONFIG_PPC_BOOK3S
	u32 pf_storage;
	u32 pf_instruc;
	u32 sp_storage;
	u32 sp_instruc;
	u32 queue_intr;
	u32 ld;
	u32 ld_slow;
	u32 st;
	u32 st_slow;
#endif
};

enum kvm_exit_types {
	MMIO_EXITS,
	DCR_EXITS,
	SIGNAL_EXITS,
	ITLB_REAL_MISS_EXITS,
	ITLB_VIRT_MISS_EXITS,
@@ -254,7 +250,6 @@ struct kvm_arch {
	atomic_t hpte_mod_interest;
	spinlock_t slot_phys_lock;
	cpumask_t need_tlb_flush;
	struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
	int hpt_cma_alloc;
#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
@@ -263,6 +258,7 @@ struct kvm_arch {
#ifdef CONFIG_PPC_BOOK3S_64
	struct list_head spapr_tce_tables;
	struct list_head rtas_tokens;
	DECLARE_BITMAP(enabled_hcalls, MAX_HCALL_OPCODE/4 + 1);
#endif
#ifdef CONFIG_KVM_MPIC
	struct openpic *mpic;
@@ -271,6 +267,10 @@ struct kvm_arch {
	struct kvmppc_xics *xics;
#endif
	struct kvmppc_ops *kvm_ops;
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
	/* This array can grow quite large, keep it at the end */
	struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
#endif
};

/*
@@ -305,6 +305,8 @@ struct kvmppc_vcore {
	u32 arch_compat;
	ulong pcr;
	ulong dpdes;		/* doorbell state (POWER8) */
	void *mpp_buffer; /* Micro Partition Prefetch buffer */
	bool mpp_buffer_is_valid;
};

#define VCORE_ENTRY_COUNT(vc)	((vc)->entry_exit_count & 0xff)
@@ -503,8 +505,10 @@ struct kvm_vcpu_arch {
#ifdef CONFIG_BOOKE
	u32 decar;
#endif
	u32 tbl;
	u32 tbu;
	/* Time base value when we entered the guest */
	u64 entry_tb;
	u64 entry_vtb;
	u64 entry_ic;
	u32 tcr;
	ulong tsr; /* we need to perform set/clr_bits() which requires ulong */
	u32 ivor[64];
@@ -580,6 +584,8 @@ struct kvm_vcpu_arch {
	u32 mmucfg;
	u32 eptcfg;
	u32 epr;
	u64 sprg9;
	u32 pwrmgtcr0;
	u32 crit_save;
	/* guest debug registers*/
	struct debug_reg dbg_reg;
@@ -593,8 +599,6 @@ struct kvm_vcpu_arch {
	u8 io_gpr; /* GPR used as IO source/target */
	u8 mmio_is_bigendian;
	u8 mmio_sign_extend;
	u8 dcr_needed;
	u8 dcr_is_write;
	u8 osi_needed;
	u8 osi_enabled;
	u8 papr_enabled;

@@ -41,12 +41,26 @@
enum emulation_result {
	EMULATE_DONE,         /* no further processing */
	EMULATE_DO_MMIO,      /* kvm_run filled with MMIO request */
	EMULATE_DO_DCR,       /* kvm_run filled with DCR request */
	EMULATE_FAIL,         /* can't emulate this instruction */
	EMULATE_AGAIN,        /* something went wrong. go again */
	EMULATE_EXIT_USER,    /* emulation requires exit to user-space */
};

enum instruction_type {
	INST_GENERIC,
	INST_SC,		/* system call */
};

enum xlate_instdata {
	XLATE_INST,		/* translate instruction address */
	XLATE_DATA		/* translate data address */
};

enum xlate_readwrite {
	XLATE_READ,		/* check for read permissions */
	XLATE_WRITE		/* check for write permissions */
};

extern int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
extern int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
extern void kvmppc_handler_highmem(void);
@@ -62,8 +76,16 @@ extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
			       u64 val, unsigned int bytes,
			       int is_default_endian);

extern int kvmppc_load_last_inst(struct kvm_vcpu *vcpu,
				 enum instruction_type type, u32 *inst);

extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
		     bool data);
extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
		     bool data);
extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                      struct kvm_vcpu *vcpu);
extern int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu);
extern int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu);
extern void kvmppc_emulate_dec(struct kvm_vcpu *vcpu);
extern u32 kvmppc_get_dec(struct kvm_vcpu *vcpu, u64 tb);
@@ -86,6 +108,9 @@ extern gpa_t kvmppc_mmu_xlate(struct kvm_vcpu *vcpu, unsigned int gtlb_index,
                              gva_t eaddr);
extern void kvmppc_mmu_dtlb_miss(struct kvm_vcpu *vcpu);
extern void kvmppc_mmu_itlb_miss(struct kvm_vcpu *vcpu);
extern int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr,
			enum xlate_instdata xlid, enum xlate_readwrite xlrw,
			struct kvmppc_pte *pte);

extern struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm,
                                                unsigned int id);
@@ -106,6 +131,14 @@ extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
                                       struct kvm_interrupt *irq);
extern void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_dtlb_miss(struct kvm_vcpu *vcpu, ulong dear_flags,
					ulong esr_flags);
extern void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu,
					   ulong dear_flags,
					   ulong esr_flags);
extern void kvmppc_core_queue_itlb_miss(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu,
					   ulong esr_flags);
extern void kvmppc_core_flush_tlb(struct kvm_vcpu *vcpu);
extern int kvmppc_core_check_requests(struct kvm_vcpu *vcpu);

@@ -228,12 +261,35 @@ struct kvmppc_ops {
	void (*fast_vcpu_kick)(struct kvm_vcpu *vcpu);
	long (*arch_vm_ioctl)(struct file *filp, unsigned int ioctl,
			      unsigned long arg);

	int (*hcall_implemented)(unsigned long hcall);
};

extern struct kvmppc_ops *kvmppc_hv_ops;
extern struct kvmppc_ops *kvmppc_pr_ops;

static inline int kvmppc_get_last_inst(struct kvm_vcpu *vcpu,
					enum instruction_type type, u32 *inst)
{
	int ret = EMULATE_DONE;
	u32 fetched_inst;

	/* Load the instruction manually if it failed to do so in the
	 * exit path */
	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
		ret = kvmppc_load_last_inst(vcpu, type, &vcpu->arch.last_inst);

	/*  Write fetch_failed unswapped if the fetch failed */
	if (ret == EMULATE_DONE)
		fetched_inst = kvmppc_need_byteswap(vcpu) ?
				swab32(vcpu->arch.last_inst) :
				vcpu->arch.last_inst;
	else
		fetched_inst = vcpu->arch.last_inst;

	*inst = fetched_inst;
	return ret;
}

static inline bool is_kvmppc_hv_enabled(struct kvm *kvm)
{
	return kvm->arch.kvm_ops == kvmppc_hv_ops;
@@ -392,6 +448,17 @@ static inline int kvmppc_xics_hcall(struct kvm_vcpu *vcpu, u32 cmd)
	{ return 0; }
#endif

static inline unsigned long kvmppc_get_epr(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_KVM_BOOKE_HV
	return mfspr(SPRN_GEPR);
#elif defined(CONFIG_BOOKE)
	return vcpu->arch.epr;
#else
	return 0;
#endif
}

static inline void kvmppc_set_epr(struct kvm_vcpu *vcpu, u32 epr)
{
#ifdef CONFIG_KVM_BOOKE_HV
@@ -472,8 +539,20 @@ static inline bool kvmppc_shared_big_endian(struct kvm_vcpu *vcpu)
#endif
}

#define SPRNG_WRAPPER_GET(reg, bookehv_spr)				\
static inline ulong kvmppc_get_##reg(struct kvm_vcpu *vcpu)		\
{									\
	return mfspr(bookehv_spr);					\
}									\

#define SPRNG_WRAPPER_SET(reg, bookehv_spr)				\
static inline void kvmppc_set_##reg(struct kvm_vcpu *vcpu, ulong val)	\
{									\
	mtspr(bookehv_spr, val);						\
}									\

#define SHARED_WRAPPER_GET(reg, size)					\
static inline u##size kvmppc_get_##reg(struct kvm_vcpu *vcpu)	\
static inline u##size kvmppc_get_##reg(struct kvm_vcpu *vcpu)		\
{									\
	if (kvmppc_shared_big_endian(vcpu))				\
	       return be##size##_to_cpu(vcpu->arch.shared->reg);	\
@@ -494,14 +573,31 @@ static inline void kvmppc_set_##reg(struct kvm_vcpu *vcpu, u##size val)	\
	SHARED_WRAPPER_GET(reg, size)					\
	SHARED_WRAPPER_SET(reg, size)					\

#define SPRNG_WRAPPER(reg, bookehv_spr)					\
	SPRNG_WRAPPER_GET(reg, bookehv_spr)				\
	SPRNG_WRAPPER_SET(reg, bookehv_spr)				\

#ifdef CONFIG_KVM_BOOKE_HV

#define SHARED_SPRNG_WRAPPER(reg, size, bookehv_spr)			\
	SPRNG_WRAPPER(reg, bookehv_spr)					\

#else

#define SHARED_SPRNG_WRAPPER(reg, size, bookehv_spr)			\
	SHARED_WRAPPER(reg, size)					\

#endif

SHARED_WRAPPER(critical, 64)
SHARED_WRAPPER(sprg0, 64)
SHARED_WRAPPER(sprg1, 64)
SHARED_WRAPPER(sprg2, 64)
SHARED_WRAPPER(sprg3, 64)
SHARED_WRAPPER(srr0, 64)
SHARED_WRAPPER(srr1, 64)
SHARED_WRAPPER(dar, 64)
SHARED_SPRNG_WRAPPER(sprg0, 64, SPRN_GSPRG0)
SHARED_SPRNG_WRAPPER(sprg1, 64, SPRN_GSPRG1)
SHARED_SPRNG_WRAPPER(sprg2, 64, SPRN_GSPRG2)
SHARED_SPRNG_WRAPPER(sprg3, 64, SPRN_GSPRG3)
SHARED_SPRNG_WRAPPER(srr0, 64, SPRN_GSRR0)
SHARED_SPRNG_WRAPPER(srr1, 64, SPRN_GSRR1)
SHARED_SPRNG_WRAPPER(dar, 64, SPRN_GDEAR)
SHARED_SPRNG_WRAPPER(esr, 64, SPRN_GESR)
SHARED_WRAPPER_GET(msr, 64)
static inline void kvmppc_set_msr_fast(struct kvm_vcpu *vcpu, u64 val)
{

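For readers untangling the macro indirection above: with CONFIG_KVM_BOOKE_HV
set, SHARED_SPRNG_WRAPPER(srr0, 64, SPRN_GSRR0) goes through SPRNG_WRAPPER
and expands to direct SPR accessors, so booke-hv reads and writes the live
guest register instead of the shared page:

static inline ulong kvmppc_get_srr0(struct kvm_vcpu *vcpu)
{
	return mfspr(SPRN_GSRR0);
}

static inline void kvmppc_set_srr0(struct kvm_vcpu *vcpu, ulong val)
{
	mtspr(SPRN_GSRR0, val);
}

Without CONFIG_KVM_BOOKE_HV the same invocation falls back to the
byte-swapping SHARED_WRAPPER accessors over vcpu->arch.shared->srr0.
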
@@ -40,7 +40,11 @@

/* MAS registers bit definitions */

#define MAS0_TLBSEL(x)		(((x) << 28) & 0x30000000)
#define MAS0_TLBSEL_MASK	0x30000000
#define MAS0_TLBSEL_SHIFT	28
#define MAS0_TLBSEL(x)		(((x) << MAS0_TLBSEL_SHIFT) & MAS0_TLBSEL_MASK)
#define MAS0_GET_TLBSEL(mas0)	(((mas0) & MAS0_TLBSEL_MASK) >> \
			MAS0_TLBSEL_SHIFT)
#define MAS0_ESEL_MASK		0x0FFF0000
#define MAS0_ESEL_SHIFT		16
#define MAS0_ESEL(x)		(((x) << MAS0_ESEL_SHIFT) & MAS0_ESEL_MASK)
@@ -58,6 +62,7 @@
#define MAS1_TSIZE_MASK		0x00000f80
#define MAS1_TSIZE_SHIFT	7
#define MAS1_TSIZE(x)		(((x) << MAS1_TSIZE_SHIFT) & MAS1_TSIZE_MASK)
#define MAS1_GET_TSIZE(mas1)	(((mas1) & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT)

#define MAS2_EPN		(~0xFFFUL)
#define MAS2_X0			0x00000040
@@ -86,6 +91,7 @@
#define MAS3_SPSIZE		0x0000003e
#define MAS3_SPSIZE_SHIFT	1

#define MAS4_TLBSEL_MASK	MAS0_TLBSEL_MASK
#define MAS4_TLBSELD(x) 	MAS0_TLBSEL(x)
#define MAS4_INDD		0x00008000	/* Default IND */
#define MAS4_TSIZED(x)		MAS1_TSIZE(x)

@@ -139,6 +139,7 @@
#define PPC_INST_ISEL			0x7c00001e
#define PPC_INST_ISEL_MASK		0xfc00003e
#define PPC_INST_LDARX			0x7c0000a8
#define PPC_INST_LOGMPP			0x7c0007e4
#define PPC_INST_LSWI			0x7c0004aa
#define PPC_INST_LSWX			0x7c00042a
#define PPC_INST_LWARX			0x7c000028
@@ -275,6 +276,20 @@
#define __PPC_EH(eh)	0
#endif

/* POWER8 Micro Partition Prefetch (MPP) parameters */
/* Address mask is common for LOGMPP instruction and MPPR SPR */
#define PPC_MPPE_ADDRESS_MASK 0xffffffffc000

/* Bits 60 and 61 of MPP SPR should be set to one of the following */
/* Aborting the fetch is indeed setting 00 in the table size bits */
#define PPC_MPPR_FETCH_ABORT (0x0ULL << 60)
#define PPC_MPPR_FETCH_WHOLE_TABLE (0x2ULL << 60)

/* Bits 54 and 55 of register for LOGMPP instruction should be set to: */
#define PPC_LOGMPP_LOG_L2 (0x02ULL << 54)
#define PPC_LOGMPP_LOG_L2L3 (0x01ULL << 54)
#define PPC_LOGMPP_LOG_ABORT (0x03ULL << 54)

/* Deal with instructions that older assemblers aren't aware of */
#define	PPC_DCBAL(a, b)		stringify_in_c(.long PPC_INST_DCBAL | \
					__PPC_RA(a) | __PPC_RB(b))
@@ -283,6 +298,8 @@
#define PPC_LDARX(t, a, b, eh)	stringify_in_c(.long PPC_INST_LDARX | \
					___PPC_RT(t) | ___PPC_RA(a) | \
					___PPC_RB(b) | __PPC_EH(eh))
#define PPC_LOGMPP(b)		stringify_in_c(.long PPC_INST_LOGMPP | \
					__PPC_RB(b))
#define PPC_LWARX(t, a, b, eh)	stringify_in_c(.long PPC_INST_LWARX | \
					___PPC_RT(t) | ___PPC_RA(a) | \
					___PPC_RB(b) | __PPC_EH(eh))

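Stewart Smith's MPP patch drives these constants together with SPRN_MPPR and
the logmpp() helper added to cache.h above. A hedged sketch of the
save/restore pairing, modeled on the vcore handling in that patch
(vc->mpp_buffer is the struct kvmppc_vcore field added in this series;
buffer alignment handling is omitted):

static void start_saving_l2_cache(struct kvmppc_vcore *vc)
{
	phys_addr_t phy_addr = virt_to_phys(vc->mpp_buffer);

	/* Abort any prefetch still in flight, then log the current set
	 * of L2 cache lines into the buffer for a later prefetch. */
	mtspr(SPRN_MPPR, phy_addr | PPC_MPPR_FETCH_ABORT);
	logmpp(phy_addr | PPC_LOGMPP_LOG_L2);
}

static void start_restoring_l2_cache(struct kvmppc_vcore *vc)
{
	phys_addr_t phy_addr = virt_to_phys(vc->mpp_buffer);

	/* Kick off an asynchronous prefetch of the logged lines. */
	mtspr(SPRN_MPPR, phy_addr | PPC_MPPR_FETCH_WHOLE_TABLE);
}
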
@@ -225,6 +225,7 @@
#define   CTRL_TE	0x00c00000	/* thread enable */
#define   CTRL_RUNLATCH	0x1
#define SPRN_DAWR	0xB4
#define SPRN_MPPR	0xB8	/* Micro Partition Prefetch Register */
#define SPRN_RPR	0xBA	/* Relative Priority Register */
#define SPRN_CIABR	0xBB
#define   CIABR_PRIV		0x3
@@ -944,9 +945,6 @@
 *      readable variant for reads, which can avoid a fault
 *      with KVM type virtualization.
 *
 *      (*) Under KVM, the host SPRG1 is used to point to
 *      the current VCPU data structure
 *
 * 32-bit 8xx:
 *	- SPRG0 scratch for exception vectors
 *	- SPRG1 scratch for exception vectors
@@ -1203,6 +1201,15 @@
				     : "r" ((unsigned long)(v)) \
				     : "memory")

static inline unsigned long mfvtb (void)
{
#ifdef CONFIG_PPC_BOOK3S_64
	if (cpu_has_feature(CPU_FTR_ARCH_207S))
		return mfspr(SPRN_VTB);
#endif
	return 0;
}

#ifdef __powerpc64__
#if defined(CONFIG_PPC_CELL) || defined(CONFIG_PPC_FSL_BOOK3E)
#define mftb()		({unsigned long rval;				\

@@ -102,6 +102,15 @@ static inline u64 get_rtc(void)
	return (u64)hi * 1000000000 + lo;
}

static inline u64 get_vtb(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
	if (cpu_has_feature(CPU_FTR_ARCH_207S))
		return mfvtb();
#endif
	return 0;
}

#ifdef CONFIG_PPC64
static inline u64 get_tb(void)
{

@@ -548,6 +548,7 @@ struct kvm_get_htab_header {

#define KVM_REG_PPC_VRSAVE	(KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb4)
#define KVM_REG_PPC_LPCR	(KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb5)
#define KVM_REG_PPC_LPCR_64	(KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb5)
#define KVM_REG_PPC_PPR		(KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb6)

/* Architecture compatibility level */
@@ -555,6 +556,7 @@ struct kvm_get_htab_header {

#define KVM_REG_PPC_DABRX	(KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb8)
#define KVM_REG_PPC_WORT	(KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb9)
#define KVM_REG_PPC_SPRG9	(KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xba)

/* Transactional Memory checkpointed state:
 * This is all GPRs, all VSX regs and a subset of SPRs

@@ -493,6 +493,7 @@ int main(void)
	DEFINE(KVM_HOST_SDR1, offsetof(struct kvm, arch.host_sdr1));
	DEFINE(KVM_TLBIE_LOCK, offsetof(struct kvm, arch.tlbie_lock));
	DEFINE(KVM_NEED_FLUSH, offsetof(struct kvm, arch.need_tlb_flush.bits));
	DEFINE(KVM_ENABLED_HCALLS, offsetof(struct kvm, arch.enabled_hcalls));
	DEFINE(KVM_LPCR, offsetof(struct kvm, arch.lpcr));
	DEFINE(KVM_RMOR, offsetof(struct kvm, arch.rmor));
	DEFINE(KVM_VRMA_SLB_V, offsetof(struct kvm, arch.vrma_slb_v));
@@ -667,6 +668,7 @@ int main(void)
	DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr));
	DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr));
	DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc));
	DEFINE(VCPU_SPRG9, offsetof(struct kvm_vcpu, arch.sprg9));
	DEFINE(VCPU_LAST_INST, offsetof(struct kvm_vcpu, arch.last_inst));
	DEFINE(VCPU_FAULT_DEAR, offsetof(struct kvm_vcpu, arch.fault_dear));
	DEFINE(VCPU_FAULT_ESR, offsetof(struct kvm_vcpu, arch.fault_esr));

@@ -1,237 +0,0 @@
/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, version 2, as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
 *
 * Copyright IBM Corp. 2008
 *
 * Authors: Hollis Blanchard <hollisb@us.ibm.com>
 */

#include <linux/kvm_host.h>
#include <linux/slab.h>
#include <linux/err.h>
#include <linux/export.h>
#include <linux/module.h>
#include <linux/miscdevice.h>

#include <asm/reg.h>
#include <asm/cputable.h>
#include <asm/tlbflush.h>
#include <asm/kvm_44x.h>
#include <asm/kvm_ppc.h>

#include "44x_tlb.h"
#include "booke.h"

static void kvmppc_core_vcpu_load_44x(struct kvm_vcpu *vcpu, int cpu)
{
	kvmppc_booke_vcpu_load(vcpu, cpu);
	kvmppc_44x_tlb_load(vcpu);
}

static void kvmppc_core_vcpu_put_44x(struct kvm_vcpu *vcpu)
{
	kvmppc_44x_tlb_put(vcpu);
	kvmppc_booke_vcpu_put(vcpu);
}

int kvmppc_core_check_processor_compat(void)
{
	int r;

	if (strncmp(cur_cpu_spec->platform, "ppc440", 6) == 0)
		r = 0;
	else
		r = -ENOTSUPP;

	return r;
}

int kvmppc_core_vcpu_setup(struct kvm_vcpu *vcpu)
{
	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
	struct kvmppc_44x_tlbe *tlbe = &vcpu_44x->guest_tlb[0];
	int i;

	tlbe->tid = 0;
	tlbe->word0 = PPC44x_TLB_16M | PPC44x_TLB_VALID;
	tlbe->word1 = 0;
	tlbe->word2 = PPC44x_TLB_SX | PPC44x_TLB_SW | PPC44x_TLB_SR;

	tlbe++;
	tlbe->tid = 0;
	tlbe->word0 = 0xef600000 | PPC44x_TLB_4K | PPC44x_TLB_VALID;
	tlbe->word1 = 0xef600000;
	tlbe->word2 = PPC44x_TLB_SX | PPC44x_TLB_SW | PPC44x_TLB_SR
	              | PPC44x_TLB_I | PPC44x_TLB_G;

	/* Since the guest can directly access the timebase, it must know the
	 * real timebase frequency. Accordingly, it must see the state of
	 * CCR1[TCS]. */
	/* XXX CCR1 doesn't exist on all 440 SoCs. */
	vcpu->arch.ccr1 = mfspr(SPRN_CCR1);

	for (i = 0; i < ARRAY_SIZE(vcpu_44x->shadow_refs); i++)
		vcpu_44x->shadow_refs[i].gtlb_index = -1;

	vcpu->arch.cpu_type = KVM_CPU_440;
	vcpu->arch.pvr = mfspr(SPRN_PVR);

	return 0;
}

/* 'linear_address' is actually an encoding of AS|PID|EADDR . */
int kvmppc_core_vcpu_translate(struct kvm_vcpu *vcpu,
                               struct kvm_translation *tr)
{
	int index;
	gva_t eaddr;
	u8 pid;
	u8 as;

	eaddr = tr->linear_address;
	pid = (tr->linear_address >> 32) & 0xff;
	as = (tr->linear_address >> 40) & 0x1;

	index = kvmppc_44x_tlb_index(vcpu, eaddr, pid, as);
	if (index == -1) {
		tr->valid = 0;
		return 0;
	}

	tr->physical_address = kvmppc_mmu_xlate(vcpu, index, eaddr);
	/* XXX what does "writeable" and "usermode" even mean? */
	tr->valid = 1;

	return 0;
}

static int kvmppc_core_get_sregs_44x(struct kvm_vcpu *vcpu,
				      struct kvm_sregs *sregs)
{
	return kvmppc_get_sregs_ivor(vcpu, sregs);
}

static int kvmppc_core_set_sregs_44x(struct kvm_vcpu *vcpu,
				     struct kvm_sregs *sregs)
{
	return kvmppc_set_sregs_ivor(vcpu, sregs);
}

static int kvmppc_get_one_reg_44x(struct kvm_vcpu *vcpu, u64 id,
				  union kvmppc_one_reg *val)
{
	return -EINVAL;
}

static int kvmppc_set_one_reg_44x(struct kvm_vcpu *vcpu, u64 id,
				  union kvmppc_one_reg *val)
{
	return -EINVAL;
}

static struct kvm_vcpu *kvmppc_core_vcpu_create_44x(struct kvm *kvm,
						    unsigned int id)
{
	struct kvmppc_vcpu_44x *vcpu_44x;
	struct kvm_vcpu *vcpu;
	int err;

	vcpu_44x = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
	if (!vcpu_44x) {
		err = -ENOMEM;
		goto out;
	}

	vcpu = &vcpu_44x->vcpu;
	err = kvm_vcpu_init(vcpu, kvm, id);
	if (err)
		goto free_vcpu;

	vcpu->arch.shared = (void*)__get_free_page(GFP_KERNEL|__GFP_ZERO);
	if (!vcpu->arch.shared)
		goto uninit_vcpu;

	return vcpu;

uninit_vcpu:
	kvm_vcpu_uninit(vcpu);
free_vcpu:
	kmem_cache_free(kvm_vcpu_cache, vcpu_44x);
out:
	return ERR_PTR(err);
}

static void kvmppc_core_vcpu_free_44x(struct kvm_vcpu *vcpu)
{
	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);

	free_page((unsigned long)vcpu->arch.shared);
	kvm_vcpu_uninit(vcpu);
	kmem_cache_free(kvm_vcpu_cache, vcpu_44x);
}

static int kvmppc_core_init_vm_44x(struct kvm *kvm)
{
	return 0;
}

static void kvmppc_core_destroy_vm_44x(struct kvm *kvm)
{
}

static struct kvmppc_ops kvm_ops_44x = {
	.get_sregs = kvmppc_core_get_sregs_44x,
	.set_sregs = kvmppc_core_set_sregs_44x,
	.get_one_reg = kvmppc_get_one_reg_44x,
	.set_one_reg = kvmppc_set_one_reg_44x,
	.vcpu_load   = kvmppc_core_vcpu_load_44x,
	.vcpu_put    = kvmppc_core_vcpu_put_44x,
	.vcpu_create = kvmppc_core_vcpu_create_44x,
	.vcpu_free   = kvmppc_core_vcpu_free_44x,
	.mmu_destroy  = kvmppc_mmu_destroy_44x,
	.init_vm = kvmppc_core_init_vm_44x,
	.destroy_vm = kvmppc_core_destroy_vm_44x,
	.emulate_op = kvmppc_core_emulate_op_44x,
	.emulate_mtspr = kvmppc_core_emulate_mtspr_44x,
	.emulate_mfspr = kvmppc_core_emulate_mfspr_44x,
| }; | ||||
| 
 | ||||
| static int __init kvmppc_44x_init(void) | ||||
| { | ||||
| 	int r; | ||||
| 
 | ||||
| 	r = kvmppc_booke_init(); | ||||
| 	if (r) | ||||
| 		goto err_out; | ||||
| 
 | ||||
| 	r = kvm_init(NULL, sizeof(struct kvmppc_vcpu_44x), 0, THIS_MODULE); | ||||
| 	if (r) | ||||
| 		goto err_out; | ||||
| 	kvm_ops_44x.owner = THIS_MODULE; | ||||
| 	kvmppc_pr_ops = &kvm_ops_44x; | ||||
| 
 | ||||
| err_out: | ||||
| 	return r; | ||||
| } | ||||
| 
 | ||||
| static void __exit kvmppc_44x_exit(void) | ||||
| { | ||||
| 	kvmppc_pr_ops = NULL; | ||||
| 	kvmppc_booke_exit(); | ||||
| } | ||||
| 
 | ||||
| module_init(kvmppc_44x_init); | ||||
| module_exit(kvmppc_44x_exit); | ||||
| MODULE_ALIAS_MISCDEV(KVM_MINOR); | ||||
| MODULE_ALIAS("devname:kvm"); | ||||
--- a/arch/powerpc/kvm/44x_emulate.c
+++ /dev/null
@@ -1,194 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright IBM Corp. 2008
- *
- * Authors: Hollis Blanchard <hollisb@us.ibm.com>
- */
-
-#include <asm/kvm_ppc.h>
-#include <asm/dcr.h>
-#include <asm/dcr-regs.h>
-#include <asm/disassemble.h>
-#include <asm/kvm_44x.h>
-#include "timing.h"
-
-#include "booke.h"
-#include "44x_tlb.h"
-
-#define XOP_MFDCRX  259
-#define XOP_MFDCR   323
-#define XOP_MTDCRX  387
-#define XOP_MTDCR   451
-#define XOP_TLBSX   914
-#define XOP_ICCCI   966
-#define XOP_TLBWE   978
-
-static int emulate_mtdcr(struct kvm_vcpu *vcpu, int rs, int dcrn)
-{
-	/* emulate some access in kernel */
-	switch (dcrn) {
-	case DCRN_CPR0_CONFIG_ADDR:
-		vcpu->arch.cpr0_cfgaddr = kvmppc_get_gpr(vcpu, rs);
-		return EMULATE_DONE;
-	default:
-		vcpu->run->dcr.dcrn = dcrn;
-		vcpu->run->dcr.data = kvmppc_get_gpr(vcpu, rs);
-		vcpu->run->dcr.is_write = 1;
-		vcpu->arch.dcr_is_write = 1;
-		vcpu->arch.dcr_needed = 1;
-		kvmppc_account_exit(vcpu, DCR_EXITS);
-		return EMULATE_DO_DCR;
-	}
-}
-
-static int emulate_mfdcr(struct kvm_vcpu *vcpu, int rt, int dcrn)
-{
-	/* The guest may access CPR0 registers to determine the timebase
-	 * frequency, and it must know the real host frequency because it
-	 * can directly access the timebase registers.
-	 *
-	 * It would be possible to emulate those accesses in userspace,
-	 * but userspace can really only figure out the end frequency.
-	 * We could decompose that into the factors that compute it, but
-	 * that's tricky math, and it's easier to just report the real
-	 * CPR0 values.
-	 */
-	switch (dcrn) {
-	case DCRN_CPR0_CONFIG_ADDR:
-		kvmppc_set_gpr(vcpu, rt, vcpu->arch.cpr0_cfgaddr);
-		break;
-	case DCRN_CPR0_CONFIG_DATA:
-		local_irq_disable();
-		mtdcr(DCRN_CPR0_CONFIG_ADDR,
-			  vcpu->arch.cpr0_cfgaddr);
-		kvmppc_set_gpr(vcpu, rt,
-			       mfdcr(DCRN_CPR0_CONFIG_DATA));
-		local_irq_enable();
-		break;
-	default:
-		vcpu->run->dcr.dcrn = dcrn;
-		vcpu->run->dcr.data =  0;
-		vcpu->run->dcr.is_write = 0;
-		vcpu->arch.dcr_is_write = 0;
-		vcpu->arch.io_gpr = rt;
-		vcpu->arch.dcr_needed = 1;
-		kvmppc_account_exit(vcpu, DCR_EXITS);
-		return EMULATE_DO_DCR;
-	}
-
-	return EMULATE_DONE;
-}
-
-int kvmppc_core_emulate_op_44x(struct kvm_run *run, struct kvm_vcpu *vcpu,
-			       unsigned int inst, int *advance)
-{
-	int emulated = EMULATE_DONE;
-	int dcrn = get_dcrn(inst);
-	int ra = get_ra(inst);
-	int rb = get_rb(inst);
-	int rc = get_rc(inst);
-	int rs = get_rs(inst);
-	int rt = get_rt(inst);
-	int ws = get_ws(inst);
-
-	switch (get_op(inst)) {
-	case 31:
-		switch (get_xop(inst)) {
-
-		case XOP_MFDCR:
-			emulated = emulate_mfdcr(vcpu, rt, dcrn);
-			break;
-
-		case XOP_MFDCRX:
-			emulated = emulate_mfdcr(vcpu, rt,
-					kvmppc_get_gpr(vcpu, ra));
-			break;
-
-		case XOP_MTDCR:
-			emulated = emulate_mtdcr(vcpu, rs, dcrn);
-			break;
-
-		case XOP_MTDCRX:
-			emulated = emulate_mtdcr(vcpu, rs,
-					kvmppc_get_gpr(vcpu, ra));
-			break;
-
-		case XOP_TLBWE:
-			emulated = kvmppc_44x_emul_tlbwe(vcpu, ra, rs, ws);
-			break;
-
-		case XOP_TLBSX:
-			emulated = kvmppc_44x_emul_tlbsx(vcpu, rt, ra, rb, rc);
-			break;
-
-		case XOP_ICCCI:
-			break;
-
-		default:
-			emulated = EMULATE_FAIL;
-		}
-
-		break;
-
-	default:
-		emulated = EMULATE_FAIL;
-	}
-
-	if (emulated == EMULATE_FAIL)
-		emulated = kvmppc_booke_emulate_op(run, vcpu, inst, advance);
-
-	return emulated;
-}
-
-int kvmppc_core_emulate_mtspr_44x(struct kvm_vcpu *vcpu, int sprn, ulong spr_val)
-{
-	int emulated = EMULATE_DONE;
-
-	switch (sprn) {
-	case SPRN_PID:
-		kvmppc_set_pid(vcpu, spr_val); break;
-	case SPRN_MMUCR:
-		vcpu->arch.mmucr = spr_val; break;
-	case SPRN_CCR0:
-		vcpu->arch.ccr0 = spr_val; break;
-	case SPRN_CCR1:
-		vcpu->arch.ccr1 = spr_val; break;
-	default:
-		emulated = kvmppc_booke_emulate_mtspr(vcpu, sprn, spr_val);
-	}
-
-	return emulated;
-}
-
-int kvmppc_core_emulate_mfspr_44x(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val)
-{
-	int emulated = EMULATE_DONE;
-
-	switch (sprn) {
-	case SPRN_PID:
-		*spr_val = vcpu->arch.pid; break;
-	case SPRN_MMUCR:
-		*spr_val = vcpu->arch.mmucr; break;
-	case SPRN_CCR0:
-		*spr_val = vcpu->arch.ccr0; break;
-	case SPRN_CCR1:
-		*spr_val = vcpu->arch.ccr1; break;
-	default:
-		emulated = kvmppc_booke_emulate_mfspr(vcpu, sprn, spr_val);
-	}
-
-	return emulated;
-}
--- a/arch/powerpc/kvm/44x_tlb.c
+++ /dev/null
@@ -1,528 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright IBM Corp. 2007
- *
- * Authors: Hollis Blanchard <hollisb@us.ibm.com>
- */
-
-#include <linux/types.h>
-#include <linux/string.h>
-#include <linux/kvm.h>
-#include <linux/kvm_host.h>
-#include <linux/highmem.h>
-
-#include <asm/tlbflush.h>
-#include <asm/mmu-44x.h>
-#include <asm/kvm_ppc.h>
-#include <asm/kvm_44x.h>
-#include "timing.h"
-
-#include "44x_tlb.h"
-#include "trace.h"
-
-#ifndef PPC44x_TLBE_SIZE
-#define PPC44x_TLBE_SIZE	PPC44x_TLB_4K
-#endif
-
-#define PAGE_SIZE_4K (1<<12)
-#define PAGE_MASK_4K (~(PAGE_SIZE_4K - 1))
-
-#define PPC44x_TLB_UATTR_MASK \
-	(PPC44x_TLB_U0|PPC44x_TLB_U1|PPC44x_TLB_U2|PPC44x_TLB_U3)
-#define PPC44x_TLB_USER_PERM_MASK (PPC44x_TLB_UX|PPC44x_TLB_UR|PPC44x_TLB_UW)
-#define PPC44x_TLB_SUPER_PERM_MASK (PPC44x_TLB_SX|PPC44x_TLB_SR|PPC44x_TLB_SW)
-
-#ifdef DEBUG
-void kvmppc_dump_tlbs(struct kvm_vcpu *vcpu)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	struct kvmppc_44x_tlbe *tlbe;
-	int i;
-
-	printk("vcpu %d TLB dump:\n", vcpu->vcpu_id);
-	printk("| %2s | %3s | %8s | %8s | %8s |\n",
-			"nr", "tid", "word0", "word1", "word2");
-
-	for (i = 0; i < ARRAY_SIZE(vcpu_44x->guest_tlb); i++) {
-		tlbe = &vcpu_44x->guest_tlb[i];
-		if (tlbe->word0 & PPC44x_TLB_VALID)
-			printk(" G%2d |  %02X | %08X | %08X | %08X |\n",
-			       i, tlbe->tid, tlbe->word0, tlbe->word1,
-			       tlbe->word2);
-	}
-}
-#endif
-
-static inline void kvmppc_44x_tlbie(unsigned int index)
-{
-	/* 0 <= index < 64, so the V bit is clear and we can use the index as
-	 * word0. */
-	asm volatile(
-		"tlbwe %[index], %[index], 0\n"
-	:
-	: [index] "r"(index)
-	);
-}
-
-static inline void kvmppc_44x_tlbre(unsigned int index,
-                                    struct kvmppc_44x_tlbe *tlbe)
-{
-	asm volatile(
-		"tlbre %[word0], %[index], 0\n"
-		"mfspr %[tid], %[sprn_mmucr]\n"
-		"andi. %[tid], %[tid], 0xff\n"
-		"tlbre %[word1], %[index], 1\n"
-		"tlbre %[word2], %[index], 2\n"
-		: [word0] "=r"(tlbe->word0),
-		  [word1] "=r"(tlbe->word1),
-		  [word2] "=r"(tlbe->word2),
-		  [tid]   "=r"(tlbe->tid)
-		: [index] "r"(index),
-		  [sprn_mmucr] "i"(SPRN_MMUCR)
-		: "cc"
-	);
-}
-
-static inline void kvmppc_44x_tlbwe(unsigned int index,
-                                    struct kvmppc_44x_tlbe *stlbe)
-{
-	unsigned long tmp;
-
-	asm volatile(
-		"mfspr %[tmp], %[sprn_mmucr]\n"
-		"rlwimi %[tmp], %[tid], 0, 0xff\n"
-		"mtspr %[sprn_mmucr], %[tmp]\n"
-		"tlbwe %[word0], %[index], 0\n"
-		"tlbwe %[word1], %[index], 1\n"
-		"tlbwe %[word2], %[index], 2\n"
-		: [tmp]   "=&r"(tmp)
-		: [word0] "r"(stlbe->word0),
-		  [word1] "r"(stlbe->word1),
-		  [word2] "r"(stlbe->word2),
-		  [tid]   "r"(stlbe->tid),
-		  [index] "r"(index),
-		  [sprn_mmucr] "i"(SPRN_MMUCR)
-	);
-}
-
-static u32 kvmppc_44x_tlb_shadow_attrib(u32 attrib, int usermode)
-{
-	/* We only care about the guest's permission and user bits. */
-	attrib &= PPC44x_TLB_PERM_MASK|PPC44x_TLB_UATTR_MASK;
-
-	if (!usermode) {
-		/* Guest is in supervisor mode, so we need to translate guest
-		 * supervisor permissions into user permissions. */
-		attrib &= ~PPC44x_TLB_USER_PERM_MASK;
-		attrib |= (attrib & PPC44x_TLB_SUPER_PERM_MASK) << 3;
-	}
-
-	/* Make sure host can always access this memory. */
-	attrib |= PPC44x_TLB_SX|PPC44x_TLB_SR|PPC44x_TLB_SW;
-
-	/* WIMGE = 0b00100 */
-	attrib |= PPC44x_TLB_M;
-
-	return attrib;
-}
-
-/* Load shadow TLB back into hardware. */
-void kvmppc_44x_tlb_load(struct kvm_vcpu *vcpu)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	int i;
-
-	for (i = 0; i <= tlb_44x_hwater; i++) {
-		struct kvmppc_44x_tlbe *stlbe = &vcpu_44x->shadow_tlb[i];
-
-		if (get_tlb_v(stlbe) && get_tlb_ts(stlbe))
-			kvmppc_44x_tlbwe(i, stlbe);
-	}
-}
-
-static void kvmppc_44x_tlbe_set_modified(struct kvmppc_vcpu_44x *vcpu_44x,
-                                         unsigned int i)
-{
-	vcpu_44x->shadow_tlb_mod[i] = 1;
-}
-
-/* Save hardware TLB to the vcpu, and invalidate all guest mappings. */
-void kvmppc_44x_tlb_put(struct kvm_vcpu *vcpu)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	int i;
-
-	for (i = 0; i <= tlb_44x_hwater; i++) {
-		struct kvmppc_44x_tlbe *stlbe = &vcpu_44x->shadow_tlb[i];
-
-		if (vcpu_44x->shadow_tlb_mod[i])
-			kvmppc_44x_tlbre(i, stlbe);
-
-		if (get_tlb_v(stlbe) && get_tlb_ts(stlbe))
-			kvmppc_44x_tlbie(i);
-	}
-}
-
-
-/* Search the guest TLB for a matching entry. */
-int kvmppc_44x_tlb_index(struct kvm_vcpu *vcpu, gva_t eaddr, unsigned int pid,
-                         unsigned int as)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	int i;
-
-	/* XXX Replace loop with fancy data structures. */
-	for (i = 0; i < ARRAY_SIZE(vcpu_44x->guest_tlb); i++) {
-		struct kvmppc_44x_tlbe *tlbe = &vcpu_44x->guest_tlb[i];
-		unsigned int tid;
-
-		if (eaddr < get_tlb_eaddr(tlbe))
-			continue;
-
-		if (eaddr > get_tlb_end(tlbe))
-			continue;
-
-		tid = get_tlb_tid(tlbe);
-		if (tid && (tid != pid))
-			continue;
-
-		if (!get_tlb_v(tlbe))
-			continue;
-
-		if (get_tlb_ts(tlbe) != as)
-			continue;
-
-		return i;
-	}
-
-	return -1;
-}
-
-gpa_t kvmppc_mmu_xlate(struct kvm_vcpu *vcpu, unsigned int gtlb_index,
-                       gva_t eaddr)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	struct kvmppc_44x_tlbe *gtlbe = &vcpu_44x->guest_tlb[gtlb_index];
-	unsigned int pgmask = get_tlb_bytes(gtlbe) - 1;
-
-	return get_tlb_raddr(gtlbe) | (eaddr & pgmask);
-}
-
-int kvmppc_mmu_itlb_index(struct kvm_vcpu *vcpu, gva_t eaddr)
-{
-	unsigned int as = !!(vcpu->arch.shared->msr & MSR_IS);
-
-	return kvmppc_44x_tlb_index(vcpu, eaddr, vcpu->arch.pid, as);
-}
-
-int kvmppc_mmu_dtlb_index(struct kvm_vcpu *vcpu, gva_t eaddr)
-{
-	unsigned int as = !!(vcpu->arch.shared->msr & MSR_DS);
-
-	return kvmppc_44x_tlb_index(vcpu, eaddr, vcpu->arch.pid, as);
-}
-
-void kvmppc_mmu_itlb_miss(struct kvm_vcpu *vcpu)
-{
-}
-
-void kvmppc_mmu_dtlb_miss(struct kvm_vcpu *vcpu)
-{
-}
-
-static void kvmppc_44x_shadow_release(struct kvmppc_vcpu_44x *vcpu_44x,
-                                      unsigned int stlb_index)
-{
-	struct kvmppc_44x_shadow_ref *ref = &vcpu_44x->shadow_refs[stlb_index];
-
-	if (!ref->page)
-		return;
-
-	/* Discard from the TLB. */
-	/* Note: we could actually invalidate a host mapping, if the host overwrote
-	 * this TLB entry since we inserted a guest mapping. */
-	kvmppc_44x_tlbie(stlb_index);
-
-	/* Now release the page. */
-	if (ref->writeable)
-		kvm_release_page_dirty(ref->page);
-	else
-		kvm_release_page_clean(ref->page);
-
-	ref->page = NULL;
-
-	/* XXX set tlb_44x_index to stlb_index? */
-
-	trace_kvm_stlb_inval(stlb_index);
-}
-
-void kvmppc_mmu_destroy_44x(struct kvm_vcpu *vcpu)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	int i;
-
-	for (i = 0; i <= tlb_44x_hwater; i++)
-		kvmppc_44x_shadow_release(vcpu_44x, i);
-}
-
-/**
- * kvmppc_mmu_map -- create a host mapping for guest memory
- *
- * If the guest wanted a larger page than the host supports, only the first
- * host page is mapped here and the rest are demand faulted.
- *
- * If the guest wanted a smaller page than the host page size, we map only the
- * guest-size page (i.e. not a full host page mapping).
- *
- * Caller must ensure that the specified guest TLB entry is safe to insert into
- * the shadow TLB.
- */
-void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 gvaddr, gpa_t gpaddr,
-                    unsigned int gtlb_index)
-{
-	struct kvmppc_44x_tlbe stlbe;
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	struct kvmppc_44x_tlbe *gtlbe = &vcpu_44x->guest_tlb[gtlb_index];
-	struct kvmppc_44x_shadow_ref *ref;
-	struct page *new_page;
-	hpa_t hpaddr;
-	gfn_t gfn;
-	u32 asid = gtlbe->tid;
-	u32 flags = gtlbe->word2;
-	u32 max_bytes = get_tlb_bytes(gtlbe);
-	unsigned int victim;
-
-	/* Select TLB entry to clobber. Indirectly guard against races with the TLB
-	 * miss handler by disabling interrupts. */
-	local_irq_disable();
-	victim = ++tlb_44x_index;
-	if (victim > tlb_44x_hwater)
-		victim = 0;
-	tlb_44x_index = victim;
-	local_irq_enable();
-
-	/* Get reference to new page. */
-	gfn = gpaddr >> PAGE_SHIFT;
-	new_page = gfn_to_page(vcpu->kvm, gfn);
-	if (is_error_page(new_page)) {
-		printk(KERN_ERR "Couldn't get guest page for gfn %llx!\n",
-			(unsigned long long)gfn);
-		return;
-	}
-	hpaddr = page_to_phys(new_page);
-
-	/* Invalidate any previous shadow mappings. */
-	kvmppc_44x_shadow_release(vcpu_44x, victim);
-
-	/* XXX Make sure (va, size) doesn't overlap any other
-	 * entries. 440x6 user manual says the result would be
-	 * "undefined." */
-
-	/* XXX what about AS? */
-
-	/* Force TS=1 for all guest mappings. */
-	stlbe.word0 = PPC44x_TLB_VALID | PPC44x_TLB_TS;
-
-	if (max_bytes >= PAGE_SIZE) {
-		/* Guest mapping is larger than or equal to host page size. We can use
-		 * a "native" host mapping. */
-		stlbe.word0 |= (gvaddr & PAGE_MASK) | PPC44x_TLBE_SIZE;
-	} else {
-		/* Guest mapping is smaller than host page size. We must restrict the
-		 * size of the mapping to be at most the smaller of the two, but for
-		 * simplicity we fall back to a 4K mapping (this is probably what the
-		 * guest is using anyways). */
-		stlbe.word0 |= (gvaddr & PAGE_MASK_4K) | PPC44x_TLB_4K;
-
-		/* 'hpaddr' is a host page, which is larger than the mapping we're
-		 * inserting here. To compensate, we must add the in-page offset to the
-		 * sub-page. */
-		hpaddr |= gpaddr & (PAGE_MASK ^ PAGE_MASK_4K);
-	}
-
-	stlbe.word1 = (hpaddr & 0xfffffc00) | ((hpaddr >> 32) & 0xf);
-	stlbe.word2 = kvmppc_44x_tlb_shadow_attrib(flags,
-	                                            vcpu->arch.shared->msr & MSR_PR);
-	stlbe.tid = !(asid & 0xff);
-
-	/* Keep track of the reference so we can properly release it later. */
-	ref = &vcpu_44x->shadow_refs[victim];
-	ref->page = new_page;
-	ref->gtlb_index = gtlb_index;
-	ref->writeable = !!(stlbe.word2 & PPC44x_TLB_UW);
-	ref->tid = stlbe.tid;
-
-	/* Insert shadow mapping into hardware TLB. */
-	kvmppc_44x_tlbe_set_modified(vcpu_44x, victim);
-	kvmppc_44x_tlbwe(victim, &stlbe);
-	trace_kvm_stlb_write(victim, stlbe.tid, stlbe.word0, stlbe.word1,
-			     stlbe.word2);
-}
-
-/* For a particular guest TLB entry, invalidate the corresponding host TLB
- * mappings and release the host pages. */
-static void kvmppc_44x_invalidate(struct kvm_vcpu *vcpu,
-                                  unsigned int gtlb_index)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(vcpu_44x->shadow_refs); i++) {
-		struct kvmppc_44x_shadow_ref *ref = &vcpu_44x->shadow_refs[i];
-		if (ref->gtlb_index == gtlb_index)
-			kvmppc_44x_shadow_release(vcpu_44x, i);
-	}
-}
-
-void kvmppc_mmu_msr_notify(struct kvm_vcpu *vcpu, u32 old_msr)
-{
-	int usermode = vcpu->arch.shared->msr & MSR_PR;
-
-	vcpu->arch.shadow_pid = !usermode;
-}
-
-void kvmppc_set_pid(struct kvm_vcpu *vcpu, u32 new_pid)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	int i;
-
-	if (unlikely(vcpu->arch.pid == new_pid))
-		return;
-
-	vcpu->arch.pid = new_pid;
-
-	/* Guest userspace runs with TID=0 mappings and PID=0, to make sure it
-	 * can't access guest kernel mappings (TID=1). When we switch to a new
-	 * guest PID, which will also use host PID=0, we must discard the old guest
-	 * userspace mappings. */
-	for (i = 0; i < ARRAY_SIZE(vcpu_44x->shadow_refs); i++) {
-		struct kvmppc_44x_shadow_ref *ref = &vcpu_44x->shadow_refs[i];
-
-		if (ref->tid == 0)
-			kvmppc_44x_shadow_release(vcpu_44x, i);
-	}
-}
-
-static int tlbe_is_host_safe(const struct kvm_vcpu *vcpu,
-                             const struct kvmppc_44x_tlbe *tlbe)
-{
-	gpa_t gpa;
-
-	if (!get_tlb_v(tlbe))
-		return 0;
-
-	/* Does it match current guest AS? */
-	/* XXX what about IS != DS? */
-	if (get_tlb_ts(tlbe) != !!(vcpu->arch.shared->msr & MSR_IS))
-		return 0;
-
-	gpa = get_tlb_raddr(tlbe);
-	if (!gfn_to_memslot(vcpu->kvm, gpa >> PAGE_SHIFT))
-		/* Mapping is not for RAM. */
-		return 0;
-
-	return 1;
-}
-
-int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
-{
-	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
-	struct kvmppc_44x_tlbe *tlbe;
-	unsigned int gtlb_index;
-	int idx;
-
-	gtlb_index = kvmppc_get_gpr(vcpu, ra);
-	if (gtlb_index >= KVM44x_GUEST_TLB_SIZE) {
-		printk("%s: index %d\n", __func__, gtlb_index);
-		kvmppc_dump_vcpu(vcpu);
-		return EMULATE_FAIL;
-	}
-
-	tlbe = &vcpu_44x->guest_tlb[gtlb_index];
-
-	/* Invalidate shadow mappings for the about-to-be-clobbered TLB entry. */
-	if (tlbe->word0 & PPC44x_TLB_VALID)
-		kvmppc_44x_invalidate(vcpu, gtlb_index);
-
-	switch (ws) {
-	case PPC44x_TLB_PAGEID:
-		tlbe->tid = get_mmucr_stid(vcpu);
-		tlbe->word0 = kvmppc_get_gpr(vcpu, rs);
-		break;
-
-	case PPC44x_TLB_XLAT:
-		tlbe->word1 = kvmppc_get_gpr(vcpu, rs);
-		break;
-
-	case PPC44x_TLB_ATTRIB:
-		tlbe->word2 = kvmppc_get_gpr(vcpu, rs);
-		break;
-
-	default:
-		return EMULATE_FAIL;
-	}
-
-	idx = srcu_read_lock(&vcpu->kvm->srcu);
-
-	if (tlbe_is_host_safe(vcpu, tlbe)) {
-		gva_t eaddr;
-		gpa_t gpaddr;
-		u32 bytes;
-
-		eaddr = get_tlb_eaddr(tlbe);
-		gpaddr = get_tlb_raddr(tlbe);
-
-		/* Use the advertised page size to mask effective and real addrs. */
-		bytes = get_tlb_bytes(tlbe);
-		eaddr &= ~(bytes - 1);
-		gpaddr &= ~(bytes - 1);
-
-		kvmppc_mmu_map(vcpu, eaddr, gpaddr, gtlb_index);
-	}
-
-	srcu_read_unlock(&vcpu->kvm->srcu, idx);
-
-	trace_kvm_gtlb_write(gtlb_index, tlbe->tid, tlbe->word0, tlbe->word1,
-			     tlbe->word2);
-
-	kvmppc_set_exit_type(vcpu, EMULATED_TLBWE_EXITS);
-	return EMULATE_DONE;
-}
-
-int kvmppc_44x_emul_tlbsx(struct kvm_vcpu *vcpu, u8 rt, u8 ra, u8 rb, u8 rc)
-{
-	u32 ea;
-	int gtlb_index;
-	unsigned int as = get_mmucr_sts(vcpu);
-	unsigned int pid = get_mmucr_stid(vcpu);
-
-	ea = kvmppc_get_gpr(vcpu, rb);
-	if (ra)
-		ea += kvmppc_get_gpr(vcpu, ra);
-
-	gtlb_index = kvmppc_44x_tlb_index(vcpu, ea, pid, as);
-	if (rc) {
-		u32 cr = kvmppc_get_cr(vcpu);
-
-		if (gtlb_index < 0)
-			kvmppc_set_cr(vcpu, cr & ~0x20000000);
-		else
-			kvmppc_set_cr(vcpu, cr | 0x20000000);
-	}
-	kvmppc_set_gpr(vcpu, rt, gtlb_index);
-
-	kvmppc_set_exit_type(vcpu, EMULATED_TLBSX_EXITS);
-	return EMULATE_DONE;
-}
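
One trick in the removed file deserves a note: kvmppc_44x_tlb_shadow_attrib() turns guest supervisor permissions into user permissions with a plain shift. That only works because of how asm/mmu-44x.h lays out the permission bits; a sketch, assuming the usual SR=0x01, SW=0x02, SX=0x04 / UR=0x08, UW=0x10, UX=0x20 layout (illustrative, not code from this series):

	/* Each S* bit lands exactly on the matching U* bit when shifted
	 * left by three, e.g. SX (0x04) << 3 == UX (0x20). */
	u32 attrib = PPC44x_TLB_SR | PPC44x_TLB_SX;	/* guest kernel: read + exec */
	attrib |= (attrib & PPC44x_TLB_SUPER_PERM_MASK) << 3;
	/* attrib now also carries UR | UX, so the shadow entry is usable
	 * from the host context the guest kernel runs in. */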
--- a/arch/powerpc/kvm/44x_tlb.h
+++ /dev/null
@@ -1,86 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright IBM Corp. 2007
- *
- * Authors: Hollis Blanchard <hollisb@us.ibm.com>
- */
-
-#ifndef __KVM_POWERPC_TLB_H__
-#define __KVM_POWERPC_TLB_H__
-
-#include <linux/kvm_host.h>
-#include <asm/mmu-44x.h>
-
-extern int kvmppc_44x_tlb_index(struct kvm_vcpu *vcpu, gva_t eaddr,
-                                unsigned int pid, unsigned int as);
-
-extern int kvmppc_44x_emul_tlbsx(struct kvm_vcpu *vcpu, u8 rt, u8 ra, u8 rb,
-                                 u8 rc);
-extern int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws);
-
-/* TLB helper functions */
-static inline unsigned int get_tlb_size(const struct kvmppc_44x_tlbe *tlbe)
-{
-	return (tlbe->word0 >> 4) & 0xf;
-}
-
-static inline gva_t get_tlb_eaddr(const struct kvmppc_44x_tlbe *tlbe)
-{
-	return tlbe->word0 & 0xfffffc00;
-}
-
-static inline gva_t get_tlb_bytes(const struct kvmppc_44x_tlbe *tlbe)
-{
-	unsigned int pgsize = get_tlb_size(tlbe);
-	return 1 << 10 << (pgsize << 1);
-}
-
-static inline gva_t get_tlb_end(const struct kvmppc_44x_tlbe *tlbe)
-{
-	return get_tlb_eaddr(tlbe) + get_tlb_bytes(tlbe) - 1;
-}
-
-static inline u64 get_tlb_raddr(const struct kvmppc_44x_tlbe *tlbe)
-{
-	u64 word1 = tlbe->word1;
-	return ((word1 & 0xf) << 32) | (word1 & 0xfffffc00);
-}
-
-static inline unsigned int get_tlb_tid(const struct kvmppc_44x_tlbe *tlbe)
-{
-	return tlbe->tid & 0xff;
-}
-
-static inline unsigned int get_tlb_ts(const struct kvmppc_44x_tlbe *tlbe)
-{
-	return (tlbe->word0 >> 8) & 0x1;
-}
-
-static inline unsigned int get_tlb_v(const struct kvmppc_44x_tlbe *tlbe)
-{
-	return (tlbe->word0 >> 9) & 0x1;
-}
-
-static inline unsigned int get_mmucr_stid(const struct kvm_vcpu *vcpu)
-{
-	return vcpu->arch.mmucr & 0xff;
-}
-
-static inline unsigned int get_mmucr_sts(const struct kvm_vcpu *vcpu)
-{
-	return (vcpu->arch.mmucr >> 16) & 0x1;
-}
-
-#endif /* __KVM_POWERPC_TLB_H__ */
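
The get_tlb_bytes() helper above decodes the 4-bit SIZE field geometrically: each step quadruples the page, starting at 1KB. A worked example of the arithmetic (illustrative only):

	/* get_tlb_bytes() computes 1KB << (2 * SIZE):
	 *   SIZE 0 -> 1KB,   1 -> 4KB,  2 -> 16KB, 3 -> 64KB,
	 *   SIZE 4 -> 256KB, 5 -> 1MB,  6 -> 4MB,  7 -> 16MB */
	unsigned int pgsize = 7;				/* the 16MB encoding */
	unsigned long bytes = 1UL << 10 << (pgsize << 1);	/* 1 << 24 = 16MB */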
--- a/arch/powerpc/kvm/Kconfig
+++ b/arch/powerpc/kvm/Kconfig
@@ -75,7 +75,6 @@ config KVM_BOOK3S_64
 config KVM_BOOK3S_64_HV
 	tristate "KVM support for POWER7 and PPC970 using hypervisor mode in host"
 	depends on KVM_BOOK3S_64
-	depends on !CPU_LITTLE_ENDIAN
 	select KVM_BOOK3S_HV_POSSIBLE
 	select MMU_NOTIFIER
 	select CMA
@@ -113,23 +112,9 @@ config KVM_BOOK3S_64_PR
 config KVM_BOOKE_HV
 	bool
 
-config KVM_440
-	bool "KVM support for PowerPC 440 processors"
-	depends on 44x
-	select KVM
-	select KVM_MMIO
-	---help---
-	  Support running unmodified 440 guest kernels in virtual machines on
-	  440 host processors.
-
-	  This module provides access to the hardware capabilities through
-	  a character device node named /dev/kvm.
-
-	  If unsure, say N.
-
 config KVM_EXIT_TIMING
 	bool "Detailed exit timing"
-	depends on KVM_440 || KVM_E500V2 || KVM_E500MC
+	depends on KVM_E500V2 || KVM_E500MC
 	---help---
 	  Calculate elapsed time for every exit/enter cycle. A per-vcpu
 	  report is available in debugfs kvm/vm#_vcpu#_timing.
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -10,27 +10,17 @@ KVM := ../../../virt/kvm
 common-objs-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o \
 		$(KVM)/eventfd.o
 
-CFLAGS_44x_tlb.o  := -I.
 CFLAGS_e500_mmu.o := -I.
 CFLAGS_e500_mmu_host.o := -I.
 CFLAGS_emulate.o  := -I.
+CFLAGS_emulate_loadstore.o  := -I.
 
-common-objs-y += powerpc.o emulate.o
+common-objs-y += powerpc.o emulate.o emulate_loadstore.o
 obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o
 obj-$(CONFIG_KVM_BOOK3S_HANDLER) += book3s_exports.o
 
 AFLAGS_booke_interrupts.o := -I$(obj)
 
-kvm-440-objs := \
-	$(common-objs-y) \
-	booke.o \
-	booke_emulate.o \
-	booke_interrupts.o \
-	44x.o \
-	44x_tlb.o \
-	44x_emulate.o
-kvm-objs-$(CONFIG_KVM_440) := $(kvm-440-objs)
-
 kvm-e500-objs := \
 	$(common-objs-y) \
 	booke.o \
@@ -58,6 +48,7 @@ kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) := \
 
 kvm-pr-y := \
 	fpu.o \
+	emulate.o \
 	book3s_paired_singles.o \
 	book3s_pr.o \
 	book3s_pr_papr.o \
@@ -101,7 +92,7 @@ kvm-book3s_64-module-objs += \
 	$(KVM)/kvm_main.o \
 	$(KVM)/eventfd.o \
 	powerpc.o \
-	emulate.o \
+	emulate_loadstore.o \
 	book3s.o \
 	book3s_64_vio.o \
 	book3s_rtas.o \
@@ -127,7 +118,6 @@ kvm-objs-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 
 kvm-objs := $(kvm-objs-m) $(kvm-objs-y)
 
-obj-$(CONFIG_KVM_440) += kvm.o
 obj-$(CONFIG_KVM_E500V2) += kvm.o
 obj-$(CONFIG_KVM_E500MC) += kvm.o
 obj-$(CONFIG_KVM_BOOK3S_64) += kvm.o
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -72,6 +72,17 @@ void kvmppc_core_load_guest_debugstate(struct kvm_vcpu *vcpu)
 {
 }
 
+void kvmppc_unfixup_split_real(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.hflags & BOOK3S_HFLAG_SPLIT_HACK) {
+		ulong pc = kvmppc_get_pc(vcpu);
+		if ((pc & SPLIT_HACK_MASK) == SPLIT_HACK_OFFS)
+			kvmppc_set_pc(vcpu, pc & ~SPLIT_HACK_MASK);
+		vcpu->arch.hflags &= ~BOOK3S_HFLAG_SPLIT_HACK;
+	}
+}
+EXPORT_SYMBOL_GPL(kvmppc_unfixup_split_real);
+
 static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
 {
 	if (!is_kvmppc_hv_enabled(vcpu->kvm))
@@ -118,6 +129,7 @@ static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
 
 void kvmppc_inject_interrupt(struct kvm_vcpu *vcpu, int vec, u64 flags)
 {
+	kvmppc_unfixup_split_real(vcpu);
 	kvmppc_set_srr0(vcpu, kvmppc_get_pc(vcpu));
 	kvmppc_set_srr1(vcpu, kvmppc_get_msr(vcpu) | flags);
 	kvmppc_set_pc(vcpu, kvmppc_interrupt_offset(vcpu) + vec);
@@ -218,6 +230,23 @@ void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu)
 	kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL_LEVEL);
 }
 
+void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu, ulong dar,
+				    ulong flags)
+{
+	kvmppc_set_dar(vcpu, dar);
+	kvmppc_set_dsisr(vcpu, flags);
+	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_DATA_STORAGE);
+}
+
+void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu, ulong flags)
+{
+	u64 msr = kvmppc_get_msr(vcpu);
+	msr &= ~(SRR1_ISI_NOPT | SRR1_ISI_N_OR_G | SRR1_ISI_PROT);
+	msr |= flags & (SRR1_ISI_NOPT | SRR1_ISI_N_OR_G | SRR1_ISI_PROT);
+	kvmppc_set_msr_fast(vcpu, msr);
+	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_INST_STORAGE);
+}
+
 int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
 {
 	int deliver = 1;
@@ -342,18 +371,18 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvmppc_core_prepare_to_enter);
 
-pfn_t kvmppc_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, bool writing,
+pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa, bool writing,
 			bool *writable)
 {
-	ulong mp_pa = vcpu->arch.magic_page_pa;
+	ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
 
 	if (!(kvmppc_get_msr(vcpu) & MSR_SF))
 		mp_pa = (uint32_t)mp_pa;
 
 	/* Magic page override */
-	if (unlikely(mp_pa) &&
-	    unlikely(((gfn << PAGE_SHIFT) & KVM_PAM) ==
-		     ((mp_pa & PAGE_MASK) & KVM_PAM))) {
+	gpa &= ~0xFFFULL;
+	if (unlikely(mp_pa) && unlikely((gpa & KVM_PAM) == mp_pa)) {
 		ulong shared_page = ((ulong)vcpu->arch.shared) & PAGE_MASK;
 		pfn_t pfn;
 
@@ -366,11 +395,13 @@ pfn_t kvmppc_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, bool writing,
 
 	return gfn_to_pfn_prot(vcpu->kvm, gfn, writing, writable);
 }
-EXPORT_SYMBOL_GPL(kvmppc_gfn_to_pfn);
+EXPORT_SYMBOL_GPL(kvmppc_gpa_to_pfn);
 
-static int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, bool data,
-			bool iswrite, struct kvmppc_pte *pte)
+int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, enum xlate_instdata xlid,
+		 enum xlate_readwrite xlrw, struct kvmppc_pte *pte)
 {
+	bool data = (xlid == XLATE_DATA);
+	bool iswrite = (xlrw == XLATE_WRITE);
 	int relocated = (kvmppc_get_msr(vcpu) & (data ? MSR_DR : MSR_IR));
 	int r;
 
@@ -384,88 +415,34 @@ static int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, bool data,
 		pte->may_write = true;
 		pte->may_execute = true;
 		r = 0;
+
+		if ((kvmppc_get_msr(vcpu) & (MSR_IR | MSR_DR)) == MSR_DR &&
+		    !data) {
+			if ((vcpu->arch.hflags & BOOK3S_HFLAG_SPLIT_HACK) &&
+			    ((eaddr & SPLIT_HACK_MASK) == SPLIT_HACK_OFFS))
+			pte->raddr &= ~SPLIT_HACK_MASK;
+		}
 	}
 
 	return r;
 }
 
-static hva_t kvmppc_bad_hva(void)
+int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type,
+					 u32 *inst)
 {
-	return PAGE_OFFSET;
+	ulong pc = kvmppc_get_pc(vcpu);
+	int r;
+
+	if (type == INST_SC)
+		pc -= 4;
+
+	r = kvmppc_ld(vcpu, &pc, sizeof(u32), inst, false);
+	if (r == EMULATE_DONE)
+		return r;
+	else
+		return EMULATE_AGAIN;
 }
-
-static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte,
-			       bool read)
-{
-	hva_t hpage;
-
-	if (read && !pte->may_read)
-		goto err;
-
-	if (!read && !pte->may_write)
-		goto err;
-
-	hpage = gfn_to_hva(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
-	if (kvm_is_error_hva(hpage))
-		goto err;
-
-	return hpage | (pte->raddr & ~PAGE_MASK);
-err:
-	return kvmppc_bad_hva();
-}
-
-int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
-	      bool data)
-{
-	struct kvmppc_pte pte;
-
-	vcpu->stat.st++;
-
-	if (kvmppc_xlate(vcpu, *eaddr, data, true, &pte))
-		return -ENOENT;
-
-	*eaddr = pte.raddr;
-
-	if (!pte.may_write)
-		return -EPERM;
-
-	if (kvm_write_guest(vcpu->kvm, pte.raddr, ptr, size))
-		return EMULATE_DO_MMIO;
-
-	return EMULATE_DONE;
-}
-EXPORT_SYMBOL_GPL(kvmppc_st);
-
-int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
-		      bool data)
-{
-	struct kvmppc_pte pte;
-	hva_t hva = *eaddr;
-
-	vcpu->stat.ld++;
-
-	if (kvmppc_xlate(vcpu, *eaddr, data, false, &pte))
-		goto nopte;
-
-	*eaddr = pte.raddr;
-
-	hva = kvmppc_pte_to_hva(vcpu, &pte, true);
-	if (kvm_is_error_hva(hva))
-		goto mmio;
-
-	if (copy_from_user(ptr, (void __user *)hva, size)) {
-		printk(KERN_INFO "kvmppc_ld at 0x%lx failed\n", hva);
-		goto mmio;
-	}
-
-	return EMULATE_DONE;
-
-nopte:
-	return -ENOENT;
-mmio:
-	return EMULATE_DO_MMIO;
-}
-EXPORT_SYMBOL_GPL(kvmppc_ld);
+EXPORT_SYMBOL_GPL(kvmppc_load_last_inst);
 
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
@@ -646,6 +623,12 @@ int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
 		case KVM_REG_PPC_BESCR:
 			val = get_reg_val(reg->id, vcpu->arch.bescr);
 			break;
+		case KVM_REG_PPC_VTB:
+			val = get_reg_val(reg->id, vcpu->arch.vtb);
+			break;
+		case KVM_REG_PPC_IC:
+			val = get_reg_val(reg->id, vcpu->arch.ic);
+			break;
 		default:
 			r = -EINVAL;
 			break;
@@ -750,6 +733,12 @@ int kvm_vcpu_ioctl_set_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
 		case KVM_REG_PPC_BESCR:
 			vcpu->arch.bescr = set_reg_val(reg->id, val);
 			break;
+		case KVM_REG_PPC_VTB:
+			vcpu->arch.vtb = set_reg_val(reg->id, val);
+			break;
+		case KVM_REG_PPC_IC:
+			vcpu->arch.ic = set_reg_val(reg->id, val);
+			break;
 		default:
 			r = -EINVAL;
 			break;
@@ -913,6 +902,11 @@ int kvmppc_core_check_processor_compat(void)
 	return 0;
 }
 
+int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hcall)
+{
+	return kvm->arch.kvm_ops->hcall_implemented(hcall);
+}
+
 static int kvmppc_book3s_init(void)
 {
 	int r;
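
The net effect of the book3s.c changes above: kvmppc_xlate() takes explicit enums instead of two easy-to-swap bools, kvmppc_st()/kvmppc_ld() leave this file for common code, and instruction fetch is centralized in kvmppc_load_last_inst(). Exit handlers then share one fetch-and-retry idiom, roughly as follows (a sketch of the calling pattern, mirroring the MMU hunks further down):

	/* Sketch: how an exit handler fetches the faulting instruction now.
	 * If the fetch cannot complete (e.g. the mapping went away again),
	 * we simply re-enter the guest and let it retry. */
	u32 last_inst;

	if (kvmppc_get_last_inst(vcpu, INST_GENERIC, &last_inst) != EMULATE_DONE)
		return RESUME_GUEST;
	/* ... emulate based on last_inst ... */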
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -335,7 +335,7 @@ static int kvmppc_mmu_book3s_32_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 	if (r < 0)
 		r = kvmppc_mmu_book3s_32_xlate_pte(vcpu, eaddr, pte,
 						   data, iswrite, true);
-	if (r < 0)
+	if (r == -ENOENT)
 		r = kvmppc_mmu_book3s_32_xlate_pte(vcpu, eaddr, pte,
 						   data, iswrite, false);
 
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -156,11 +156,10 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	bool writable;
 
 	/* Get host physical address for gpa */
-	hpaddr = kvmppc_gfn_to_pfn(vcpu, orig_pte->raddr >> PAGE_SHIFT,
-				   iswrite, &writable);
+	hpaddr = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable);
 	if (is_error_noslot_pfn(hpaddr)) {
-		printk(KERN_INFO "Couldn't get guest page for gfn %lx!\n",
-				 orig_pte->eaddr);
+		printk(KERN_INFO "Couldn't get guest page for gpa %lx!\n",
+				 orig_pte->raddr);
 		r = -EINVAL;
 		goto out;
 	}
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -104,9 +104,10 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	smp_rmb();
 
 	/* Get host physical address for gpa */
-	pfn = kvmppc_gfn_to_pfn(vcpu, gfn, iswrite, &writable);
+	pfn = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable);
 	if (is_error_noslot_pfn(pfn)) {
-		printk(KERN_INFO "Couldn't get guest page for gfn %lx!\n", gfn);
+		printk(KERN_INFO "Couldn't get guest page for gpa %lx!\n",
+		       orig_pte->raddr);
 		r = -EINVAL;
 		goto out;
 	}
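The book3s_64_mmu_hv.c hunks that follow all apply one recipe for little-endian hosts: the guest's hashed page table stays big-endian regardless of host byte order, so HPTE pointers become __be64, every read goes through be64_to_cpu(), every write through cpu_to_be64(), and pure zero/non-zero bit tests may instead convert the constant. Condensed into one illustrative snippet (a sketch, not a hunk from this series):

	/* The BE-aware HPTE access pattern used throughout the file below: */
	__be64 *hptep = (__be64 *)(kvm->arch.hpt_virt + (index << 4));
	unsigned long v = be64_to_cpu(hptep[0]);	/* convert on read */

	if (hptep[0] & cpu_to_be64(HPTE_V_VALID))	/* or convert the constant */
		v |= HPTE_V_ABSENT;
	hptep[0] = cpu_to_be64(v);			/* convert on write */
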
| @ -450,7 +450,7 @@ static int kvmppc_mmu_book3s_64_hv_xlate(struct kvm_vcpu *vcpu, gva_t eaddr, | ||||
| 	unsigned long slb_v; | ||||
| 	unsigned long pp, key; | ||||
| 	unsigned long v, gr; | ||||
| 	unsigned long *hptep; | ||||
| 	__be64 *hptep; | ||||
| 	int index; | ||||
| 	int virtmode = vcpu->arch.shregs.msr & (data ? MSR_DR : MSR_IR); | ||||
| 
 | ||||
| @ -473,13 +473,13 @@ static int kvmppc_mmu_book3s_64_hv_xlate(struct kvm_vcpu *vcpu, gva_t eaddr, | ||||
| 		preempt_enable(); | ||||
| 		return -ENOENT; | ||||
| 	} | ||||
| 	hptep = (unsigned long *)(kvm->arch.hpt_virt + (index << 4)); | ||||
| 	v = hptep[0] & ~HPTE_V_HVLOCK; | ||||
| 	hptep = (__be64 *)(kvm->arch.hpt_virt + (index << 4)); | ||||
| 	v = be64_to_cpu(hptep[0]) & ~HPTE_V_HVLOCK; | ||||
| 	gr = kvm->arch.revmap[index].guest_rpte; | ||||
| 
 | ||||
| 	/* Unlock the HPTE */ | ||||
| 	asm volatile("lwsync" : : : "memory"); | ||||
| 	hptep[0] = v; | ||||
| 	hptep[0] = cpu_to_be64(v); | ||||
| 	preempt_enable(); | ||||
| 
 | ||||
| 	gpte->eaddr = eaddr; | ||||
| @ -530,21 +530,14 @@ static int instruction_is_store(unsigned int instr) | ||||
| static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 				  unsigned long gpa, gva_t ea, int is_store) | ||||
| { | ||||
| 	int ret; | ||||
| 	u32 last_inst; | ||||
| 	unsigned long srr0 = kvmppc_get_pc(vcpu); | ||||
| 
 | ||||
| 	/* We try to load the last instruction.  We don't let
 | ||||
| 	 * emulate_instruction do it as it doesn't check what | ||||
| 	 * kvmppc_ld returns. | ||||
| 	/*
 | ||||
| 	 * If we fail, we just return to the guest and try executing it again. | ||||
| 	 */ | ||||
| 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) { | ||||
| 		ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false); | ||||
| 		if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED) | ||||
| 			return RESUME_GUEST; | ||||
| 		vcpu->arch.last_inst = last_inst; | ||||
| 	} | ||||
| 	if (kvmppc_get_last_inst(vcpu, INST_GENERIC, &last_inst) != | ||||
| 		EMULATE_DONE) | ||||
| 		return RESUME_GUEST; | ||||
| 
 | ||||
| 	/*
 | ||||
| 	 * WARNING: We do not know for sure whether the instruction we just | ||||
| @ -558,7 +551,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	 * we just return and retry the instruction. | ||||
| 	 */ | ||||
| 
 | ||||
| 	if (instruction_is_store(kvmppc_get_last_inst(vcpu)) != !!is_store) | ||||
| 	if (instruction_is_store(last_inst) != !!is_store) | ||||
| 		return RESUME_GUEST; | ||||
| 
 | ||||
| 	/*
 | ||||
| @ -583,7 +576,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 				unsigned long ea, unsigned long dsisr) | ||||
| { | ||||
| 	struct kvm *kvm = vcpu->kvm; | ||||
| 	unsigned long *hptep, hpte[3], r; | ||||
| 	unsigned long hpte[3], r; | ||||
| 	__be64 *hptep; | ||||
| 	unsigned long mmu_seq, psize, pte_size; | ||||
| 	unsigned long gpa_base, gfn_base; | ||||
| 	unsigned long gpa, gfn, hva, pfn; | ||||
| @ -606,16 +600,16 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	if (ea != vcpu->arch.pgfault_addr) | ||||
| 		return RESUME_GUEST; | ||||
| 	index = vcpu->arch.pgfault_index; | ||||
| 	hptep = (unsigned long *)(kvm->arch.hpt_virt + (index << 4)); | ||||
| 	hptep = (__be64 *)(kvm->arch.hpt_virt + (index << 4)); | ||||
| 	rev = &kvm->arch.revmap[index]; | ||||
| 	preempt_disable(); | ||||
| 	while (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) | ||||
| 		cpu_relax(); | ||||
| 	hpte[0] = hptep[0] & ~HPTE_V_HVLOCK; | ||||
| 	hpte[1] = hptep[1]; | ||||
| 	hpte[0] = be64_to_cpu(hptep[0]) & ~HPTE_V_HVLOCK; | ||||
| 	hpte[1] = be64_to_cpu(hptep[1]); | ||||
| 	hpte[2] = r = rev->guest_rpte; | ||||
| 	asm volatile("lwsync" : : : "memory"); | ||||
| 	hptep[0] = hpte[0]; | ||||
| 	hptep[0] = cpu_to_be64(hpte[0]); | ||||
| 	preempt_enable(); | ||||
| 
 | ||||
| 	if (hpte[0] != vcpu->arch.pgfault_hpte[0] || | ||||
| @ -731,8 +725,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	preempt_disable(); | ||||
| 	while (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) | ||||
| 		cpu_relax(); | ||||
| 	if ((hptep[0] & ~HPTE_V_HVLOCK) != hpte[0] || hptep[1] != hpte[1] || | ||||
| 	    rev->guest_rpte != hpte[2]) | ||||
| 	if ((be64_to_cpu(hptep[0]) & ~HPTE_V_HVLOCK) != hpte[0] || | ||||
| 		be64_to_cpu(hptep[1]) != hpte[1] || | ||||
| 		rev->guest_rpte != hpte[2]) | ||||
| 		/* HPTE has been changed under us; let the guest retry */ | ||||
| 		goto out_unlock; | ||||
| 	hpte[0] = (hpte[0] & ~HPTE_V_ABSENT) | HPTE_V_VALID; | ||||
| @ -752,20 +747,20 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	rcbits = *rmap >> KVMPPC_RMAP_RC_SHIFT; | ||||
| 	r &= rcbits | ~(HPTE_R_R | HPTE_R_C); | ||||
| 
 | ||||
| 	if (hptep[0] & HPTE_V_VALID) { | ||||
| 	if (be64_to_cpu(hptep[0]) & HPTE_V_VALID) { | ||||
| 		/* HPTE was previously valid, so we need to invalidate it */ | ||||
| 		unlock_rmap(rmap); | ||||
| 		hptep[0] |= HPTE_V_ABSENT; | ||||
| 		hptep[0] |= cpu_to_be64(HPTE_V_ABSENT); | ||||
| 		kvmppc_invalidate_hpte(kvm, hptep, index); | ||||
| 		/* don't lose previous R and C bits */ | ||||
| 		r |= hptep[1] & (HPTE_R_R | HPTE_R_C); | ||||
| 		r |= be64_to_cpu(hptep[1]) & (HPTE_R_R | HPTE_R_C); | ||||
| 	} else { | ||||
| 		kvmppc_add_revmap_chain(kvm, rev, rmap, index, 0); | ||||
| 	} | ||||
| 
 | ||||
| 	hptep[1] = r; | ||||
| 	hptep[1] = cpu_to_be64(r); | ||||
| 	eieio(); | ||||
| 	hptep[0] = hpte[0]; | ||||
| 	hptep[0] = cpu_to_be64(hpte[0]); | ||||
| 	asm volatile("ptesync" : : : "memory"); | ||||
| 	preempt_enable(); | ||||
| 	if (page && hpte_is_writable(r)) | ||||
| @ -784,7 +779,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	return ret; | ||||
| 
 | ||||
|  out_unlock: | ||||
| 	hptep[0] &= ~HPTE_V_HVLOCK; | ||||
| 	hptep[0] &= ~cpu_to_be64(HPTE_V_HVLOCK); | ||||
| 	preempt_enable(); | ||||
| 	goto out_put; | ||||
| } | ||||
| @ -860,7 +855,7 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp, | ||||
| { | ||||
| 	struct revmap_entry *rev = kvm->arch.revmap; | ||||
| 	unsigned long h, i, j; | ||||
| 	unsigned long *hptep; | ||||
| 	__be64 *hptep; | ||||
| 	unsigned long ptel, psize, rcbits; | ||||
| 
 | ||||
| 	for (;;) { | ||||
| @ -876,11 +871,11 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp, | ||||
| 		 * rmap chain lock. | ||||
| 		 */ | ||||
| 		i = *rmapp & KVMPPC_RMAP_INDEX; | ||||
| 		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4)); | ||||
| 		hptep = (__be64 *) (kvm->arch.hpt_virt + (i << 4)); | ||||
 		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
 			/* unlock rmap before spinning on the HPTE lock */
 			unlock_rmap(rmapp);
-			while (hptep[0] & HPTE_V_HVLOCK)
+			while (be64_to_cpu(hptep[0]) & HPTE_V_HVLOCK)
 				cpu_relax();
 			continue;
 		}
@@ -899,14 +894,14 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
 
 		/* Now check and modify the HPTE */
 		ptel = rev[i].guest_rpte;
-		psize = hpte_page_size(hptep[0], ptel);
-		if ((hptep[0] & HPTE_V_VALID) &&
+		psize = hpte_page_size(be64_to_cpu(hptep[0]), ptel);
+		if ((be64_to_cpu(hptep[0]) & HPTE_V_VALID) &&
 		    hpte_rpn(ptel, psize) == gfn) {
 			if (kvm->arch.using_mmu_notifiers)
-				hptep[0] |= HPTE_V_ABSENT;
+				hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
 			kvmppc_invalidate_hpte(kvm, hptep, i);
 			/* Harvest R and C */
-			rcbits = hptep[1] & (HPTE_R_R | HPTE_R_C);
+			rcbits = be64_to_cpu(hptep[1]) & (HPTE_R_R | HPTE_R_C);
 			*rmapp |= rcbits << KVMPPC_RMAP_RC_SHIFT;
 			if (rcbits & ~rev[i].guest_rpte) {
 				rev[i].guest_rpte = ptel | rcbits;
@@ -914,7 +909,7 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
 			}
 		}
 		unlock_rmap(rmapp);
-		hptep[0] &= ~HPTE_V_HVLOCK;
+		hptep[0] &= ~cpu_to_be64(HPTE_V_HVLOCK);
 	}
 	return 0;
 }
@@ -961,7 +956,7 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 {
 	struct revmap_entry *rev = kvm->arch.revmap;
 	unsigned long head, i, j;
-	unsigned long *hptep;
+	__be64 *hptep;
 	int ret = 0;
 
  retry:
@@ -977,23 +972,24 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 
 	i = head = *rmapp & KVMPPC_RMAP_INDEX;
 	do {
-		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
+		hptep = (__be64 *) (kvm->arch.hpt_virt + (i << 4));
 		j = rev[i].forw;
 
 		/* If this HPTE isn't referenced, ignore it */
-		if (!(hptep[1] & HPTE_R_R))
+		if (!(be64_to_cpu(hptep[1]) & HPTE_R_R))
 			continue;
 
 		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
 			/* unlock rmap before spinning on the HPTE lock */
 			unlock_rmap(rmapp);
-			while (hptep[0] & HPTE_V_HVLOCK)
+			while (be64_to_cpu(hptep[0]) & HPTE_V_HVLOCK)
 				cpu_relax();
 			goto retry;
 		}
 
 		/* Now check and modify the HPTE */
-		if ((hptep[0] & HPTE_V_VALID) && (hptep[1] & HPTE_R_R)) {
+		if ((be64_to_cpu(hptep[0]) & HPTE_V_VALID) &&
+		    (be64_to_cpu(hptep[1]) & HPTE_R_R)) {
 			kvmppc_clear_ref_hpte(kvm, hptep, i);
 			if (!(rev[i].guest_rpte & HPTE_R_R)) {
 				rev[i].guest_rpte |= HPTE_R_R;
@@ -1001,7 +997,7 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 			}
 			ret = 1;
 		}
-		hptep[0] &= ~HPTE_V_HVLOCK;
+		hptep[0] &= ~cpu_to_be64(HPTE_V_HVLOCK);
 	} while ((i = j) != head);
 
 	unlock_rmap(rmapp);
@@ -1035,7 +1031,7 @@ static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 		do {
 			hp = (unsigned long *)(kvm->arch.hpt_virt + (i << 4));
 			j = rev[i].forw;
-			if (hp[1] & HPTE_R_R)
+			if (be64_to_cpu(hp[1]) & HPTE_R_R)
 				goto out;
 		} while ((i = j) != head);
 	}
@@ -1075,7 +1071,7 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp)
 	unsigned long head, i, j;
 	unsigned long n;
 	unsigned long v, r;
-	unsigned long *hptep;
+	__be64 *hptep;
 	int npages_dirty = 0;
 
  retry:
@@ -1091,7 +1087,8 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp)
 
 	i = head = *rmapp & KVMPPC_RMAP_INDEX;
 	do {
-		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
+		unsigned long hptep1;
+		hptep = (__be64 *) (kvm->arch.hpt_virt + (i << 4));
 		j = rev[i].forw;
 
 		/*
@@ -1108,29 +1105,30 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp)
 		 * Otherwise we need to do the tlbie even if C==0 in
 		 * order to pick up any delayed writeback of C.
 		 */
-		if (!(hptep[1] & HPTE_R_C) &&
-		    (!hpte_is_writable(hptep[1]) || vcpus_running(kvm)))
+		hptep1 = be64_to_cpu(hptep[1]);
+		if (!(hptep1 & HPTE_R_C) &&
+		    (!hpte_is_writable(hptep1) || vcpus_running(kvm)))
 			continue;
 
 		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
 			/* unlock rmap before spinning on the HPTE lock */
 			unlock_rmap(rmapp);
-			while (hptep[0] & HPTE_V_HVLOCK)
+			while (hptep[0] & cpu_to_be64(HPTE_V_HVLOCK))
 				cpu_relax();
 			goto retry;
 		}
 
 		/* Now check and modify the HPTE */
-		if (!(hptep[0] & HPTE_V_VALID))
+		if (!(hptep[0] & cpu_to_be64(HPTE_V_VALID)))
 			continue;
 
 		/* need to make it temporarily absent so C is stable */
-		hptep[0] |= HPTE_V_ABSENT;
+		hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
 		kvmppc_invalidate_hpte(kvm, hptep, i);
-		v = hptep[0];
-		r = hptep[1];
+		v = be64_to_cpu(hptep[0]);
+		r = be64_to_cpu(hptep[1]);
 		if (r & HPTE_R_C) {
-			hptep[1] = r & ~HPTE_R_C;
+			hptep[1] = cpu_to_be64(r & ~HPTE_R_C);
 			if (!(rev[i].guest_rpte & HPTE_R_C)) {
 				rev[i].guest_rpte |= HPTE_R_C;
 				note_hpte_modification(kvm, &rev[i]);
@@ -1143,7 +1141,7 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp)
 		}
 		v &= ~(HPTE_V_ABSENT | HPTE_V_HVLOCK);
 		v |= HPTE_V_VALID;
-		hptep[0] = v;
+		hptep[0] = cpu_to_be64(v);
 	} while ((i = j) != head);
 
 	unlock_rmap(rmapp);
@@ -1307,7 +1305,7 @@ struct kvm_htab_ctx {
  * Returns 1 if this HPT entry has been modified or has pending
  * R/C bit changes.
  */
-static int hpte_dirty(struct revmap_entry *revp, unsigned long *hptp)
+static int hpte_dirty(struct revmap_entry *revp, __be64 *hptp)
 {
 	unsigned long rcbits_unset;
 
@@ -1316,13 +1314,14 @@ static int hpte_dirty(struct revmap_entry *revp, unsigned long *hptp)
 
 	/* Also need to consider changes in reference and changed bits */
 	rcbits_unset = ~revp->guest_rpte & (HPTE_R_R | HPTE_R_C);
-	if ((hptp[0] & HPTE_V_VALID) && (hptp[1] & rcbits_unset))
+	if ((be64_to_cpu(hptp[0]) & HPTE_V_VALID) &&
+	    (be64_to_cpu(hptp[1]) & rcbits_unset))
 		return 1;
 
 	return 0;
 }
 
-static long record_hpte(unsigned long flags, unsigned long *hptp,
+static long record_hpte(unsigned long flags, __be64 *hptp,
 			unsigned long *hpte, struct revmap_entry *revp,
 			int want_valid, int first_pass)
 {
@@ -1337,10 +1336,10 @@ static long record_hpte(unsigned long flags, unsigned long *hptp,
 		return 0;
 
 	valid = 0;
-	if (hptp[0] & (HPTE_V_VALID | HPTE_V_ABSENT)) {
+	if (be64_to_cpu(hptp[0]) & (HPTE_V_VALID | HPTE_V_ABSENT)) {
 		valid = 1;
 		if ((flags & KVM_GET_HTAB_BOLTED_ONLY) &&
-		    !(hptp[0] & HPTE_V_BOLTED))
+		    !(be64_to_cpu(hptp[0]) & HPTE_V_BOLTED))
 			valid = 0;
 	}
 	if (valid != want_valid)
@@ -1352,7 +1351,7 @@ static long record_hpte(unsigned long flags, unsigned long *hptp,
 		preempt_disable();
 		while (!try_lock_hpte(hptp, HPTE_V_HVLOCK))
 			cpu_relax();
-		v = hptp[0];
+		v = be64_to_cpu(hptp[0]);
 
 		/* re-evaluate valid and dirty from synchronized HPTE value */
 		valid = !!(v & HPTE_V_VALID);
@@ -1360,9 +1359,9 @@ static long record_hpte(unsigned long flags, unsigned long *hptp,
 
 		/* Harvest R and C into guest view if necessary */
 		rcbits_unset = ~revp->guest_rpte & (HPTE_R_R | HPTE_R_C);
-		if (valid && (rcbits_unset & hptp[1])) {
-			revp->guest_rpte |= (hptp[1] & (HPTE_R_R | HPTE_R_C)) |
-				HPTE_GR_MODIFIED;
+		if (valid && (rcbits_unset & be64_to_cpu(hptp[1]))) {
+			revp->guest_rpte |= (be64_to_cpu(hptp[1]) &
+				(HPTE_R_R | HPTE_R_C)) | HPTE_GR_MODIFIED;
 			dirty = 1;
 		}
 
@@ -1381,13 +1380,13 @@ static long record_hpte(unsigned long flags, unsigned long *hptp,
 			revp->guest_rpte = r;
 		}
 		asm volatile(PPC_RELEASE_BARRIER "" : : : "memory");
-		hptp[0] &= ~HPTE_V_HVLOCK;
+		hptp[0] &= ~cpu_to_be64(HPTE_V_HVLOCK);
 		preempt_enable();
 		if (!(valid == want_valid && (first_pass || dirty)))
 			ok = 0;
 	}
-	hpte[0] = v;
-	hpte[1] = r;
+	hpte[0] = cpu_to_be64(v);
+	hpte[1] = cpu_to_be64(r);
 	return ok;
 }
 
@@ -1397,7 +1396,7 @@ static ssize_t kvm_htab_read(struct file *file, char __user *buf,
 	struct kvm_htab_ctx *ctx = file->private_data;
 	struct kvm *kvm = ctx->kvm;
 	struct kvm_get_htab_header hdr;
-	unsigned long *hptp;
+	__be64 *hptp;
 	struct revmap_entry *revp;
 	unsigned long i, nb, nw;
 	unsigned long __user *lbuf;
@@ -1413,7 +1412,7 @@ static ssize_t kvm_htab_read(struct file *file, char __user *buf,
 	flags = ctx->flags;
 
 	i = ctx->index;
-	hptp = (unsigned long *)(kvm->arch.hpt_virt + (i * HPTE_SIZE));
+	hptp = (__be64 *)(kvm->arch.hpt_virt + (i * HPTE_SIZE));
 	revp = kvm->arch.revmap + i;
 	lbuf = (unsigned long __user *)buf;
 
@@ -1497,7 +1496,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
 	unsigned long i, j;
 	unsigned long v, r;
 	unsigned long __user *lbuf;
-	unsigned long *hptp;
+	__be64 *hptp;
 	unsigned long tmp[2];
 	ssize_t nb;
 	long int err, ret;
@@ -1539,7 +1538,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
 		    i + hdr.n_valid + hdr.n_invalid > kvm->arch.hpt_npte)
 			break;
 
-		hptp = (unsigned long *)(kvm->arch.hpt_virt + (i * HPTE_SIZE));
+		hptp = (__be64 *)(kvm->arch.hpt_virt + (i * HPTE_SIZE));
 		lbuf = (unsigned long __user *)buf;
 		for (j = 0; j < hdr.n_valid; ++j) {
 			err = -EFAULT;
@@ -1551,7 +1550,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
 			lbuf += 2;
 			nb += HPTE_SIZE;
 
-			if (hptp[0] & (HPTE_V_VALID | HPTE_V_ABSENT))
+			if (be64_to_cpu(hptp[0]) & (HPTE_V_VALID | HPTE_V_ABSENT))
 				kvmppc_do_h_remove(kvm, 0, i, 0, tmp);
 			err = -EIO;
 			ret = kvmppc_virtmode_do_h_enter(kvm, H_EXACT, i, v, r,
@@ -1577,7 +1576,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
 		}
 
 		for (j = 0; j < hdr.n_invalid; ++j) {
-			if (hptp[0] & (HPTE_V_VALID | HPTE_V_ABSENT))
+			if (be64_to_cpu(hptp[0]) & (HPTE_V_VALID | HPTE_V_ABSENT))
 				kvmppc_do_h_remove(kvm, 0, i, 0, tmp);
 			++i;
 			hptp += 2;

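A note on the conversion pattern used throughout the HTAB hunks above: the hash page table itself always stays big-endian, so reads swap the whole doubleword once with be64_to_cpu() and then test flags in host order, while in-place updates swap the constant instead (for example the recurring ~cpu_to_be64(HPTE_V_HVLOCK)), which flips the same physical bits without a double swap. Here is a minimal userspace sketch of that identity, using htobe64()/be64toh() from endian.h as stand-ins for the kernel helpers, with an illustrative flag value:

    #include <endian.h>
    #include <stdint.h>
    #include <stdio.h>

    #define HPTE_V_HVLOCK 0x40UL   /* illustrative; the real value lives in kvm_book3s_64.h */

    int main(void)
    {
            /* HPT entry as it sits in memory: always big-endian */
            uint64_t hpte_be = htobe64(0x1234500000000000UL | HPTE_V_HVLOCK);

            /* Reading: swap once, then test in host byte order */
            uint64_t v = be64toh(hpte_be);
            printf("locked? %d\n", (v & HPTE_V_HVLOCK) != 0);

            /* Updating in place: swap the constant, not the entry */
            hpte_be &= ~htobe64(HPTE_V_HVLOCK);
            printf("locked after unlock? %d\n",
                   (be64toh(hpte_be) & HPTE_V_HVLOCK) != 0);
            return 0;
    }

Setting or clearing bits commutes with byte swapping, which is why the in-place forms never need to convert the entry itself.
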
@@ -439,12 +439,6 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val)
 		    (mfmsr() & MSR_HV))
 			vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
 		break;
-	case SPRN_PURR:
-		to_book3s(vcpu)->purr_offset = spr_val - get_tb();
-		break;
-	case SPRN_SPURR:
-		to_book3s(vcpu)->spurr_offset = spr_val - get_tb();
-		break;
 	case SPRN_GQR0:
 	case SPRN_GQR1:
 	case SPRN_GQR2:
@@ -455,10 +449,10 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val)
 	case SPRN_GQR7:
 		to_book3s(vcpu)->gqr[sprn - SPRN_GQR0] = spr_val;
 		break;
-	case SPRN_FSCR:
-		vcpu->arch.fscr = spr_val;
-		break;
 #ifdef CONFIG_PPC_BOOK3S_64
+	case SPRN_FSCR:
+		kvmppc_set_fscr(vcpu, spr_val);
+		break;
 	case SPRN_BESCR:
 		vcpu->arch.bescr = spr_val;
 		break;
@@ -572,10 +566,22 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val
 		*spr_val = 0;
 		break;
 	case SPRN_PURR:
-		*spr_val = get_tb() + to_book3s(vcpu)->purr_offset;
+		/*
+		 * On exit we would have updated purr
+		 */
+		*spr_val = vcpu->arch.purr;
 		break;
 	case SPRN_SPURR:
-		*spr_val = get_tb() + to_book3s(vcpu)->purr_offset;
+		/*
+		 * On exit we would have updated spurr
+		 */
+		*spr_val = vcpu->arch.spurr;
 		break;
+	case SPRN_VTB:
+		*spr_val = vcpu->arch.vtb;
+		break;
+	case SPRN_IC:
+		*spr_val = vcpu->arch.ic;
+		break;
 	case SPRN_GQR0:
 	case SPRN_GQR1:
@@ -587,10 +593,10 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val
 	case SPRN_GQR7:
 		*spr_val = to_book3s(vcpu)->gqr[sprn - SPRN_GQR0];
 		break;
+#ifdef CONFIG_PPC_BOOK3S_64
 	case SPRN_FSCR:
 		*spr_val = vcpu->arch.fscr;
 		break;
-#ifdef CONFIG_PPC_BOOK3S_64
 	case SPRN_BESCR:
 		*spr_val = vcpu->arch.bescr;
 		break;

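The mtspr cases for PURR and SPURR are removed because the old offset-based scheme computed the value from the timebase at read time and so kept ticking while the vcpu was descheduled; mfspr now simply reports vcpu->arch.purr/spurr, which the book3s_pr.c hunks later in this series accumulate from timebase snapshots taken around each guest entry. Roughly, with hypothetical field names standing in for the vcpu state:

    #include <stdint.h>

    /* Hypothetical stand-ins for the vcpu fields; get_tb() models the
     * PowerPC timebase read with a fake counter. */
    struct vcpu_times {
            uint64_t purr, spurr;   /* guest-visible accumulators */
            uint64_t entry_tb;      /* snapshot taken at guest entry */
    };

    static uint64_t get_tb(void)
    {
            static uint64_t fake_tb;
            return fake_tb += 100;  /* pretend time passes */
    }

    static void on_guest_entry(struct vcpu_times *v)
    {
            v->entry_tb = get_tb();
    }

    static void on_guest_exit(struct vcpu_times *v)
    {
            /* charge only the time actually spent inside the guest */
            v->purr  += get_tb() - v->entry_tb;
            v->spurr += get_tb() - v->entry_tb;
    }
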
@@ -35,6 +35,7 @@
 
 #include <asm/reg.h>
 #include <asm/cputable.h>
+#include <asm/cache.h>
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
 #include <asm/uaccess.h>
@@ -67,6 +68,15 @@
 /* Used as a "null" value for timebase values */
 #define TB_NIL	(~(u64)0)
 
+static DECLARE_BITMAP(default_enabled_hcalls, MAX_HCALL_OPCODE/4 + 1);
+
+#if defined(CONFIG_PPC_64K_PAGES)
+#define MPP_BUFFER_ORDER	0
+#elif defined(CONFIG_PPC_4K_PAGES)
+#define MPP_BUFFER_ORDER	3
+#endif
+
+
 static void kvmppc_end_cede(struct kvm_vcpu *vcpu);
 static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu);
 
@@ -270,7 +280,7 @@ struct kvm_vcpu *kvmppc_find_vcpu(struct kvm *kvm, int id)
 static void init_vpa(struct kvm_vcpu *vcpu, struct lppaca *vpa)
 {
 	vpa->__old_status |= LPPACA_OLD_SHARED_PROC;
-	vpa->yield_count = 1;
+	vpa->yield_count = cpu_to_be32(1);
 }
 
 static int set_vpa(struct kvm_vcpu *vcpu, struct kvmppc_vpa *v,
@@ -293,8 +303,8 @@ static int set_vpa(struct kvm_vcpu *vcpu, struct kvmppc_vpa *v,
 struct reg_vpa {
 	u32 dummy;
 	union {
-		u16 hword;
-		u32 word;
+		__be16 hword;
+		__be32 word;
 	} length;
 };
 
@@ -333,9 +343,9 @@ static unsigned long do_h_register_vpa(struct kvm_vcpu *vcpu,
 		if (va == NULL)
 			return H_PARAMETER;
 		if (subfunc == H_VPA_REG_VPA)
-			len = ((struct reg_vpa *)va)->length.hword;
+			len = be16_to_cpu(((struct reg_vpa *)va)->length.hword);
 		else
-			len = ((struct reg_vpa *)va)->length.word;
+			len = be32_to_cpu(((struct reg_vpa *)va)->length.word);
 		kvmppc_unpin_guest_page(kvm, va, vpa, false);
 
 		/* Check length */
@@ -540,21 +550,63 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
 		return;
 	memset(dt, 0, sizeof(struct dtl_entry));
 	dt->dispatch_reason = 7;
-	dt->processor_id = vc->pcpu + vcpu->arch.ptid;
-	dt->timebase = now + vc->tb_offset;
-	dt->enqueue_to_dispatch_time = stolen;
-	dt->srr0 = kvmppc_get_pc(vcpu);
-	dt->srr1 = vcpu->arch.shregs.msr;
+	dt->processor_id = cpu_to_be16(vc->pcpu + vcpu->arch.ptid);
+	dt->timebase = cpu_to_be64(now + vc->tb_offset);
+	dt->enqueue_to_dispatch_time = cpu_to_be32(stolen);
+	dt->srr0 = cpu_to_be64(kvmppc_get_pc(vcpu));
+	dt->srr1 = cpu_to_be64(vcpu->arch.shregs.msr);
 	++dt;
 	if (dt == vcpu->arch.dtl.pinned_end)
 		dt = vcpu->arch.dtl.pinned_addr;
 	vcpu->arch.dtl_ptr = dt;
 	/* order writing *dt vs. writing vpa->dtl_idx */
 	smp_wmb();
-	vpa->dtl_idx = ++vcpu->arch.dtl_index;
+	vpa->dtl_idx = cpu_to_be64(++vcpu->arch.dtl_index);
 	vcpu->arch.dtl.dirty = true;
 }
 
+static bool kvmppc_power8_compatible(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.vcore->arch_compat >= PVR_ARCH_207)
+		return true;
+	if ((!vcpu->arch.vcore->arch_compat) &&
+	    cpu_has_feature(CPU_FTR_ARCH_207S))
+		return true;
+	return false;
+}
+
+static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
+			     unsigned long resource, unsigned long value1,
+			     unsigned long value2)
+{
+	switch (resource) {
+	case H_SET_MODE_RESOURCE_SET_CIABR:
+		if (!kvmppc_power8_compatible(vcpu))
+			return H_P2;
+		if (value2)
+			return H_P4;
+		if (mflags)
+			return H_UNSUPPORTED_FLAG_START;
+		/* Guests can't breakpoint the hypervisor */
+		if ((value1 & CIABR_PRIV) == CIABR_PRIV_HYPER)
+			return H_P3;
+		vcpu->arch.ciabr  = value1;
+		return H_SUCCESS;
+	case H_SET_MODE_RESOURCE_SET_DAWR:
+		if (!kvmppc_power8_compatible(vcpu))
+			return H_P2;
+		if (mflags)
+			return H_UNSUPPORTED_FLAG_START;
+		if (value2 & DABRX_HYP)
+			return H_P4;
+		vcpu->arch.dawr  = value1;
+		vcpu->arch.dawrx = value2;
+		return H_SUCCESS;
+	default:
+		return H_TOO_HARD;
+	}
+}
+
 int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 {
 	unsigned long req = kvmppc_get_gpr(vcpu, 3);
@@ -562,6 +614,10 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu *tvcpu;
 	int idx, rc;
 
+	if (req <= MAX_HCALL_OPCODE &&
+	    !test_bit(req/4, vcpu->kvm->arch.enabled_hcalls))
+		return RESUME_HOST;
+
 	switch (req) {
 	case H_ENTER:
 		idx = srcu_read_lock(&vcpu->kvm->srcu);
@@ -620,7 +676,14 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 
 		/* Send the error out to userspace via KVM_RUN */
 		return rc;
-
+	case H_SET_MODE:
+		ret = kvmppc_h_set_mode(vcpu, kvmppc_get_gpr(vcpu, 4),
+					kvmppc_get_gpr(vcpu, 5),
+					kvmppc_get_gpr(vcpu, 6),
+					kvmppc_get_gpr(vcpu, 7));
+		if (ret == H_TOO_HARD)
+			return RESUME_HOST;
+		break;
 	case H_XIRR:
 	case H_CPPR:
 	case H_EOI:
@@ -639,6 +702,29 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 	return RESUME_GUEST;
 }
 
+static int kvmppc_hcall_impl_hv(unsigned long cmd)
+{
+	switch (cmd) {
+	case H_CEDE:
+	case H_PROD:
+	case H_CONFER:
+	case H_REGISTER_VPA:
+	case H_SET_MODE:
+#ifdef CONFIG_KVM_XICS
+	case H_XIRR:
+	case H_CPPR:
+	case H_EOI:
+	case H_IPI:
+	case H_IPOLL:
+	case H_XIRR_X:
+#endif
+		return 1;
+	}
+
+	/* See if it's in the real-mode table */
+	return kvmppc_hcall_impl_hv_realmode(cmd);
+}
+
 static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
 				 struct task_struct *tsk)
 {
@@ -785,7 +871,8 @@ static int kvm_arch_vcpu_ioctl_set_sregs_hv(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr)
+static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
+		bool preserve_top32)
 {
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
 	u64 mask;
@@ -820,6 +907,10 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr)
 	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC;
 	if (cpu_has_feature(CPU_FTR_ARCH_207S))
 		mask |= LPCR_AIL;
+
+	/* Broken 32-bit version of LPCR must not clear top bits */
+	if (preserve_top32)
+		mask &= 0xFFFFFFFF;
 	vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask);
 	spin_unlock(&vc->lock);
 }
@@ -894,12 +985,6 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
 	case KVM_REG_PPC_CIABR:
 		*val = get_reg_val(id, vcpu->arch.ciabr);
 		break;
-	case KVM_REG_PPC_IC:
-		*val = get_reg_val(id, vcpu->arch.ic);
-		break;
-	case KVM_REG_PPC_VTB:
-		*val = get_reg_val(id, vcpu->arch.vtb);
-		break;
 	case KVM_REG_PPC_CSIGR:
 		*val = get_reg_val(id, vcpu->arch.csigr);
 		break;
@@ -939,6 +1024,7 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
 		*val = get_reg_val(id, vcpu->arch.vcore->tb_offset);
 		break;
 	case KVM_REG_PPC_LPCR:
+	case KVM_REG_PPC_LPCR_64:
 		*val = get_reg_val(id, vcpu->arch.vcore->lpcr);
 		break;
 	case KVM_REG_PPC_PPR:
@@ -1094,12 +1180,6 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
 		if ((vcpu->arch.ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
 			vcpu->arch.ciabr &= ~CIABR_PRIV;	/* disable */
 		break;
-	case KVM_REG_PPC_IC:
-		vcpu->arch.ic = set_reg_val(id, *val);
-		break;
-	case KVM_REG_PPC_VTB:
-		vcpu->arch.vtb = set_reg_val(id, *val);
-		break;
 	case KVM_REG_PPC_CSIGR:
 		vcpu->arch.csigr = set_reg_val(id, *val);
 		break;
@@ -1150,7 +1230,10 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
 			ALIGN(set_reg_val(id, *val), 1UL << 24);
 		break;
 	case KVM_REG_PPC_LPCR:
-		kvmppc_set_lpcr(vcpu, set_reg_val(id, *val));
+		kvmppc_set_lpcr(vcpu, set_reg_val(id, *val), true);
+		break;
+	case KVM_REG_PPC_LPCR_64:
+		kvmppc_set_lpcr(vcpu, set_reg_val(id, *val), false);
 		break;
 	case KVM_REG_PPC_PPR:
 		vcpu->arch.ppr = set_reg_val(id, *val);
@@ -1228,6 +1311,33 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
 	return r;
 }
 
+static struct kvmppc_vcore *kvmppc_vcore_create(struct kvm *kvm, int core)
+{
+	struct kvmppc_vcore *vcore;
+
+	vcore = kzalloc(sizeof(struct kvmppc_vcore), GFP_KERNEL);
+
+	if (vcore == NULL)
+		return NULL;
+
+	INIT_LIST_HEAD(&vcore->runnable_threads);
+	spin_lock_init(&vcore->lock);
+	init_waitqueue_head(&vcore->wq);
+	vcore->preempt_tb = TB_NIL;
+	vcore->lpcr = kvm->arch.lpcr;
+	vcore->first_vcpuid = core * threads_per_subcore;
+	vcore->kvm = kvm;
+
+	vcore->mpp_buffer_is_valid = false;
+
+	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+		vcore->mpp_buffer = (void *)__get_free_pages(
+			GFP_KERNEL|__GFP_ZERO,
+			MPP_BUFFER_ORDER);
+
+	return vcore;
+}
+
 static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
 						   unsigned int id)
 {
@@ -1279,16 +1389,7 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
 	mutex_lock(&kvm->lock);
 	vcore = kvm->arch.vcores[core];
 	if (!vcore) {
-		vcore = kzalloc(sizeof(struct kvmppc_vcore), GFP_KERNEL);
-		if (vcore) {
-			INIT_LIST_HEAD(&vcore->runnable_threads);
-			spin_lock_init(&vcore->lock);
-			init_waitqueue_head(&vcore->wq);
-			vcore->preempt_tb = TB_NIL;
-			vcore->lpcr = kvm->arch.lpcr;
-			vcore->first_vcpuid = core * threads_per_subcore;
-			vcore->kvm = kvm;
-		}
+		vcore = kvmppc_vcore_create(kvm, core);
 		kvm->arch.vcores[core] = vcore;
 		kvm->arch.online_vcores++;
 	}
@@ -1500,6 +1601,33 @@ static int on_primary_thread(void)
 	return 1;
 }
 
+static void kvmppc_start_saving_l2_cache(struct kvmppc_vcore *vc)
+{
+	phys_addr_t phy_addr, mpp_addr;
+
+	phy_addr = (phys_addr_t)virt_to_phys(vc->mpp_buffer);
+	mpp_addr = phy_addr & PPC_MPPE_ADDRESS_MASK;
+
+	mtspr(SPRN_MPPR, mpp_addr | PPC_MPPR_FETCH_ABORT);
+	logmpp(mpp_addr | PPC_LOGMPP_LOG_L2);
+
+	vc->mpp_buffer_is_valid = true;
+}
+
+static void kvmppc_start_restoring_l2_cache(const struct kvmppc_vcore *vc)
+{
+	phys_addr_t phy_addr, mpp_addr;
+
+	phy_addr = virt_to_phys(vc->mpp_buffer);
+	mpp_addr = phy_addr & PPC_MPPE_ADDRESS_MASK;
+
+	/* We must abort any in-progress save operations to ensure
+	 * the table is valid so that prefetch engine knows when to
+	 * stop prefetching. */
+	logmpp(mpp_addr | PPC_LOGMPP_LOG_ABORT);
+	mtspr(SPRN_MPPR, mpp_addr | PPC_MPPR_FETCH_WHOLE_TABLE);
+}
+
 /*
  * Run a set of guest threads on a physical core.
  * Called with vc->lock held.
@@ -1577,9 +1705,16 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
 
 	srcu_idx = srcu_read_lock(&vc->kvm->srcu);
 
+	if (vc->mpp_buffer_is_valid)
+		kvmppc_start_restoring_l2_cache(vc);
+
 	__kvmppc_vcore_entry();
 
 	spin_lock(&vc->lock);
+
+	if (vc->mpp_buffer)
+		kvmppc_start_saving_l2_cache(vc);
+
 	/* disable sending of IPIs on virtual external irqs */
 	list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list)
 		vcpu->cpu = -1;
@@ -1929,12 +2064,6 @@ static void kvmppc_add_seg_page_size(struct kvm_ppc_one_seg_page_size **sps,
 	(*sps)->page_shift = def->shift;
 	(*sps)->slb_enc = def->sllp;
 	(*sps)->enc[0].page_shift = def->shift;
-	/*
-	 * Only return base page encoding. We don't want to return
-	 * all the supporting pte_enc, because our H_ENTER doesn't
-	 * support MPSS yet. Once they do, we can start passing all
-	 * support pte_enc here
-	 */
 	(*sps)->enc[0].pte_enc = def->penc[linux_psize];
 	/*
 	 * Add 16MB MPSS support if host supports it
@@ -2281,6 +2410,10 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
 	 */
 	cpumask_setall(&kvm->arch.need_tlb_flush);
 
+	/* Start out with the default set of hcalls enabled */
+	memcpy(kvm->arch.enabled_hcalls, default_enabled_hcalls,
+	       sizeof(kvm->arch.enabled_hcalls));
+
 	kvm->arch.rma = NULL;
 
 	kvm->arch.host_sdr1 = mfspr(SPRN_SDR1);
@@ -2323,8 +2456,14 @@ static void kvmppc_free_vcores(struct kvm *kvm)
 {
 	long int i;
 
-	for (i = 0; i < KVM_MAX_VCORES; ++i)
+	for (i = 0; i < KVM_MAX_VCORES; ++i) {
+		if (kvm->arch.vcores[i] && kvm->arch.vcores[i]->mpp_buffer) {
+			struct kvmppc_vcore *vc = kvm->arch.vcores[i];
+			free_pages((unsigned long)vc->mpp_buffer,
+				   MPP_BUFFER_ORDER);
+		}
 		kfree(kvm->arch.vcores[i]);
+	}
 	kvm->arch.online_vcores = 0;
 }
 
@@ -2419,6 +2558,49 @@ static long kvm_arch_vm_ioctl_hv(struct file *filp,
 	return r;
 }
 
+/*
+ * List of hcall numbers to enable by default.
+ * For compatibility with old userspace, we enable by default
+ * all hcalls that were implemented before the hcall-enabling
+ * facility was added.  Note this list should not include H_RTAS.
+ */
+static unsigned int default_hcall_list[] = {
+	H_REMOVE,
+	H_ENTER,
+	H_READ,
+	H_PROTECT,
+	H_BULK_REMOVE,
+	H_GET_TCE,
+	H_PUT_TCE,
+	H_SET_DABR,
+	H_SET_XDABR,
+	H_CEDE,
+	H_PROD,
+	H_CONFER,
+	H_REGISTER_VPA,
+#ifdef CONFIG_KVM_XICS
+	H_EOI,
+	H_CPPR,
+	H_IPI,
+	H_IPOLL,
+	H_XIRR,
+	H_XIRR_X,
+#endif
+	0
+};
+
+static void init_default_hcalls(void)
+{
+	int i;
+	unsigned int hcall;
+
+	for (i = 0; default_hcall_list[i]; ++i) {
+		hcall = default_hcall_list[i];
+		WARN_ON(!kvmppc_hcall_impl_hv(hcall));
+		__set_bit(hcall / 4, default_enabled_hcalls);
+	}
+}
+
 static struct kvmppc_ops kvm_ops_hv = {
 	.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
 	.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
@@ -2451,6 +2633,7 @@ static struct kvmppc_ops kvm_ops_hv = {
 	.emulate_mfspr = kvmppc_core_emulate_mfspr_hv,
 	.fast_vcpu_kick = kvmppc_fast_vcpu_kick_hv,
 	.arch_vm_ioctl  = kvm_arch_vm_ioctl_hv,
+	.hcall_implemented = kvmppc_hcall_impl_hv,
 };
 
 static int kvmppc_book3s_init_hv(void)
@@ -2466,6 +2649,8 @@ static int kvmppc_book3s_init_hv(void)
 	kvm_ops_hv.owner = THIS_MODULE;
 	kvmppc_hv_ops = &kvm_ops_hv;
 
+	init_default_hcalls();
+
 	r = kvmppc_mmu_hv_init();
 	return r;
 }

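One detail of the hcall-enabling machinery above that is easy to miss: POWER hypercall numbers are always multiples of 4, so req / 4 gives a dense index into the enabled_hcalls bitmap. A userspace model of the lookup, where the MAX_HCALL_OPCODE value is only a placeholder:

    #include <stdint.h>
    #include <stdbool.h>

    #define MAX_HCALL_OPCODE 0x140          /* placeholder for the sketch */
    static uint64_t enabled_hcalls[(MAX_HCALL_OPCODE / 4) / 64 + 1];

    static void enable_hcall(unsigned int req)
    {
            unsigned int bit = req / 4;     /* hcall numbers step by 4 */
            enabled_hcalls[bit >> 6] |= 1ULL << (bit & 0x3f);
    }

    static bool hcall_enabled(unsigned int req)
    {
            unsigned int bit = req / 4;
            return (enabled_hcalls[bit >> 6] >> (bit & 0x3f)) & 1;
    }

This matches the srdi/sldi/rlwinm sequence added to hcall_try_real_mode further down: (r3 / 4) >> 6 selects the 64-bit word and (r3 / 4) & 0x3f the bit within it.
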
@@ -212,3 +212,16 @@ bool kvm_hv_mode_active(void)
 {
 	return atomic_read(&hv_vm_count) != 0;
 }
+
+extern int hcall_real_table[], hcall_real_table_end[];
+
+int kvmppc_hcall_impl_hv_realmode(unsigned long cmd)
+{
+	cmd /= 4;
+	if (cmd < hcall_real_table_end - hcall_real_table &&
+	    hcall_real_table[cmd])
+		return 1;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvmppc_hcall_impl_hv_realmode);

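kvmppc_hcall_impl_hv_realmode() works because hcall_real_table is laid out as 4-byte entries holding handler-minus-table offsets, with zero meaning "no real-mode handler"; the assembly adds the offset back onto the table address before branching to it. A small C sketch of the same dispatch shape, using plain function pointers where the real table uses self-relative offsets:

    #include <stddef.h>

    typedef long (*hcall_fn)(void);

    static long h_cede(void) { return 0; }
    static long h_prod(void) { return 0; }

    /* One slot per hcall number / 4; a null slot means "not handled
     * in real mode", mirroring a zero .long in hcall_real_table. */
    static hcall_fn table[] = { h_cede, NULL, h_prod };

    static long do_hcall(unsigned int cmd)
    {
            unsigned int idx = cmd / 4;
            if (idx >= sizeof(table) / sizeof(table[0]) || !table[idx])
                    return -1;      /* fall back to virtual-mode handling */
            return table[idx]();
    }

Storing 32-bit self-relative offsets rather than absolute pointers keeps each real entry to one .long and makes the table position-independent, which matters for code that runs in real mode.
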
@@ -45,14 +45,14 @@ static void reload_slb(struct kvm_vcpu *vcpu)
 		return;
 
 	/* Sanity check */
-	n = min_t(u32, slb->persistent, SLB_MIN_SIZE);
+	n = min_t(u32, be32_to_cpu(slb->persistent), SLB_MIN_SIZE);
 	if ((void *) &slb->save_area[n] > vcpu->arch.slb_shadow.pinned_end)
 		return;
 
 	/* Load up the SLB from that */
 	for (i = 0; i < n; ++i) {
-		unsigned long rb = slb->save_area[i].esid;
-		unsigned long rs = slb->save_area[i].vsid;
+		unsigned long rb = be64_to_cpu(slb->save_area[i].esid);
+		unsigned long rs = be64_to_cpu(slb->save_area[i].vsid);
 
 		rb = (rb & ~0xFFFul) | i;	/* insert entry number */
 		asm volatile("slbmte %0,%1" : : "r" (rs), "r" (rb));

@@ -154,10 +154,10 @@ static pte_t lookup_linux_pte_and_update(pgd_t *pgdir, unsigned long hva,
 	return kvmppc_read_update_linux_pte(ptep, writing, hugepage_shift);
 }
 
-static inline void unlock_hpte(unsigned long *hpte, unsigned long hpte_v)
+static inline void unlock_hpte(__be64 *hpte, unsigned long hpte_v)
 {
 	asm volatile(PPC_RELEASE_BARRIER "" : : : "memory");
-	hpte[0] = hpte_v;
+	hpte[0] = cpu_to_be64(hpte_v);
 }
 
 long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
@@ -166,7 +166,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 {
 	unsigned long i, pa, gpa, gfn, psize;
 	unsigned long slot_fn, hva;
-	unsigned long *hpte;
+	__be64 *hpte;
 	struct revmap_entry *rev;
 	unsigned long g_ptel;
 	struct kvm_memory_slot *memslot;
@@ -275,9 +275,9 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 		return H_PARAMETER;
 	if (likely((flags & H_EXACT) == 0)) {
 		pte_index &= ~7UL;
-		hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
+		hpte = (__be64 *)(kvm->arch.hpt_virt + (pte_index << 4));
 		for (i = 0; i < 8; ++i) {
-			if ((*hpte & HPTE_V_VALID) == 0 &&
+			if ((be64_to_cpu(*hpte) & HPTE_V_VALID) == 0 &&
 			    try_lock_hpte(hpte, HPTE_V_HVLOCK | HPTE_V_VALID |
 					  HPTE_V_ABSENT))
 				break;
@@ -292,11 +292,13 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 			 */
 			hpte -= 16;
 			for (i = 0; i < 8; ++i) {
+				u64 pte;
 				while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
 					cpu_relax();
-				if (!(*hpte & (HPTE_V_VALID | HPTE_V_ABSENT)))
+				pte = be64_to_cpu(*hpte);
+				if (!(pte & (HPTE_V_VALID | HPTE_V_ABSENT)))
 					break;
-				*hpte &= ~HPTE_V_HVLOCK;
+				*hpte &= ~cpu_to_be64(HPTE_V_HVLOCK);
 				hpte += 2;
 			}
 			if (i == 8)
@@ -304,14 +306,17 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 		}
 		pte_index += i;
 	} else {
-		hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
+		hpte = (__be64 *)(kvm->arch.hpt_virt + (pte_index << 4));
 		if (!try_lock_hpte(hpte, HPTE_V_HVLOCK | HPTE_V_VALID |
 				   HPTE_V_ABSENT)) {
 			/* Lock the slot and check again */
+			u64 pte;
+
 			while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
 				cpu_relax();
-			if (*hpte & (HPTE_V_VALID | HPTE_V_ABSENT)) {
-				*hpte &= ~HPTE_V_HVLOCK;
+			pte = be64_to_cpu(*hpte);
+			if (pte & (HPTE_V_VALID | HPTE_V_ABSENT)) {
+				*hpte &= ~cpu_to_be64(HPTE_V_HVLOCK);
 				return H_PTEG_FULL;
 			}
 		}
@@ -347,11 +352,11 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 		}
 	}
 
-	hpte[1] = ptel;
+	hpte[1] = cpu_to_be64(ptel);
 
 	/* Write the first HPTE dword, unlocking the HPTE and making it valid */
 	eieio();
-	hpte[0] = pteh;
+	hpte[0] = cpu_to_be64(pteh);
 	asm volatile("ptesync" : : : "memory");
 
 	*pte_idx_ret = pte_index;
@@ -468,30 +473,35 @@ long kvmppc_do_h_remove(struct kvm *kvm, unsigned long flags,
 			unsigned long pte_index, unsigned long avpn,
 			unsigned long *hpret)
 {
-	unsigned long *hpte;
+	__be64 *hpte;
 	unsigned long v, r, rb;
 	struct revmap_entry *rev;
+	u64 pte;
 
 	if (pte_index >= kvm->arch.hpt_npte)
 		return H_PARAMETER;
-	hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
+	hpte = (__be64 *)(kvm->arch.hpt_virt + (pte_index << 4));
 	while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
 		cpu_relax();
-	if ((hpte[0] & (HPTE_V_ABSENT | HPTE_V_VALID)) == 0 ||
-	    ((flags & H_AVPN) && (hpte[0] & ~0x7fUL) != avpn) ||
-	    ((flags & H_ANDCOND) && (hpte[0] & avpn) != 0)) {
-		hpte[0] &= ~HPTE_V_HVLOCK;
+	pte = be64_to_cpu(hpte[0]);
+	if ((pte & (HPTE_V_ABSENT | HPTE_V_VALID)) == 0 ||
+	    ((flags & H_AVPN) && (pte & ~0x7fUL) != avpn) ||
+	    ((flags & H_ANDCOND) && (pte & avpn) != 0)) {
+		hpte[0] &= ~cpu_to_be64(HPTE_V_HVLOCK);
 		return H_NOT_FOUND;
 	}
 
 	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
-	v = hpte[0] & ~HPTE_V_HVLOCK;
+	v = pte & ~HPTE_V_HVLOCK;
 	if (v & HPTE_V_VALID) {
-		hpte[0] &= ~HPTE_V_VALID;
-		rb = compute_tlbie_rb(v, hpte[1], pte_index);
+		u64 pte1;
+
+		pte1 = be64_to_cpu(hpte[1]);
+		hpte[0] &= ~cpu_to_be64(HPTE_V_VALID);
+		rb = compute_tlbie_rb(v, pte1, pte_index);
 		do_tlbies(kvm, &rb, 1, global_invalidates(kvm, flags), true);
 		/* Read PTE low word after tlbie to get final R/C values */
-		remove_revmap_chain(kvm, pte_index, rev, v, hpte[1]);
+		remove_revmap_chain(kvm, pte_index, rev, v, pte1);
 	}
 	r = rev->guest_rpte & ~HPTE_GR_RESERVED;
 	note_hpte_modification(kvm, rev);
@@ -514,12 +524,14 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
 	unsigned long *args = &vcpu->arch.gpr[4];
-	unsigned long *hp, *hptes[4], tlbrb[4];
+	__be64 *hp, *hptes[4];
+	unsigned long tlbrb[4];
 	long int i, j, k, n, found, indexes[4];
 	unsigned long flags, req, pte_index, rcbits;
 	int global;
 	long int ret = H_SUCCESS;
 	struct revmap_entry *rev, *revs[4];
+	u64 hp0;
 
 	global = global_invalidates(kvm, 0);
 	for (i = 0; i < 4 && ret == H_SUCCESS; ) {
@@ -542,8 +554,7 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 				ret = H_PARAMETER;
 				break;
 			}
-			hp = (unsigned long *)
-				(kvm->arch.hpt_virt + (pte_index << 4));
+			hp = (__be64 *) (kvm->arch.hpt_virt + (pte_index << 4));
 			/* to avoid deadlock, don't spin except for first */
 			if (!try_lock_hpte(hp, HPTE_V_HVLOCK)) {
 				if (n)
@@ -552,23 +563,24 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 					cpu_relax();
 			}
 			found = 0;
-			if (hp[0] & (HPTE_V_ABSENT | HPTE_V_VALID)) {
+			hp0 = be64_to_cpu(hp[0]);
+			if (hp0 & (HPTE_V_ABSENT | HPTE_V_VALID)) {
 				switch (flags & 3) {
 				case 0:		/* absolute */
 					found = 1;
 					break;
 				case 1:		/* andcond */
-					if (!(hp[0] & args[j + 1]))
+					if (!(hp0 & args[j + 1]))
 						found = 1;
 					break;
 				case 2:		/* AVPN */
-					if ((hp[0] & ~0x7fUL) == args[j + 1])
+					if ((hp0 & ~0x7fUL) == args[j + 1])
 						found = 1;
 					break;
 				}
 			}
 			if (!found) {
-				hp[0] &= ~HPTE_V_HVLOCK;
+				hp[0] &= ~cpu_to_be64(HPTE_V_HVLOCK);
 				args[j] = ((0x90 | flags) << 56) + pte_index;
 				continue;
 			}
@@ -577,7 +589,7 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 			rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
 			note_hpte_modification(kvm, rev);
 
-			if (!(hp[0] & HPTE_V_VALID)) {
+			if (!(hp0 & HPTE_V_VALID)) {
 				/* insert R and C bits from PTE */
 				rcbits = rev->guest_rpte & (HPTE_R_R|HPTE_R_C);
 				args[j] |= rcbits << (56 - 5);
@@ -585,8 +597,10 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 				continue;
 			}
 
-			hp[0] &= ~HPTE_V_VALID;		/* leave it locked */
-			tlbrb[n] = compute_tlbie_rb(hp[0], hp[1], pte_index);
+			/* leave it locked */
+			hp[0] &= ~cpu_to_be64(HPTE_V_VALID);
+			tlbrb[n] = compute_tlbie_rb(be64_to_cpu(hp[0]),
+				be64_to_cpu(hp[1]), pte_index);
 			indexes[n] = j;
 			hptes[n] = hp;
 			revs[n] = rev;
@@ -605,7 +619,8 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 			pte_index = args[j] & ((1ul << 56) - 1);
 			hp = hptes[k];
 			rev = revs[k];
-			remove_revmap_chain(kvm, pte_index, rev, hp[0], hp[1]);
+			remove_revmap_chain(kvm, pte_index, rev,
+				be64_to_cpu(hp[0]), be64_to_cpu(hp[1]));
 			rcbits = rev->guest_rpte & (HPTE_R_R|HPTE_R_C);
 			args[j] |= rcbits << (56 - 5);
 			hp[0] = 0;
@@ -620,23 +635,25 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
 		      unsigned long va)
 {
 	struct kvm *kvm = vcpu->kvm;
-	unsigned long *hpte;
+	__be64 *hpte;
 	struct revmap_entry *rev;
 	unsigned long v, r, rb, mask, bits;
+	u64 pte;
 
 	if (pte_index >= kvm->arch.hpt_npte)
 		return H_PARAMETER;
 
-	hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
+	hpte = (__be64 *)(kvm->arch.hpt_virt + (pte_index << 4));
 	while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
 		cpu_relax();
-	if ((hpte[0] & (HPTE_V_ABSENT | HPTE_V_VALID)) == 0 ||
-	    ((flags & H_AVPN) && (hpte[0] & ~0x7fUL) != avpn)) {
-		hpte[0] &= ~HPTE_V_HVLOCK;
+	pte = be64_to_cpu(hpte[0]);
+	if ((pte & (HPTE_V_ABSENT | HPTE_V_VALID)) == 0 ||
+	    ((flags & H_AVPN) && (pte & ~0x7fUL) != avpn)) {
+		hpte[0] &= ~cpu_to_be64(HPTE_V_HVLOCK);
 		return H_NOT_FOUND;
 	}
 
-	v = hpte[0];
+	v = pte;
 	bits = (flags << 55) & HPTE_R_PP0;
 	bits |= (flags << 48) & HPTE_R_KEY_HI;
 	bits |= flags & (HPTE_R_PP | HPTE_R_N | HPTE_R_KEY_LO);
@@ -650,12 +667,12 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
 		rev->guest_rpte = r;
 		note_hpte_modification(kvm, rev);
 	}
-	r = (hpte[1] & ~mask) | bits;
+	r = (be64_to_cpu(hpte[1]) & ~mask) | bits;
 
 	/* Update HPTE */
 	if (v & HPTE_V_VALID) {
 		rb = compute_tlbie_rb(v, r, pte_index);
-		hpte[0] = v & ~HPTE_V_VALID;
+		hpte[0] = cpu_to_be64(v & ~HPTE_V_VALID);
 		do_tlbies(kvm, &rb, 1, global_invalidates(kvm, flags), true);
 		/*
 		 * If the host has this page as readonly but the guest
@@ -681,9 +698,9 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
 			}
 		}
 	}
-	hpte[1] = r;
+	hpte[1] = cpu_to_be64(r);
 	eieio();
-	hpte[0] = v & ~HPTE_V_HVLOCK;
+	hpte[0] = cpu_to_be64(v & ~HPTE_V_HVLOCK);
 	asm volatile("ptesync" : : : "memory");
 	return H_SUCCESS;
 }
@@ -692,7 +709,8 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 		   unsigned long pte_index)
 {
 	struct kvm *kvm = vcpu->kvm;
-	unsigned long *hpte, v, r;
+	__be64 *hpte;
+	unsigned long v, r;
 	int i, n = 1;
 	struct revmap_entry *rev = NULL;
 
@@ -704,9 +722,9 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 	}
 	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
 	for (i = 0; i < n; ++i, ++pte_index) {
-		hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
-		v = hpte[0] & ~HPTE_V_HVLOCK;
-		r = hpte[1];
+		hpte = (__be64 *)(kvm->arch.hpt_virt + (pte_index << 4));
+		v = be64_to_cpu(hpte[0]) & ~HPTE_V_HVLOCK;
+		r = be64_to_cpu(hpte[1]);
 		if (v & HPTE_V_ABSENT) {
 			v &= ~HPTE_V_ABSENT;
 			v |= HPTE_V_VALID;
@@ -721,25 +739,27 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 	return H_SUCCESS;
 }
 
-void kvmppc_invalidate_hpte(struct kvm *kvm, unsigned long *hptep,
+void kvmppc_invalidate_hpte(struct kvm *kvm, __be64 *hptep,
 			unsigned long pte_index)
 {
 	unsigned long rb;
 
-	hptep[0] &= ~HPTE_V_VALID;
-	rb = compute_tlbie_rb(hptep[0], hptep[1], pte_index);
+	hptep[0] &= ~cpu_to_be64(HPTE_V_VALID);
+	rb = compute_tlbie_rb(be64_to_cpu(hptep[0]), be64_to_cpu(hptep[1]),
+			      pte_index);
 	do_tlbies(kvm, &rb, 1, 1, true);
 }
 EXPORT_SYMBOL_GPL(kvmppc_invalidate_hpte);
 
-void kvmppc_clear_ref_hpte(struct kvm *kvm, unsigned long *hptep,
+void kvmppc_clear_ref_hpte(struct kvm *kvm, __be64 *hptep,
 			   unsigned long pte_index)
 {
 	unsigned long rb;
 	unsigned char rbyte;
 
-	rb = compute_tlbie_rb(hptep[0], hptep[1], pte_index);
-	rbyte = (hptep[1] & ~HPTE_R_R) >> 8;
+	rb = compute_tlbie_rb(be64_to_cpu(hptep[0]), be64_to_cpu(hptep[1]),
+			      pte_index);
+	rbyte = (be64_to_cpu(hptep[1]) & ~HPTE_R_R) >> 8;
 	/* modify only the second-last byte, which contains the ref bit */
 	*((char *)hptep + 14) = rbyte;
 	do_tlbies(kvm, &rb, 1, 1, false);
@@ -765,7 +785,7 @@ long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v,
 	unsigned long somask;
 	unsigned long vsid, hash;
 	unsigned long avpn;
-	unsigned long *hpte;
+	__be64 *hpte;
 	unsigned long mask, val;
 	unsigned long v, r;
 
@@ -797,11 +817,11 @@ long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v,
 	val |= avpn;
 
 	for (;;) {
-		hpte = (unsigned long *)(kvm->arch.hpt_virt + (hash << 7));
+		hpte = (__be64 *)(kvm->arch.hpt_virt + (hash << 7));
 
 		for (i = 0; i < 16; i += 2) {
 			/* Read the PTE racily */
-			v = hpte[i] & ~HPTE_V_HVLOCK;
+			v = be64_to_cpu(hpte[i]) & ~HPTE_V_HVLOCK;
 
 			/* Check valid/absent, hash, segment size and AVPN */
 			if (!(v & valid) || (v & mask) != val)
@@ -810,8 +830,8 @@ long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v,
 			/* Lock the PTE and read it under the lock */
 			while (!try_lock_hpte(&hpte[i], HPTE_V_HVLOCK))
 				cpu_relax();
-			v = hpte[i] & ~HPTE_V_HVLOCK;
-			r = hpte[i+1];
+			v = be64_to_cpu(hpte[i]) & ~HPTE_V_HVLOCK;
+			r = be64_to_cpu(hpte[i+1]);
 
 			/*
 			 * Check the HPTE again, including base page size
@@ -822,7 +842,7 @@ long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v,
 				return (hash << 3) + (i >> 1);
 
 			/* Unlock and move on */
-			hpte[i] = v;
+			hpte[i] = cpu_to_be64(v);
 		}
 
 		if (val & HPTE_V_SECONDARY)
@@ -851,7 +871,7 @@ long kvmppc_hpte_hv_fault(struct kvm_vcpu *vcpu, unsigned long addr,
 	struct kvm *kvm = vcpu->kvm;
 	long int index;
 	unsigned long v, r, gr;
-	unsigned long *hpte;
+	__be64 *hpte;
 	unsigned long valid;
 	struct revmap_entry *rev;
 	unsigned long pp, key;
@@ -867,9 +887,9 @@ long kvmppc_hpte_hv_fault(struct kvm_vcpu *vcpu, unsigned long addr,
 			return status;	/* there really was no HPTE */
 		return 0;		/* for prot fault, HPTE disappeared */
 	}
-	hpte = (unsigned long *)(kvm->arch.hpt_virt + (index << 4));
-	v = hpte[0] & ~HPTE_V_HVLOCK;
-	r = hpte[1];
+	hpte = (__be64 *)(kvm->arch.hpt_virt + (index << 4));
+	v = be64_to_cpu(hpte[0]) & ~HPTE_V_HVLOCK;
+	r = be64_to_cpu(hpte[1]);
 	rev = real_vmalloc_addr(&kvm->arch.revmap[index]);
 	gr = rev->guest_rpte;
 

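All of the real-mode paths above share one locking convention: HPTE_V_HVLOCK is a software-defined bit in the first HPTE doubleword that doubles as a per-entry spinlock, taken with try_lock_hpte() and dropped by storing the dword back with the bit clear after a release barrier. A userspace approximation with C11 atomics (the flag value is illustrative, and the real code operates on the big-endian dword directly):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define HPTE_V_HVLOCK	0x40ULL		/* illustrative bit position */

    /* Fail if any bit in 'bits' is already set, otherwise set HVLOCK;
     * callers pass HVLOCK alone, or HVLOCK|VALID|ABSENT to claim an
     * empty slot, just like the kernel's try_lock_hpte(). */
    static bool try_lock_hpte(_Atomic uint64_t *hpte, uint64_t bits)
    {
            uint64_t old = atomic_load_explicit(hpte, memory_order_relaxed);

            if (old & bits)
                    return false;
            return atomic_compare_exchange_strong_explicit(hpte, &old,
                            old | HPTE_V_HVLOCK,
                            memory_order_acquire, memory_order_relaxed);
    }

    static void unlock_hpte(_Atomic uint64_t *hpte, uint64_t new_v)
    {
            /* the release store models PPC_RELEASE_BARRIER + plain store */
            atomic_store_explicit(hpte, new_v & ~HPTE_V_HVLOCK,
                                  memory_order_release);
    }
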
@@ -32,10 +32,6 @@
 
 #define VCPU_GPRS_TM(reg) (((reg) * ULONG_SIZE) + VCPU_GPR_TM)
 
-#ifdef __LITTLE_ENDIAN__
-#error Need to fix lppaca and SLB shadow accesses in little endian mode
-#endif
-
 /* Values in HSTATE_NAPPING(r13) */
 #define NAPPING_CEDE	1
 #define NAPPING_NOVCPU	2
@@ -595,9 +591,10 @@ kvmppc_got_guest:
 	ld	r3, VCPU_VPA(r4)
 	cmpdi	r3, 0
 	beq	25f
-	lwz	r5, LPPACA_YIELDCOUNT(r3)
+	li	r6, LPPACA_YIELDCOUNT
+	LWZX_BE	r5, r3, r6
 	addi	r5, r5, 1
-	stw	r5, LPPACA_YIELDCOUNT(r3)
+	STWX_BE	r5, r3, r6
 	li	r6, 1
 	stb	r6, VCPU_VPA_DIRTY(r4)
 25:
@@ -671,9 +668,9 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM)
 
 	mr	r31, r4
 	addi	r3, r31, VCPU_FPRS_TM
-	bl	.load_fp_state
+	bl	load_fp_state
 	addi	r3, r31, VCPU_VRS_TM
-	bl	.load_vr_state
+	bl	load_vr_state
 	mr	r4, r31
 	lwz	r7, VCPU_VRSAVE_TM(r4)
 	mtspr	SPRN_VRSAVE, r7
@@ -1417,9 +1414,9 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM)
 
 	/* Save FP/VSX. */
 	addi	r3, r9, VCPU_FPRS_TM
-	bl	.store_fp_state
+	bl	store_fp_state
 	addi	r3, r9, VCPU_VRS_TM
-	bl	.store_vr_state
+	bl	store_vr_state
 	mfspr	r6, SPRN_VRSAVE
 	stw	r6, VCPU_VRSAVE_TM(r9)
 1:
@@ -1442,9 +1439,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM)
 	ld	r8, VCPU_VPA(r9)	/* do they have a VPA? */
 	cmpdi	r8, 0
 	beq	25f
-	lwz	r3, LPPACA_YIELDCOUNT(r8)
+	li	r4, LPPACA_YIELDCOUNT
+	LWZX_BE	r3, r8, r4
 	addi	r3, r3, 1
-	stw	r3, LPPACA_YIELDCOUNT(r8)
+	STWX_BE	r3, r8, r4
 	li	r3, 1
 	stb	r3, VCPU_VPA_DIRTY(r9)
 25:
@@ -1757,8 +1755,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 33:	ld	r8,PACA_SLBSHADOWPTR(r13)
 
 	.rept	SLB_NUM_BOLTED
-	ld	r5,SLBSHADOW_SAVEAREA(r8)
-	ld	r6,SLBSHADOW_SAVEAREA+8(r8)
+	li	r3, SLBSHADOW_SAVEAREA
+	LDX_BE	r5, r8, r3
+	addi	r3, r3, 8
+	LDX_BE	r6, r8, r3
 	andis.	r7,r5,SLB_ESID_V@h
 	beq	1f
 	slbmte	r6,r5
@@ -1909,12 +1909,23 @@ hcall_try_real_mode:
 	clrrdi	r3,r3,2
 	cmpldi	r3,hcall_real_table_end - hcall_real_table
 	bge	guest_exit_cont
+	/* See if this hcall is enabled for in-kernel handling */
+	ld	r4, VCPU_KVM(r9)
+	srdi	r0, r3, 8	/* r0 = (r3 / 4) >> 6 */
+	sldi	r0, r0, 3	/* index into kvm->arch.enabled_hcalls[] */
+	add	r4, r4, r0
+	ld	r0, KVM_ENABLED_HCALLS(r4)
+	rlwinm	r4, r3, 32-2, 0x3f	/* r4 = (r3 / 4) & 0x3f */
+	srd	r0, r0, r4
+	andi.	r0, r0, 1
+	beq	guest_exit_cont
+	/* Get pointer to handler, if any, and call it */
 	LOAD_REG_ADDR(r4, hcall_real_table)
 	lwax	r3,r3,r4
 	cmpwi	r3,0
 	beq	guest_exit_cont
-	add	r3,r3,r4
-	mtctr	r3
+	add	r12,r3,r4
+	mtctr	r12
 	mr	r3,r9		/* get vcpu pointer */
 	ld	r4,VCPU_GPR(R4)(r9)
 	bctrl
@@ -2031,6 +2042,7 @@ hcall_real_table:
 	.long	0		/* 0x12c */
 	.long	0		/* 0x130 */
 	.long	DOTSYM(kvmppc_h_set_xdabr) - hcall_real_table
+	.globl	hcall_real_table_end
 hcall_real_table_end:
 
 ignore_hdec:
@@ -2338,7 +2350,18 @@ kvmppc_read_intr:
 	cmpdi	r6, 0
 	beq-	1f
 	lwzcix	r0, r6, r7
-	rlwinm.	r3, r0, 0, 0xffffff
+	/*
+	 * Save XIRR for later. Since we get in in reverse endian on LE
+	 * systems, save it byte reversed and fetch it back in host endian.
+	 */
+	li	r3, HSTATE_SAVED_XIRR
+	STWX_BE	r0, r3, r13
+#ifdef __LITTLE_ENDIAN__
+	lwz	r3, HSTATE_SAVED_XIRR(r13)
+#else
+	mr	r3, r0
+#endif
+	rlwinm.	r3, r3, 0, 0xffffff
 	sync
 	beq	1f			/* if nothing pending in the ICP */
 
@@ -2370,10 +2393,9 @@ kvmppc_read_intr:
 	li	r3, -1
 1:	blr
 
-42:	/* It's not an IPI and it's for the host, stash it in the PACA
-	 * before exit, it will be picked up by the host ICP driver
+42:	/* It's not an IPI and it's for the host. We saved a copy of XIRR in
+	 * the PACA earlier, it will be picked up by the host ICP driver
 	 */
-	stw	r0, HSTATE_SAVED_XIRR(r13)
 	li	r3, 1
 	b	1b
 
@@ -2408,11 +2430,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	mtmsrd	r8
 	isync
 	addi	r3,r3,VCPU_FPRS
-	bl	.store_fp_state
+	bl	store_fp_state
 #ifdef CONFIG_ALTIVEC
 BEGIN_FTR_SECTION
 	addi	r3,r31,VCPU_VRS
-	bl	.store_vr_state
+	bl	store_vr_state
 END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 #endif
 	mfspr	r6,SPRN_VRSAVE
@@ -2444,11 +2466,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	mtmsrd	r8
 	isync
 	addi	r3,r4,VCPU_FPRS
-	bl	.load_fp_state
+	bl	load_fp_state
 #ifdef CONFIG_ALTIVEC
 BEGIN_FTR_SECTION
 	addi	r3,r31,VCPU_VRS
-	bl	.load_vr_state
+	bl	load_vr_state
 END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 #endif
 	lwz	r7,VCPU_VRSAVE(r31)

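The LWZX_BE/STWX_BE/LDX_BE macros used in the assembly hunks above are the byte-reversed indexed load/store helpers this series adds: shared structures such as the lppaca yield count, the SLB shadow and the saved XIRR stay big-endian, so on a little-endian host the helpers presumably expand to the byte-reversing lwbrx/stwbrx/ldbrx forms while remaining plain indexed accesses on big-endian. In C terms the invariant they provide looks like this:

    #include <endian.h>
    #include <stdint.h>
    #include <string.h>

    /* Model of LWZX_BE: load a 32-bit big-endian field into host order.
     * On a BE host this degenerates to a plain load; on LE it
     * byte-reverses, which lwbrx does in a single instruction. */
    static inline uint32_t lwzx_be(const void *base, unsigned long off)
    {
            uint32_t raw;
            memcpy(&raw, (const char *)base + off, sizeof(raw));
            return be32toh(raw);
    }

    static inline void stwx_be(void *base, unsigned long off, uint32_t val)
    {
            uint32_t raw = htobe32(val);
            memcpy((char *)base + off, &raw, sizeof(raw));
    }

This is why the yield-count hunks can simply load, addi, and store back: the value is in host order while it lives in the register.
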
@@ -639,26 +639,36 @@ static int kvmppc_ps_one_in(struct kvm_vcpu *vcpu, bool rc,
 
 int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	u32 inst = kvmppc_get_last_inst(vcpu);
+	u32 inst;
 	enum emulation_result emulated = EMULATE_DONE;
+	int ax_rd, ax_ra, ax_rb, ax_rc;
+	short full_d;
+	u64 *fpr_d, *fpr_a, *fpr_b, *fpr_c;
 
-	int ax_rd = inst_get_field(inst, 6, 10);
-	int ax_ra = inst_get_field(inst, 11, 15);
-	int ax_rb = inst_get_field(inst, 16, 20);
-	int ax_rc = inst_get_field(inst, 21, 25);
-	short full_d = inst_get_field(inst, 16, 31);
-
-	u64 *fpr_d = &VCPU_FPR(vcpu, ax_rd);
-	u64 *fpr_a = &VCPU_FPR(vcpu, ax_ra);
-	u64 *fpr_b = &VCPU_FPR(vcpu, ax_rb);
-	u64 *fpr_c = &VCPU_FPR(vcpu, ax_rc);
-
-	bool rcomp = (inst & 1) ? true : false;
-	u32 cr = kvmppc_get_cr(vcpu);
+	bool rcomp;
+	u32 cr;
 #ifdef DEBUG
 	int i;
 #endif
 
+	emulated = kvmppc_get_last_inst(vcpu, INST_GENERIC, &inst);
+	if (emulated != EMULATE_DONE)
+		return emulated;
+
+	ax_rd = inst_get_field(inst, 6, 10);
+	ax_ra = inst_get_field(inst, 11, 15);
+	ax_rb = inst_get_field(inst, 16, 20);
+	ax_rc = inst_get_field(inst, 21, 25);
+	full_d = inst_get_field(inst, 16, 31);
+
+	fpr_d = &VCPU_FPR(vcpu, ax_rd);
+	fpr_a = &VCPU_FPR(vcpu, ax_ra);
+	fpr_b = &VCPU_FPR(vcpu, ax_rb);
+	fpr_c = &VCPU_FPR(vcpu, ax_rc);
+
+	rcomp = (inst & 1) ? true : false;
+	cr = kvmppc_get_cr(vcpu);
+
 	if (!kvmppc_inst_is_paired_single(vcpu, inst))
 		return EMULATE_FAIL;
 

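The reshuffle above is forced by the reworked fetch API: kvmppc_get_last_inst() now returns an emulation_result and hands the instruction back through a pointer, so nothing may be decoded from inst before the fetch is known to have succeeded. A compressed, self-contained sketch of the contract (the enum and field extraction are reduced to standard C; the real definitions live in the kvm headers):

    #include <stdint.h>

    enum emulation_result { EMULATE_DONE, EMULATE_AGAIN, EMULATE_FAIL };

    /* Stand-in for the reworked kvmppc_get_last_inst(): the fetch itself
     * can fail now, instead of silently handing back a stale word. */
    static enum emulation_result get_last_inst(uint32_t *inst)
    {
            *inst = 0x10000420;     /* pretend an instruction was read */
            return EMULATE_DONE;
    }

    static enum emulation_result emulate_one(void)
    {
            uint32_t inst;
            enum emulation_result r = get_last_inst(&inst);

            if (r != EMULATE_DONE)
                    return r;       /* fetch faulted: do not decode garbage */

            /* fields may be extracted only after a successful fetch */
            int ax_rd = (inst >> 21) & 0x1f;        /* IBM bits 6..10 */
            (void)ax_rd;
            return EMULATE_DONE;
    }
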
@@ -62,6 +62,35 @@ static void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac);
 #define HW_PAGE_SIZE PAGE_SIZE
 #endif
 
+static bool kvmppc_is_split_real(struct kvm_vcpu *vcpu)
+{
+	ulong msr = kvmppc_get_msr(vcpu);
+	return (msr & (MSR_IR|MSR_DR)) == MSR_DR;
+}
+
+static void kvmppc_fixup_split_real(struct kvm_vcpu *vcpu)
+{
+	ulong msr = kvmppc_get_msr(vcpu);
+	ulong pc = kvmppc_get_pc(vcpu);
+
+	/* We are in DR only split real mode */
+	if ((msr & (MSR_IR|MSR_DR)) != MSR_DR)
+		return;
+
+	/* We have not fixed up the guest already */
+	if (vcpu->arch.hflags & BOOK3S_HFLAG_SPLIT_HACK)
+		return;
+
+	/* The code is in fixupable address space */
+	if (pc & SPLIT_HACK_MASK)
+		return;
+
+	vcpu->arch.hflags |= BOOK3S_HFLAG_SPLIT_HACK;
+	kvmppc_set_pc(vcpu, pc | SPLIT_HACK_OFFS);
+}
+
+void kvmppc_unfixup_split_real(struct kvm_vcpu *vcpu);
+
 static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, int cpu)
 {
 #ifdef CONFIG_PPC_BOOK3S_64
@@ -71,10 +100,19 @@ static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, int cpu)
 	svcpu->in_use = 0;
 	svcpu_put(svcpu);
 #endif
+
+	/* Disable AIL if supported */
+	if (cpu_has_feature(CPU_FTR_HVMODE) &&
+	    cpu_has_feature(CPU_FTR_ARCH_207S))
+		mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~LPCR_AIL);
+
 	vcpu->cpu = smp_processor_id();
 #ifdef CONFIG_PPC_BOOK3S_32
 	current->thread.kvm_shadow_vcpu = vcpu->arch.shadow_vcpu;
 #endif
+
+	if (kvmppc_is_split_real(vcpu))
+		kvmppc_fixup_split_real(vcpu);
 }
 
 static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
@@ -89,8 +127,17 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
 	svcpu_put(svcpu);
 #endif
+
+	if (kvmppc_is_split_real(vcpu))
+		kvmppc_unfixup_split_real(vcpu);
+
 	kvmppc_giveup_ext(vcpu, MSR_FP | MSR_VEC | MSR_VSX);
 	kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);
 
+	/* Enable AIL if supported */
+	if (cpu_has_feature(CPU_FTR_HVMODE) &&
+	    cpu_has_feature(CPU_FTR_ARCH_207S))
+		mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) | LPCR_AIL_3);
+
 	vcpu->cpu = -1;
 }
 
| @ -120,6 +167,14 @@ void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu, | ||||
| #ifdef CONFIG_PPC_BOOK3S_64 | ||||
| 	svcpu->shadow_fscr = vcpu->arch.shadow_fscr; | ||||
| #endif | ||||
| 	/*
 | ||||
| 	 * Now also save the current time base value. We use this | ||||
| 	 * to find the guest purr and spurr value. | ||||
| 	 */ | ||||
| 	vcpu->arch.entry_tb = get_tb(); | ||||
| 	vcpu->arch.entry_vtb = get_vtb(); | ||||
| 	if (cpu_has_feature(CPU_FTR_ARCH_207S)) | ||||
| 		vcpu->arch.entry_ic = mfspr(SPRN_IC); | ||||
| 	svcpu->in_use = true; | ||||
| } | ||||
| 
 | ||||
| @ -166,6 +221,14 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu, | ||||
| #ifdef CONFIG_PPC_BOOK3S_64 | ||||
| 	vcpu->arch.shadow_fscr = svcpu->shadow_fscr; | ||||
| #endif | ||||
| 	/*
 | ||||
| 	 * Update purr and spurr using time base on exit. | ||||
| 	 */ | ||||
| 	vcpu->arch.purr += get_tb() - vcpu->arch.entry_tb; | ||||
| 	vcpu->arch.spurr += get_tb() - vcpu->arch.entry_tb; | ||||
| 	vcpu->arch.vtb += get_vtb() - vcpu->arch.entry_vtb; | ||||
| 	if (cpu_has_feature(CPU_FTR_ARCH_207S)) | ||||
| 		vcpu->arch.ic += mfspr(SPRN_IC) - vcpu->arch.entry_ic; | ||||
| 	svcpu->in_use = false; | ||||
| 
 | ||||
| out: | ||||
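To make the accounting concrete (numbers hypothetical): if the timebase read 1000 at guest entry and reads 1600 at this exit, 600 ticks are added to the guest's PURR and SPURR views; the VTB and IC deltas accumulate the same way, so the guest's utilization registers only advance for time actually spent inside the guest.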
| @@ -294,6 +357,11 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr) | ||||
| 		} | ||||
| 	} | ||||
| 
 | ||||
| 	if (kvmppc_is_split_real(vcpu)) | ||||
| 		kvmppc_fixup_split_real(vcpu); | ||||
| 	else | ||||
| 		kvmppc_unfixup_split_real(vcpu); | ||||
| 
 | ||||
| 	if ((kvmppc_get_msr(vcpu) & (MSR_PR|MSR_IR|MSR_DR)) != | ||||
| 		   (old_msr & (MSR_PR|MSR_IR|MSR_DR))) { | ||||
| 		kvmppc_mmu_flush_segments(vcpu); | ||||
| @@ -443,19 +511,19 @@ static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte) | ||||
| 	put_page(hpage); | ||||
| } | ||||
| 
 | ||||
| static int kvmppc_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn) | ||||
| static int kvmppc_visible_gpa(struct kvm_vcpu *vcpu, gpa_t gpa) | ||||
| { | ||||
| 	ulong mp_pa = vcpu->arch.magic_page_pa; | ||||
| 
 | ||||
| 	if (!(kvmppc_get_msr(vcpu) & MSR_SF)) | ||||
| 		mp_pa = (uint32_t)mp_pa; | ||||
| 
 | ||||
| 	if (unlikely(mp_pa) && | ||||
| 	    unlikely((mp_pa & KVM_PAM) >> PAGE_SHIFT == gfn)) { | ||||
| 	gpa &= ~0xFFFULL; | ||||
| 	if (unlikely(mp_pa) && unlikely((mp_pa & KVM_PAM) == (gpa & KVM_PAM))) { | ||||
| 		return 1; | ||||
| 	} | ||||
| 
 | ||||
| 	return kvm_is_visible_gfn(vcpu->kvm, gfn); | ||||
| 	return kvm_is_visible_gfn(vcpu->kvm, gpa >> PAGE_SHIFT); | ||||
| } | ||||
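A worked example of the new gpa-based check, with hypothetical values: for a 4k magic page at mp_pa = 0x0ffff000, an access to gpa = 0x0ffff234 is page-aligned by gpa &= ~0xFFFULL down to 0x0ffff000, which equals mp_pa under the KVM_PAM mask, so the magic page is reported visible without ever consulting the memslots.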
| 
 | ||||
| int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| @@ -494,6 +562,11 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 		pte.vpage |= ((u64)VSID_REAL << (SID_SHIFT - 12)); | ||||
| 		break; | ||||
| 	case MSR_DR: | ||||
| 		if (!data && | ||||
| 		    (vcpu->arch.hflags & BOOK3S_HFLAG_SPLIT_HACK) && | ||||
| 		    ((pte.raddr & SPLIT_HACK_MASK) == SPLIT_HACK_OFFS)) | ||||
| 			pte.raddr &= ~SPLIT_HACK_MASK; | ||||
| 		/* fall through */ | ||||
| 	case MSR_IR: | ||||
| 		vcpu->arch.mmu.esid_to_vsid(vcpu, eaddr >> SID_SHIFT, &vsid); | ||||
| 
 | ||||
| @@ -541,7 +614,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 		kvmppc_set_dar(vcpu, kvmppc_get_fault_dar(vcpu)); | ||||
| 		kvmppc_book3s_queue_irqprio(vcpu, vec + 0x80); | ||||
| 	} else if (!is_mmio && | ||||
| 		   kvmppc_visible_gfn(vcpu, pte.raddr >> PAGE_SHIFT)) { | ||||
| 		   kvmppc_visible_gpa(vcpu, pte.raddr)) { | ||||
| 		if (data && !(vcpu->arch.fault_dsisr & DSISR_NOHPTE)) { | ||||
| 			/*
 | ||||
| 			 * There is already a host HPTE there, presumably | ||||
| @@ -637,42 +710,6 @@ static void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac) | ||||
| #endif | ||||
| } | ||||
| 
 | ||||
| static int kvmppc_read_inst(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| 	ulong srr0 = kvmppc_get_pc(vcpu); | ||||
| 	u32 last_inst = kvmppc_get_last_inst(vcpu); | ||||
| 	int ret; | ||||
| 
 | ||||
| 	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false); | ||||
| 	if (ret == -ENOENT) { | ||||
| 		ulong msr = kvmppc_get_msr(vcpu); | ||||
| 
 | ||||
| 		msr = kvmppc_set_field(msr, 33, 33, 1); | ||||
| 		msr = kvmppc_set_field(msr, 34, 36, 0); | ||||
| 		msr = kvmppc_set_field(msr, 42, 47, 0); | ||||
| 		kvmppc_set_msr_fast(vcpu, msr); | ||||
| 		kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_INST_STORAGE); | ||||
| 		return EMULATE_AGAIN; | ||||
| 	} | ||||
| 
 | ||||
| 	return EMULATE_DONE; | ||||
| } | ||||
| 
 | ||||
| static int kvmppc_check_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr) | ||||
| { | ||||
| 
 | ||||
| 	/* Need to do paired single emulation? */ | ||||
| 	if (!(vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE)) | ||||
| 		return EMULATE_DONE; | ||||
| 
 | ||||
| 	/* Read out the instruction */ | ||||
| 	if (kvmppc_read_inst(vcpu) == EMULATE_DONE) | ||||
| 		/* Need to emulate */ | ||||
| 		return EMULATE_FAIL; | ||||
| 
 | ||||
| 	return EMULATE_AGAIN; | ||||
| } | ||||
| 
 | ||||
| /* Handle external providers (FPU, Altivec, VSX) */ | ||||
| static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr, | ||||
| 			     ulong msr) | ||||
| @@ -834,6 +871,15 @@ static int kvmppc_handle_fac(struct kvm_vcpu *vcpu, ulong fac) | ||||
| 
 | ||||
| 	return RESUME_GUEST; | ||||
| } | ||||
| 
 | ||||
| void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr) | ||||
| { | ||||
| 	if ((vcpu->arch.fscr & FSCR_TAR) && !(fscr & FSCR_TAR)) { | ||||
| 		/* TAR got dropped, drop it in shadow too */ | ||||
| 		kvmppc_giveup_fac(vcpu, FSCR_TAR_LG); | ||||
| 	} | ||||
| 	vcpu->arch.fscr = fscr; | ||||
| } | ||||
| #endif | ||||
| 
 | ||||
| int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| @@ -858,6 +904,9 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 		ulong shadow_srr1 = vcpu->arch.shadow_srr1; | ||||
| 		vcpu->stat.pf_instruc++; | ||||
| 
 | ||||
| 		if (kvmppc_is_split_real(vcpu)) | ||||
| 			kvmppc_fixup_split_real(vcpu); | ||||
| 
 | ||||
| #ifdef CONFIG_PPC_BOOK3S_32 | ||||
| 		/* We set segments as unused segments when invalidating them. So
 | ||||
| 		 * treat the respective fault as a segment fault. */ | ||||
| @@ -960,6 +1009,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	case BOOK3S_INTERRUPT_DECREMENTER: | ||||
| 	case BOOK3S_INTERRUPT_HV_DECREMENTER: | ||||
| 	case BOOK3S_INTERRUPT_DOORBELL: | ||||
| 	case BOOK3S_INTERRUPT_H_DOORBELL: | ||||
| 		vcpu->stat.dec_exits++; | ||||
| 		r = RESUME_GUEST; | ||||
| 		break; | ||||
| @@ -977,15 +1027,24 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	{ | ||||
| 		enum emulation_result er; | ||||
| 		ulong flags; | ||||
| 		u32 last_inst; | ||||
| 		int emul; | ||||
| 
 | ||||
| program_interrupt: | ||||
| 		flags = vcpu->arch.shadow_srr1 & 0x1f0000ull; | ||||
| 
 | ||||
| 		emul = kvmppc_get_last_inst(vcpu, INST_GENERIC, &last_inst); | ||||
| 		if (emul != EMULATE_DONE) { | ||||
| 			r = RESUME_GUEST; | ||||
| 			break; | ||||
| 		} | ||||
| 
 | ||||
| 		if (kvmppc_get_msr(vcpu) & MSR_PR) { | ||||
| #ifdef EXIT_DEBUG | ||||
| 			printk(KERN_INFO "Userspace triggered 0x700 exception at 0x%lx (0x%x)\n", kvmppc_get_pc(vcpu), kvmppc_get_last_inst(vcpu)); | ||||
| 			pr_info("Userspace triggered 0x700 exception at 0x%lx (0x%x)\n", | ||||
| 				kvmppc_get_pc(vcpu), last_inst); | ||||
| #endif | ||||
| 			if ((kvmppc_get_last_inst(vcpu) & 0xff0007ff) != | ||||
| 			if ((last_inst & 0xff0007ff) != | ||||
| 			    (INS_DCBZ & 0xfffffff7)) { | ||||
| 				kvmppc_core_queue_program(vcpu, flags); | ||||
| 				r = RESUME_GUEST; | ||||
| @@ -1004,7 +1063,7 @@ program_interrupt: | ||||
| 			break; | ||||
| 		case EMULATE_FAIL: | ||||
| 			printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n", | ||||
| 			       __func__, kvmppc_get_pc(vcpu), kvmppc_get_last_inst(vcpu)); | ||||
| 			       __func__, kvmppc_get_pc(vcpu), last_inst); | ||||
| 			kvmppc_core_queue_program(vcpu, flags); | ||||
| 			r = RESUME_GUEST; | ||||
| 			break; | ||||
| @@ -1021,8 +1080,23 @@ program_interrupt: | ||||
| 		break; | ||||
| 	} | ||||
| 	case BOOK3S_INTERRUPT_SYSCALL: | ||||
| 	{ | ||||
| 		u32 last_sc; | ||||
| 		int emul; | ||||
| 
 | ||||
| 		/* Get last sc for papr */ | ||||
| 		if (vcpu->arch.papr_enabled) { | ||||
| 			/* The sc instruction points SRR0 to the next inst */ | ||||
| 			emul = kvmppc_get_last_inst(vcpu, INST_SC, &last_sc); | ||||
| 			if (emul != EMULATE_DONE) { | ||||
| 				kvmppc_set_pc(vcpu, kvmppc_get_pc(vcpu) - 4); | ||||
| 				r = RESUME_GUEST; | ||||
| 				break; | ||||
| 			} | ||||
| 		} | ||||
| 
 | ||||
| 		if (vcpu->arch.papr_enabled && | ||||
| 		    (kvmppc_get_last_sc(vcpu) == 0x44000022) && | ||||
| 		    (last_sc == 0x44000022) && | ||||
| 		    !(kvmppc_get_msr(vcpu) & MSR_PR)) { | ||||
| 			/* SC 1 papr hypercalls */ | ||||
| 			ulong cmd = kvmppc_get_gpr(vcpu, 3); | ||||
| @@ -1067,36 +1141,51 @@ program_interrupt: | ||||
| 			r = RESUME_GUEST; | ||||
| 		} | ||||
| 		break; | ||||
| 	} | ||||
| 	case BOOK3S_INTERRUPT_FP_UNAVAIL: | ||||
| 	case BOOK3S_INTERRUPT_ALTIVEC: | ||||
| 	case BOOK3S_INTERRUPT_VSX: | ||||
| 	{ | ||||
| 		int ext_msr = 0; | ||||
| 		int emul; | ||||
| 		u32 last_inst; | ||||
| 
 | ||||
| 		if (vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE) { | ||||
| 			/* Do paired single instruction emulation */ | ||||
| 			emul = kvmppc_get_last_inst(vcpu, INST_GENERIC, | ||||
| 						    &last_inst); | ||||
| 			if (emul == EMULATE_DONE) | ||||
| 				goto program_interrupt; | ||||
| 			else | ||||
| 				r = RESUME_GUEST; | ||||
| 
 | ||||
| 			break; | ||||
| 		} | ||||
| 
 | ||||
| 		/* Enable external provider */ | ||||
| 		switch (exit_nr) { | ||||
| 		case BOOK3S_INTERRUPT_FP_UNAVAIL: ext_msr = MSR_FP;  break; | ||||
| 		case BOOK3S_INTERRUPT_ALTIVEC:    ext_msr = MSR_VEC; break; | ||||
| 		case BOOK3S_INTERRUPT_VSX:        ext_msr = MSR_VSX; break; | ||||
| 		case BOOK3S_INTERRUPT_FP_UNAVAIL: | ||||
| 			ext_msr = MSR_FP; | ||||
| 			break; | ||||
| 
 | ||||
| 		case BOOK3S_INTERRUPT_ALTIVEC: | ||||
| 			ext_msr = MSR_VEC; | ||||
| 			break; | ||||
| 
 | ||||
| 		case BOOK3S_INTERRUPT_VSX: | ||||
| 			ext_msr = MSR_VSX; | ||||
| 			break; | ||||
| 		} | ||||
| 
 | ||||
| 		switch (kvmppc_check_ext(vcpu, exit_nr)) { | ||||
| 		case EMULATE_DONE: | ||||
| 			/* everything ok - let's enable the ext */ | ||||
| 			r = kvmppc_handle_ext(vcpu, exit_nr, ext_msr); | ||||
| 			break; | ||||
| 		case EMULATE_FAIL: | ||||
| 			/* we need to emulate this instruction */ | ||||
| 			goto program_interrupt; | ||||
| 			break; | ||||
| 		default: | ||||
| 			/* nothing to worry about - go again */ | ||||
| 			break; | ||||
| 		} | ||||
| 		r = kvmppc_handle_ext(vcpu, exit_nr, ext_msr); | ||||
| 		break; | ||||
| 	} | ||||
| 	case BOOK3S_INTERRUPT_ALIGNMENT: | ||||
| 		if (kvmppc_read_inst(vcpu) == EMULATE_DONE) { | ||||
| 			u32 last_inst = kvmppc_get_last_inst(vcpu); | ||||
| 	{ | ||||
| 		u32 last_inst; | ||||
| 		int emul = kvmppc_get_last_inst(vcpu, INST_GENERIC, &last_inst); | ||||
| 
 | ||||
| 		if (emul == EMULATE_DONE) { | ||||
| 			u32 dsisr; | ||||
| 			u64 dar; | ||||
| 
 | ||||
| @@ -1110,6 +1199,7 @@ program_interrupt: | ||||
| 		} | ||||
| 		r = RESUME_GUEST; | ||||
| 		break; | ||||
| 	} | ||||
| #ifdef CONFIG_PPC_BOOK3S_64 | ||||
| 	case BOOK3S_INTERRUPT_FAC_UNAVAIL: | ||||
| 		kvmppc_handle_fac(vcpu, vcpu->arch.shadow_fscr >> 56); | ||||
| @@ -1233,6 +1323,7 @@ static int kvmppc_get_one_reg_pr(struct kvm_vcpu *vcpu, u64 id, | ||||
| 		*val = get_reg_val(id, to_book3s(vcpu)->hior); | ||||
| 		break; | ||||
| 	case KVM_REG_PPC_LPCR: | ||||
| 	case KVM_REG_PPC_LPCR_64: | ||||
| 		/*
 | ||||
| 		 * We are only interested in the LPCR_ILE bit | ||||
| 		 */ | ||||
| @@ -1268,6 +1359,7 @@ static int kvmppc_set_one_reg_pr(struct kvm_vcpu *vcpu, u64 id, | ||||
| 		to_book3s(vcpu)->hior_explicit = true; | ||||
| 		break; | ||||
| 	case KVM_REG_PPC_LPCR: | ||||
| 	case KVM_REG_PPC_LPCR_64: | ||||
| 		kvmppc_set_lpcr_pr(vcpu, set_reg_val(id, *val)); | ||||
| 		break; | ||||
| 	default: | ||||
| @@ -1310,8 +1402,7 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_pr(struct kvm *kvm, | ||||
| 	p = __get_free_page(GFP_KERNEL|__GFP_ZERO); | ||||
| 	if (!p) | ||||
| 		goto uninit_vcpu; | ||||
| 	/* the real shared page fills the last 4k of our page */ | ||||
| 	vcpu->arch.shared = (void *)(p + PAGE_SIZE - 4096); | ||||
| 	vcpu->arch.shared = (void *)p; | ||||
| #ifdef CONFIG_PPC_BOOK3S_64 | ||||
| 	/* Always start the shared struct in native endian mode */ | ||||
| #ifdef __BIG_ENDIAN__ | ||||
| @@ -1568,6 +1659,11 @@ static int kvmppc_core_init_vm_pr(struct kvm *kvm) | ||||
| { | ||||
| 	mutex_init(&kvm->arch.hpt_mutex); | ||||
| 
 | ||||
| #ifdef CONFIG_PPC_BOOK3S_64 | ||||
| 	/* Start out with the default set of hcalls enabled */ | ||||
| 	kvmppc_pr_init_default_hcalls(kvm); | ||||
| #endif | ||||
| 
 | ||||
| 	if (firmware_has_feature(FW_FEATURE_SET_MODE)) { | ||||
| 		spin_lock(&kvm_global_user_count_lock); | ||||
| 		if (++kvm_global_user_count == 1) | ||||
| @@ -1636,6 +1732,9 @@ static struct kvmppc_ops kvm_ops_pr = { | ||||
| 	.emulate_mfspr = kvmppc_core_emulate_mfspr_pr, | ||||
| 	.fast_vcpu_kick = kvm_vcpu_kick, | ||||
| 	.arch_vm_ioctl  = kvm_arch_vm_ioctl_pr, | ||||
| #ifdef CONFIG_PPC_BOOK3S_64 | ||||
| 	.hcall_implemented = kvmppc_hcall_impl_pr, | ||||
| #endif | ||||
| }; | ||||
| 
 | ||||
| 
 | ||||
|  | ||||
| @@ -40,8 +40,9 @@ static int kvmppc_h_pr_enter(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| 	long flags = kvmppc_get_gpr(vcpu, 4); | ||||
| 	long pte_index = kvmppc_get_gpr(vcpu, 5); | ||||
| 	unsigned long pteg[2 * 8]; | ||||
| 	unsigned long pteg_addr, i, *hpte; | ||||
| 	__be64 pteg[2 * 8]; | ||||
| 	__be64 *hpte; | ||||
| 	unsigned long pteg_addr, i; | ||||
| 	long int ret; | ||||
| 
 | ||||
| 	i = pte_index & 7; | ||||
| @@ -93,8 +94,8 @@ static int kvmppc_h_pr_remove(struct kvm_vcpu *vcpu) | ||||
| 	pteg = get_pteg_addr(vcpu, pte_index); | ||||
| 	mutex_lock(&vcpu->kvm->arch.hpt_mutex); | ||||
| 	copy_from_user(pte, (void __user *)pteg, sizeof(pte)); | ||||
| 	pte[0] = be64_to_cpu(pte[0]); | ||||
| 	pte[1] = be64_to_cpu(pte[1]); | ||||
| 	pte[0] = be64_to_cpu((__force __be64)pte[0]); | ||||
| 	pte[1] = be64_to_cpu((__force __be64)pte[1]); | ||||
| 
 | ||||
| 	ret = H_NOT_FOUND; | ||||
| 	if ((pte[0] & HPTE_V_VALID) == 0 || | ||||
| @@ -171,8 +172,8 @@ static int kvmppc_h_pr_bulk_remove(struct kvm_vcpu *vcpu) | ||||
| 
 | ||||
| 		pteg = get_pteg_addr(vcpu, tsh & H_BULK_REMOVE_PTEX); | ||||
| 		copy_from_user(pte, (void __user *)pteg, sizeof(pte)); | ||||
| 		pte[0] = be64_to_cpu(pte[0]); | ||||
| 		pte[1] = be64_to_cpu(pte[1]); | ||||
| 		pte[0] = be64_to_cpu((__force __be64)pte[0]); | ||||
| 		pte[1] = be64_to_cpu((__force __be64)pte[1]); | ||||
| 
 | ||||
| 		/* tsl = AVPN */ | ||||
| 		flags = (tsh & H_BULK_REMOVE_FLAGS) >> 26; | ||||
| @@ -211,8 +212,8 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu) | ||||
| 	pteg = get_pteg_addr(vcpu, pte_index); | ||||
| 	mutex_lock(&vcpu->kvm->arch.hpt_mutex); | ||||
| 	copy_from_user(pte, (void __user *)pteg, sizeof(pte)); | ||||
| 	pte[0] = be64_to_cpu(pte[0]); | ||||
| 	pte[1] = be64_to_cpu(pte[1]); | ||||
| 	pte[0] = be64_to_cpu((__force __be64)pte[0]); | ||||
| 	pte[1] = be64_to_cpu((__force __be64)pte[1]); | ||||
| 
 | ||||
| 	ret = H_NOT_FOUND; | ||||
| 	if ((pte[0] & HPTE_V_VALID) == 0 || | ||||
| @@ -231,8 +232,8 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu) | ||||
| 
 | ||||
| 	rb = compute_tlbie_rb(v, r, pte_index); | ||||
| 	vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false); | ||||
| 	pte[0] = cpu_to_be64(pte[0]); | ||||
| 	pte[1] = cpu_to_be64(pte[1]); | ||||
| 	pte[0] = (__force u64)cpu_to_be64(pte[0]); | ||||
| 	pte[1] = (__force u64)cpu_to_be64(pte[1]); | ||||
| 	copy_to_user((void __user *)pteg, pte, sizeof(pte)); | ||||
| 	ret = H_SUCCESS; | ||||
| 
 | ||||
| @@ -266,6 +267,12 @@ static int kvmppc_h_pr_xics_hcall(struct kvm_vcpu *vcpu, u32 cmd) | ||||
| 
 | ||||
| int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd) | ||||
| { | ||||
| 	int rc, idx; | ||||
| 
 | ||||
| 	if (cmd <= MAX_HCALL_OPCODE && | ||||
| 	    !test_bit(cmd/4, vcpu->kvm->arch.enabled_hcalls)) | ||||
| 		return EMULATE_FAIL; | ||||
| 
 | ||||
| 	switch (cmd) { | ||||
| 	case H_ENTER: | ||||
| 		return kvmppc_h_pr_enter(vcpu); | ||||
| @@ -294,8 +301,11 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd) | ||||
| 		break; | ||||
| 	case H_RTAS: | ||||
| 		if (list_empty(&vcpu->kvm->arch.rtas_tokens)) | ||||
| 			return RESUME_HOST; | ||||
| 		if (kvmppc_rtas_hcall(vcpu)) | ||||
| 			break; | ||||
| 		idx = srcu_read_lock(&vcpu->kvm->srcu); | ||||
| 		rc = kvmppc_rtas_hcall(vcpu); | ||||
| 		srcu_read_unlock(&vcpu->kvm->srcu, idx); | ||||
| 		if (rc) | ||||
| 			break; | ||||
| 		kvmppc_set_gpr(vcpu, 3, 0); | ||||
| 		return EMULATE_DONE; | ||||
| @@ -303,3 +313,61 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd) | ||||
| 
 | ||||
| 	return EMULATE_FAIL; | ||||
| } | ||||
| 
 | ||||
| int kvmppc_hcall_impl_pr(unsigned long cmd) | ||||
| { | ||||
| 	switch (cmd) { | ||||
| 	case H_ENTER: | ||||
| 	case H_REMOVE: | ||||
| 	case H_PROTECT: | ||||
| 	case H_BULK_REMOVE: | ||||
| 	case H_PUT_TCE: | ||||
| 	case H_CEDE: | ||||
| #ifdef CONFIG_KVM_XICS | ||||
| 	case H_XIRR: | ||||
| 	case H_CPPR: | ||||
| 	case H_EOI: | ||||
| 	case H_IPI: | ||||
| 	case H_IPOLL: | ||||
| 	case H_XIRR_X: | ||||
| #endif | ||||
| 		return 1; | ||||
| 	} | ||||
| 	return 0; | ||||
| } | ||||
| 
 | ||||
| /*
 | ||||
|  * List of hcall numbers to enable by default. | ||||
|  * For compatibility with old userspace, we enable by default | ||||
|  * all hcalls that were implemented before the hcall-enabling | ||||
|  * facility was added.  Note this list should not include H_RTAS. | ||||
|  */ | ||||
| static unsigned int default_hcall_list[] = { | ||||
| 	H_ENTER, | ||||
| 	H_REMOVE, | ||||
| 	H_PROTECT, | ||||
| 	H_BULK_REMOVE, | ||||
| 	H_PUT_TCE, | ||||
| 	H_CEDE, | ||||
| #ifdef CONFIG_KVM_XICS | ||||
| 	H_XIRR, | ||||
| 	H_CPPR, | ||||
| 	H_EOI, | ||||
| 	H_IPI, | ||||
| 	H_IPOLL, | ||||
| 	H_XIRR_X, | ||||
| #endif | ||||
| 	0 | ||||
| }; | ||||
| 
 | ||||
| void kvmppc_pr_init_default_hcalls(struct kvm *kvm) | ||||
| { | ||||
| 	int i; | ||||
| 	unsigned int hcall; | ||||
| 
 | ||||
| 	for (i = 0; default_hcall_list[i]; ++i) { | ||||
| 		hcall = default_hcall_list[i]; | ||||
| 		WARN_ON(!kvmppc_hcall_impl_pr(hcall)); | ||||
| 		__set_bit(hcall / 4, kvm->arch.enabled_hcalls); | ||||
| 	} | ||||
| } | ||||
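With the bitmap in place, userspace can toggle individual hcalls per VM. A hedged sketch of the corresponding call, assuming the vm-level KVM_ENABLE_CAP / KVM_CAP_PPC_ENABLE_HCALL plumbing introduced earlier in this series (args[0] is the hcall number, args[1] is 1 to enable or 0 to disable; vm_fd is a hypothetical VM file descriptor):

	/* userspace side, needs <linux/kvm.h> and <sys/ioctl.h> */
	struct kvm_enable_cap cap = {
		.cap  = KVM_CAP_PPC_ENABLE_HCALL,
		.args = { H_CEDE, 0 },	/* e.g. turn off in-kernel H_CEDE */
	};

	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		perror("KVM_ENABLE_CAP");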
|  | ||||
| @@ -51,7 +51,6 @@ unsigned long kvmppc_booke_handlers; | ||||
| 
 | ||||
| struct kvm_stats_debugfs_item debugfs_entries[] = { | ||||
| 	{ "mmio",       VCPU_STAT(mmio_exits) }, | ||||
| 	{ "dcr",        VCPU_STAT(dcr_exits) }, | ||||
| 	{ "sig",        VCPU_STAT(signal_exits) }, | ||||
| 	{ "itlb_r",     VCPU_STAT(itlb_real_miss_exits) }, | ||||
| 	{ "itlb_v",     VCPU_STAT(itlb_virt_miss_exits) }, | ||||
| @@ -185,24 +184,28 @@ static void kvmppc_booke_queue_irqprio(struct kvm_vcpu *vcpu, | ||||
| 	set_bit(priority, &vcpu->arch.pending_exceptions); | ||||
| } | ||||
| 
 | ||||
| static void kvmppc_core_queue_dtlb_miss(struct kvm_vcpu *vcpu, | ||||
|                                         ulong dear_flags, ulong esr_flags) | ||||
| void kvmppc_core_queue_dtlb_miss(struct kvm_vcpu *vcpu, | ||||
| 				 ulong dear_flags, ulong esr_flags) | ||||
| { | ||||
| 	vcpu->arch.queued_dear = dear_flags; | ||||
| 	vcpu->arch.queued_esr = esr_flags; | ||||
| 	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DTLB_MISS); | ||||
| } | ||||
| 
 | ||||
| static void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu, | ||||
|                                            ulong dear_flags, ulong esr_flags) | ||||
| void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu, | ||||
| 				    ulong dear_flags, ulong esr_flags) | ||||
| { | ||||
| 	vcpu->arch.queued_dear = dear_flags; | ||||
| 	vcpu->arch.queued_esr = esr_flags; | ||||
| 	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DATA_STORAGE); | ||||
| } | ||||
| 
 | ||||
| static void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu, | ||||
|                                            ulong esr_flags) | ||||
| void kvmppc_core_queue_itlb_miss(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| 	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_ITLB_MISS); | ||||
| } | ||||
| 
 | ||||
| void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu, ulong esr_flags) | ||||
| { | ||||
| 	vcpu->arch.queued_esr = esr_flags; | ||||
| 	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_INST_STORAGE); | ||||
| @@ -266,13 +269,8 @@ static void kvmppc_core_dequeue_watchdog(struct kvm_vcpu *vcpu) | ||||
| 
 | ||||
| static void set_guest_srr(struct kvm_vcpu *vcpu, unsigned long srr0, u32 srr1) | ||||
| { | ||||
| #ifdef CONFIG_KVM_BOOKE_HV | ||||
| 	mtspr(SPRN_GSRR0, srr0); | ||||
| 	mtspr(SPRN_GSRR1, srr1); | ||||
| #else | ||||
| 	vcpu->arch.shared->srr0 = srr0; | ||||
| 	vcpu->arch.shared->srr1 = srr1; | ||||
| #endif | ||||
| 	kvmppc_set_srr0(vcpu, srr0); | ||||
| 	kvmppc_set_srr1(vcpu, srr1); | ||||
| } | ||||
| 
 | ||||
| static void set_guest_csrr(struct kvm_vcpu *vcpu, unsigned long srr0, u32 srr1) | ||||
| @@ -297,51 +295,6 @@ static void set_guest_mcsrr(struct kvm_vcpu *vcpu, unsigned long srr0, u32 srr1) | ||||
| 	vcpu->arch.mcsrr1 = srr1; | ||||
| } | ||||
| 
 | ||||
| static unsigned long get_guest_dear(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| #ifdef CONFIG_KVM_BOOKE_HV | ||||
| 	return mfspr(SPRN_GDEAR); | ||||
| #else | ||||
| 	return vcpu->arch.shared->dar; | ||||
| #endif | ||||
| } | ||||
| 
 | ||||
| static void set_guest_dear(struct kvm_vcpu *vcpu, unsigned long dear) | ||||
| { | ||||
| #ifdef CONFIG_KVM_BOOKE_HV | ||||
| 	mtspr(SPRN_GDEAR, dear); | ||||
| #else | ||||
| 	vcpu->arch.shared->dar = dear; | ||||
| #endif | ||||
| } | ||||
| 
 | ||||
| static unsigned long get_guest_esr(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| #ifdef CONFIG_KVM_BOOKE_HV | ||||
| 	return mfspr(SPRN_GESR); | ||||
| #else | ||||
| 	return vcpu->arch.shared->esr; | ||||
| #endif | ||||
| } | ||||
| 
 | ||||
| static void set_guest_esr(struct kvm_vcpu *vcpu, u32 esr) | ||||
| { | ||||
| #ifdef CONFIG_KVM_BOOKE_HV | ||||
| 	mtspr(SPRN_GESR, esr); | ||||
| #else | ||||
| 	vcpu->arch.shared->esr = esr; | ||||
| #endif | ||||
| } | ||||
| 
 | ||||
| static unsigned long get_guest_epr(struct kvm_vcpu *vcpu) | ||||
| { | ||||
| #ifdef CONFIG_KVM_BOOKE_HV | ||||
| 	return mfspr(SPRN_GEPR); | ||||
| #else | ||||
| 	return vcpu->arch.epr; | ||||
| #endif | ||||
| } | ||||
| 
 | ||||
| /* Deliver the interrupt of the corresponding priority, if possible. */ | ||||
| static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu, | ||||
|                                         unsigned int priority) | ||||
| @@ -450,9 +403,9 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu, | ||||
| 
 | ||||
| 		vcpu->arch.pc = vcpu->arch.ivpr | vcpu->arch.ivor[priority]; | ||||
| 		if (update_esr == true) | ||||
| 			set_guest_esr(vcpu, vcpu->arch.queued_esr); | ||||
| 			kvmppc_set_esr(vcpu, vcpu->arch.queued_esr); | ||||
| 		if (update_dear == true) | ||||
| 			set_guest_dear(vcpu, vcpu->arch.queued_dear); | ||||
| 			kvmppc_set_dar(vcpu, vcpu->arch.queued_dear); | ||||
| 		if (update_epr == true) { | ||||
| 			if (vcpu->arch.epr_flags & KVMPPC_EPR_USER) | ||||
| 				kvm_make_request(KVM_REQ_EPR_EXIT, vcpu); | ||||
| @@ -752,9 +705,8 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu) | ||||
| 		 * they were actually modified by emulation. */ | ||||
| 		return RESUME_GUEST_NV; | ||||
| 
 | ||||
| 	case EMULATE_DO_DCR: | ||||
| 		run->exit_reason = KVM_EXIT_DCR; | ||||
| 		return RESUME_HOST; | ||||
| 	case EMULATE_AGAIN: | ||||
| 		return RESUME_GUEST; | ||||
| 
 | ||||
| 	case EMULATE_FAIL: | ||||
| 		printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n", | ||||
| @@ -866,6 +818,28 @@ static void kvmppc_restart_interrupt(struct kvm_vcpu *vcpu, | ||||
| 	} | ||||
| } | ||||
| 
 | ||||
| static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 				  enum emulation_result emulated, u32 last_inst) | ||||
| { | ||||
| 	switch (emulated) { | ||||
| 	case EMULATE_AGAIN: | ||||
| 		return RESUME_GUEST; | ||||
| 
 | ||||
| 	case EMULATE_FAIL: | ||||
| 		pr_debug("%s: load instruction from guest address %lx failed\n", | ||||
| 		       __func__, vcpu->arch.pc); | ||||
| 		/* For debugging, encode the failing instruction and
 | ||||
| 		 * report it to userspace. */ | ||||
| 		run->hw.hardware_exit_reason = ~0ULL << 32; | ||||
| 		run->hw.hardware_exit_reason |= last_inst; | ||||
| 		kvmppc_core_queue_program(vcpu, ESR_PIL); | ||||
| 		return RESUME_HOST; | ||||
| 
 | ||||
| 	default: | ||||
| 		BUG(); | ||||
| 	} | ||||
| } | ||||
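After KVM_RUN returns, userspace can recover the failing instruction from the encoded value. A hypothetical check, where run is the mmap'ed struct kvm_run (field names from the generic kvm_run ABI):

	if (run->exit_reason == KVM_EXIT_UNKNOWN) {
		uint64_t reason = run->hw.hardware_exit_reason;

		/* upper half set to ~0 marks an instruction load failure */
		if ((reason >> 32) == 0xffffffffULL)
			fprintf(stderr, "inst load failed, inst=0x%08x\n",
				(uint32_t)reason);
	}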
| 
 | ||||
| /**
 | ||||
|  * kvmppc_handle_exit | ||||
|  * | ||||
| @@ -877,6 +851,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	int r = RESUME_HOST; | ||||
| 	int s; | ||||
| 	int idx; | ||||
| 	u32 last_inst = KVM_INST_FETCH_FAILED; | ||||
| 	enum emulation_result emulated = EMULATE_DONE; | ||||
| 
 | ||||
| 	/* update before a new last_exit_type is rewritten */ | ||||
| 	kvmppc_update_timing_stats(vcpu); | ||||
| @@ -884,6 +860,20 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	/* restart interrupts if they were meant for the host */ | ||||
| 	kvmppc_restart_interrupt(vcpu, exit_nr); | ||||
| 
 | ||||
| 	/*
 | ||||
| 	 * get the last instruction before being preempted | ||||
| 	 * TODO: for e6500 check also BOOKE_INTERRUPT_LRAT_ERROR & ESR_DATA | ||||
| 	 */ | ||||
| 	switch (exit_nr) { | ||||
| 	case BOOKE_INTERRUPT_DATA_STORAGE: | ||||
| 	case BOOKE_INTERRUPT_DTLB_MISS: | ||||
| 	case BOOKE_INTERRUPT_HV_PRIV: | ||||
| 		emulated = kvmppc_get_last_inst(vcpu, false, &last_inst); | ||||
| 		break; | ||||
| 	default: | ||||
| 		break; | ||||
| 	} | ||||
| 
 | ||||
| 	local_irq_enable(); | ||||
| 
 | ||||
| 	trace_kvm_exit(exit_nr, vcpu); | ||||
| @@ -892,6 +882,11 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 	run->exit_reason = KVM_EXIT_UNKNOWN; | ||||
| 	run->ready_for_interrupt_injection = 1; | ||||
| 
 | ||||
| 	if (emulated != EMULATE_DONE) { | ||||
| 		r = kvmppc_resume_inst_load(run, vcpu, emulated, last_inst); | ||||
| 		goto out; | ||||
| 	} | ||||
| 
 | ||||
| 	switch (exit_nr) { | ||||
| 	case BOOKE_INTERRUPT_MACHINE_CHECK: | ||||
| 		printk("MACHINE CHECK: %lx\n", mfspr(SPRN_MCSR)); | ||||
| @@ -1181,6 +1176,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 		BUG(); | ||||
| 	} | ||||
| 
 | ||||
| out: | ||||
| 	/*
 | ||||
| 	 * To avoid clobbering exit_reason, only check for signals if we | ||||
| 	 * aren't already exiting to userspace for some other reason. | ||||
| @@ -1265,17 +1261,17 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) | ||||
| 	regs->lr = vcpu->arch.lr; | ||||
| 	regs->xer = kvmppc_get_xer(vcpu); | ||||
| 	regs->msr = vcpu->arch.shared->msr; | ||||
| 	regs->srr0 = vcpu->arch.shared->srr0; | ||||
| 	regs->srr1 = vcpu->arch.shared->srr1; | ||||
| 	regs->srr0 = kvmppc_get_srr0(vcpu); | ||||
| 	regs->srr1 = kvmppc_get_srr1(vcpu); | ||||
| 	regs->pid = vcpu->arch.pid; | ||||
| 	regs->sprg0 = vcpu->arch.shared->sprg0; | ||||
| 	regs->sprg1 = vcpu->arch.shared->sprg1; | ||||
| 	regs->sprg2 = vcpu->arch.shared->sprg2; | ||||
| 	regs->sprg3 = vcpu->arch.shared->sprg3; | ||||
| 	regs->sprg4 = vcpu->arch.shared->sprg4; | ||||
| 	regs->sprg5 = vcpu->arch.shared->sprg5; | ||||
| 	regs->sprg6 = vcpu->arch.shared->sprg6; | ||||
| 	regs->sprg7 = vcpu->arch.shared->sprg7; | ||||
| 	regs->sprg0 = kvmppc_get_sprg0(vcpu); | ||||
| 	regs->sprg1 = kvmppc_get_sprg1(vcpu); | ||||
| 	regs->sprg2 = kvmppc_get_sprg2(vcpu); | ||||
| 	regs->sprg3 = kvmppc_get_sprg3(vcpu); | ||||
| 	regs->sprg4 = kvmppc_get_sprg4(vcpu); | ||||
| 	regs->sprg5 = kvmppc_get_sprg5(vcpu); | ||||
| 	regs->sprg6 = kvmppc_get_sprg6(vcpu); | ||||
| 	regs->sprg7 = kvmppc_get_sprg7(vcpu); | ||||
| 
 | ||||
| 	for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) | ||||
| 		regs->gpr[i] = kvmppc_get_gpr(vcpu, i); | ||||
| @@ -1293,17 +1289,17 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) | ||||
| 	vcpu->arch.lr = regs->lr; | ||||
| 	kvmppc_set_xer(vcpu, regs->xer); | ||||
| 	kvmppc_set_msr(vcpu, regs->msr); | ||||
| 	vcpu->arch.shared->srr0 = regs->srr0; | ||||
| 	vcpu->arch.shared->srr1 = regs->srr1; | ||||
| 	kvmppc_set_srr0(vcpu, regs->srr0); | ||||
| 	kvmppc_set_srr1(vcpu, regs->srr1); | ||||
| 	kvmppc_set_pid(vcpu, regs->pid); | ||||
| 	vcpu->arch.shared->sprg0 = regs->sprg0; | ||||
| 	vcpu->arch.shared->sprg1 = regs->sprg1; | ||||
| 	vcpu->arch.shared->sprg2 = regs->sprg2; | ||||
| 	vcpu->arch.shared->sprg3 = regs->sprg3; | ||||
| 	vcpu->arch.shared->sprg4 = regs->sprg4; | ||||
| 	vcpu->arch.shared->sprg5 = regs->sprg5; | ||||
| 	vcpu->arch.shared->sprg6 = regs->sprg6; | ||||
| 	vcpu->arch.shared->sprg7 = regs->sprg7; | ||||
| 	kvmppc_set_sprg0(vcpu, regs->sprg0); | ||||
| 	kvmppc_set_sprg1(vcpu, regs->sprg1); | ||||
| 	kvmppc_set_sprg2(vcpu, regs->sprg2); | ||||
| 	kvmppc_set_sprg3(vcpu, regs->sprg3); | ||||
| 	kvmppc_set_sprg4(vcpu, regs->sprg4); | ||||
| 	kvmppc_set_sprg5(vcpu, regs->sprg5); | ||||
| 	kvmppc_set_sprg6(vcpu, regs->sprg6); | ||||
| 	kvmppc_set_sprg7(vcpu, regs->sprg7); | ||||
| 
 | ||||
| 	for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) | ||||
| 		kvmppc_set_gpr(vcpu, i, regs->gpr[i]); | ||||
| @@ -1321,8 +1317,8 @@ static void get_sregs_base(struct kvm_vcpu *vcpu, | ||||
| 	sregs->u.e.csrr0 = vcpu->arch.csrr0; | ||||
| 	sregs->u.e.csrr1 = vcpu->arch.csrr1; | ||||
| 	sregs->u.e.mcsr = vcpu->arch.mcsr; | ||||
| 	sregs->u.e.esr = get_guest_esr(vcpu); | ||||
| 	sregs->u.e.dear = get_guest_dear(vcpu); | ||||
| 	sregs->u.e.esr = kvmppc_get_esr(vcpu); | ||||
| 	sregs->u.e.dear = kvmppc_get_dar(vcpu); | ||||
| 	sregs->u.e.tsr = vcpu->arch.tsr; | ||||
| 	sregs->u.e.tcr = vcpu->arch.tcr; | ||||
| 	sregs->u.e.dec = kvmppc_get_dec(vcpu, tb); | ||||
| @@ -1339,8 +1335,8 @@ static int set_sregs_base(struct kvm_vcpu *vcpu, | ||||
| 	vcpu->arch.csrr0 = sregs->u.e.csrr0; | ||||
| 	vcpu->arch.csrr1 = sregs->u.e.csrr1; | ||||
| 	vcpu->arch.mcsr = sregs->u.e.mcsr; | ||||
| 	set_guest_esr(vcpu, sregs->u.e.esr); | ||||
| 	set_guest_dear(vcpu, sregs->u.e.dear); | ||||
| 	kvmppc_set_esr(vcpu, sregs->u.e.esr); | ||||
| 	kvmppc_set_dar(vcpu, sregs->u.e.dear); | ||||
| 	vcpu->arch.vrsave = sregs->u.e.vrsave; | ||||
| 	kvmppc_set_tcr(vcpu, sregs->u.e.tcr); | ||||
| 
 | ||||
| @@ -1493,7 +1489,7 @@ int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg) | ||||
| 		val = get_reg_val(reg->id, vcpu->arch.dbg_reg.dac2); | ||||
| 		break; | ||||
| 	case KVM_REG_PPC_EPR: { | ||||
| 		u32 epr = get_guest_epr(vcpu); | ||||
| 		u32 epr = kvmppc_get_epr(vcpu); | ||||
| 		val = get_reg_val(reg->id, epr); | ||||
| 		break; | ||||
| 	} | ||||
| @@ -1788,6 +1784,57 @@ void kvm_guest_protect_msr(struct kvm_vcpu *vcpu, ulong prot_bitmap, bool set) | ||||
| #endif | ||||
| } | ||||
| 
 | ||||
| int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, enum xlate_instdata xlid, | ||||
| 		 enum xlate_readwrite xlrw, struct kvmppc_pte *pte) | ||||
| { | ||||
| 	int gtlb_index; | ||||
| 	gpa_t gpaddr; | ||||
| 
 | ||||
| #ifdef CONFIG_KVM_E500V2 | ||||
| 	if (!(vcpu->arch.shared->msr & MSR_PR) && | ||||
| 	    (eaddr & PAGE_MASK) == vcpu->arch.magic_page_ea) { | ||||
| 		pte->eaddr = eaddr; | ||||
| 		pte->raddr = (vcpu->arch.magic_page_pa & PAGE_MASK) | | ||||
| 			     (eaddr & ~PAGE_MASK); | ||||
| 		pte->vpage = eaddr >> PAGE_SHIFT; | ||||
| 		pte->may_read = true; | ||||
| 		pte->may_write = true; | ||||
| 		pte->may_execute = true; | ||||
| 
 | ||||
| 		return 0; | ||||
| 	} | ||||
| #endif | ||||
| 
 | ||||
| 	/* Check the guest TLB. */ | ||||
| 	switch (xlid) { | ||||
| 	case XLATE_INST: | ||||
| 		gtlb_index = kvmppc_mmu_itlb_index(vcpu, eaddr); | ||||
| 		break; | ||||
| 	case XLATE_DATA: | ||||
| 		gtlb_index = kvmppc_mmu_dtlb_index(vcpu, eaddr); | ||||
| 		break; | ||||
| 	default: | ||||
| 		BUG(); | ||||
| 	} | ||||
| 
 | ||||
| 	/* Do we have a TLB entry at all? */ | ||||
| 	if (gtlb_index < 0) | ||||
| 		return -ENOENT; | ||||
| 
 | ||||
| 	gpaddr = kvmppc_mmu_xlate(vcpu, gtlb_index, eaddr); | ||||
| 
 | ||||
| 	pte->eaddr = eaddr; | ||||
| 	pte->raddr = (gpaddr & PAGE_MASK) | (eaddr & ~PAGE_MASK); | ||||
| 	pte->vpage = eaddr >> PAGE_SHIFT; | ||||
| 
 | ||||
| 	/* XXX read permissions from the guest TLB */ | ||||
| 	pte->may_read = true; | ||||
| 	pte->may_write = true; | ||||
| 	pte->may_execute = true; | ||||
| 
 | ||||
| 	return 0; | ||||
| } | ||||
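A typical consumer is the common kvmppc_ld() path moved to generic code earlier in this series. Roughly, with the permission checks and the magic-page special case elided (a sketch only; XLATE_READ is the assumed read enumerator of enum xlate_readwrite):

	int sketch_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr)
	{
		struct kvmppc_pte pte;

		if (kvmppc_xlate(vcpu, *eaddr, XLATE_DATA, XLATE_READ, &pte))
			return EMULATE_FAIL;

		/* read through the memslots from the translated address */
		if (kvm_read_guest(vcpu->kvm, pte.raddr, ptr, size))
			return EMULATE_DO_MMIO;

		return EMULATE_DONE;
	}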
| 
 | ||||
| int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, | ||||
| 					 struct kvm_guest_debug *dbg) | ||||
| { | ||||
|  | ||||
| @@ -99,13 +99,6 @@ enum int_class { | ||||
| 
 | ||||
| void kvmppc_set_pending_interrupt(struct kvm_vcpu *vcpu, enum int_class type); | ||||
| 
 | ||||
| extern void kvmppc_mmu_destroy_44x(struct kvm_vcpu *vcpu); | ||||
| extern int kvmppc_core_emulate_op_44x(struct kvm_run *run, struct kvm_vcpu *vcpu, | ||||
| 				      unsigned int inst, int *advance); | ||||
| extern int kvmppc_core_emulate_mtspr_44x(struct kvm_vcpu *vcpu, int sprn, | ||||
| 					 ulong spr_val); | ||||
| extern int kvmppc_core_emulate_mfspr_44x(struct kvm_vcpu *vcpu, int sprn, | ||||
| 					 ulong *spr_val); | ||||
| extern void kvmppc_mmu_destroy_e500(struct kvm_vcpu *vcpu); | ||||
| extern int kvmppc_core_emulate_op_e500(struct kvm_run *run, | ||||
| 				       struct kvm_vcpu *vcpu, | ||||
|  | ||||
| @@ -165,16 +165,16 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val) | ||||
| 	 * guest (PR-mode only). | ||||
| 	 */ | ||||
| 	case SPRN_SPRG4: | ||||
| 		vcpu->arch.shared->sprg4 = spr_val; | ||||
| 		kvmppc_set_sprg4(vcpu, spr_val); | ||||
| 		break; | ||||
| 	case SPRN_SPRG5: | ||||
| 		vcpu->arch.shared->sprg5 = spr_val; | ||||
| 		kvmppc_set_sprg5(vcpu, spr_val); | ||||
| 		break; | ||||
| 	case SPRN_SPRG6: | ||||
| 		vcpu->arch.shared->sprg6 = spr_val; | ||||
| 		kvmppc_set_sprg6(vcpu, spr_val); | ||||
| 		break; | ||||
| 	case SPRN_SPRG7: | ||||
| 		vcpu->arch.shared->sprg7 = spr_val; | ||||
| 		kvmppc_set_sprg7(vcpu, spr_val); | ||||
| 		break; | ||||
| 
 | ||||
| 	case SPRN_IVPR: | ||||
|  | ||||
| @@ -21,7 +21,6 @@ | ||||
| #include <asm/ppc_asm.h> | ||||
| #include <asm/kvm_asm.h> | ||||
| #include <asm/reg.h> | ||||
| #include <asm/mmu-44x.h> | ||||
| #include <asm/page.h> | ||||
| #include <asm/asm-offsets.h> | ||||
| 
 | ||||
| @@ -424,10 +423,6 @@ lightweight_exit: | ||||
| 	mtspr	SPRN_PID1, r3 | ||||
| #endif | ||||
| 
 | ||||
| #ifdef CONFIG_44x | ||||
| 	iccci	0, 0 /* XXX hack */ | ||||
| #endif | ||||
| 
 | ||||
| 	/* Load some guest volatiles. */ | ||||
| 	lwz	r0, VCPU_GPR(R0)(r4) | ||||
| 	lwz	r2, VCPU_GPR(R2)(r4) | ||||
|  | ||||
| @@ -24,12 +24,10 @@ | ||||
| #include <asm/ppc_asm.h> | ||||
| #include <asm/kvm_asm.h> | ||||
| #include <asm/reg.h> | ||||
| #include <asm/mmu-44x.h> | ||||
| #include <asm/page.h> | ||||
| #include <asm/asm-compat.h> | ||||
| #include <asm/asm-offsets.h> | ||||
| #include <asm/bitsperlong.h> | ||||
| #include <asm/thread_info.h> | ||||
| 
 | ||||
| #ifdef CONFIG_64BIT | ||||
| #include <asm/exception-64e.h> | ||||
| @@ -122,38 +120,14 @@ | ||||
| 1: | ||||
| 
 | ||||
| 	.if	\flags & NEED_EMU | ||||
| 	/* | ||||
| 	 * This assumes you have external PID support. | ||||
| 	 * To support a bookehv CPU without external PID, you'll | ||||
| 	 * need to look up the TLB entry and create a temporary mapping. | ||||
| 	 * | ||||
| 	 * FIXME: we don't currently handle if the lwepx faults.  PR-mode | ||||
| 	 * booke doesn't handle it either.  Since Linux doesn't use | ||||
| 	 * broadcast tlbivax anymore, the only way this should happen is | ||||
| 	 * if the guest maps its memory execute-but-not-read, or if we | ||||
| 	 * somehow take a TLB miss in the middle of this entry code and | ||||
| 	 * evict the relevant entry.  On e500mc, all kernel lowmem is | ||||
| 	 * bolted into TLB1 large page mappings, and we don't use | ||||
| 	 * broadcast invalidates, so we should not take a TLB miss here. | ||||
| 	 * | ||||
| 	 * Later we'll need to deal with faults here.  Disallowing guest | ||||
| 	 * mappings that are execute-but-not-read could be an option on | ||||
| 	 * e500mc, but not on chips with an LRAT if it is used. | ||||
| 	 */ | ||||
| 
 | ||||
| 	mfspr	r3, SPRN_EPLC	/* will already have correct ELPID and EGS */ | ||||
| 	PPC_STL	r15, VCPU_GPR(R15)(r4) | ||||
| 	PPC_STL	r16, VCPU_GPR(R16)(r4) | ||||
| 	PPC_STL	r17, VCPU_GPR(R17)(r4) | ||||
| 	PPC_STL	r18, VCPU_GPR(R18)(r4) | ||||
| 	PPC_STL	r19, VCPU_GPR(R19)(r4) | ||||
| 	mr	r8, r3 | ||||
| 	PPC_STL	r20, VCPU_GPR(R20)(r4) | ||||
| 	rlwimi	r8, r6, EPC_EAS_SHIFT - MSR_IR_LG, EPC_EAS | ||||
| 	PPC_STL	r21, VCPU_GPR(R21)(r4) | ||||
| 	rlwimi	r8, r6, EPC_EPR_SHIFT - MSR_PR_LG, EPC_EPR | ||||
| 	PPC_STL	r22, VCPU_GPR(R22)(r4) | ||||
| 	rlwimi	r8, r10, EPC_EPID_SHIFT, EPC_EPID | ||||
| 	PPC_STL	r23, VCPU_GPR(R23)(r4) | ||||
| 	PPC_STL	r24, VCPU_GPR(R24)(r4) | ||||
| 	PPC_STL	r25, VCPU_GPR(R25)(r4) | ||||
| @@ -163,33 +137,15 @@ | ||||
| 	PPC_STL	r29, VCPU_GPR(R29)(r4) | ||||
| 	PPC_STL	r30, VCPU_GPR(R30)(r4) | ||||
| 	PPC_STL	r31, VCPU_GPR(R31)(r4) | ||||
| 	mtspr	SPRN_EPLC, r8 | ||||
| 
 | ||||
| 	/* disable preemption, so we are sure we hit the fixup handler */ | ||||
| 	CURRENT_THREAD_INFO(r8, r1) | ||||
| 	li	r7, 1 | ||||
| 	stw	r7, TI_PREEMPT(r8) | ||||
| 
 | ||||
| 	isync | ||||
| 
 | ||||
| 	/* | ||||
| 	 * In case the read goes wrong, we catch it and write an invalid value | ||||
| 	 * in LAST_INST instead. | ||||
| 	 * We don't use external PID support. lwepx faults would need to be | ||||
| 	 * handled by KVM and this implies additional code in DO_KVM (for | ||||
| 	 * DTB_MISS, DSI and LRAT) to check ESR[EPID] and EPLC[EGS] which | ||||
| 	 * is too intrusive for the host. Get the last instruction in | ||||
| 	 * kvmppc_get_last_inst(). | ||||
| 	 */ | ||||
| 1:	lwepx	r9, 0, r5 | ||||
| 2: | ||||
| .section .fixup, "ax" | ||||
| 3:	li	r9, KVM_INST_FETCH_FAILED | ||||
| 	b	2b | ||||
| .previous | ||||
| .section __ex_table,"a" | ||||
| 	PPC_LONG_ALIGN | ||||
| 	PPC_LONG 1b,3b | ||||
| .previous | ||||
| 
 | ||||
| 	mtspr	SPRN_EPLC, r3 | ||||
| 	li	r7, 0 | ||||
| 	stw	r7, TI_PREEMPT(r8) | ||||
| 	li	r9, KVM_INST_FETCH_FAILED | ||||
| 	stw	r9, VCPU_LAST_INST(r4) | ||||
| 	.endif | ||||
| 
 | ||||
| @@ -441,6 +397,7 @@ _GLOBAL(kvmppc_resume_host) | ||||
| #ifdef CONFIG_64BIT | ||||
| 	PPC_LL	r3, PACA_SPRG_VDSO(r13) | ||||
| #endif | ||||
| 	mfspr	r5, SPRN_SPRG9 | ||||
| 	PPC_STD(r6, VCPU_SHARED_SPRG4, r11) | ||||
| 	mfspr	r8, SPRN_SPRG6 | ||||
| 	PPC_STD(r7, VCPU_SHARED_SPRG5, r11) | ||||
| @@ -448,6 +405,7 @@ _GLOBAL(kvmppc_resume_host) | ||||
| #ifdef CONFIG_64BIT | ||||
| 	mtspr	SPRN_SPRG_VDSO_WRITE, r3 | ||||
| #endif | ||||
| 	PPC_STD(r5, VCPU_SPRG9, r4) | ||||
| 	PPC_STD(r8, VCPU_SHARED_SPRG6, r11) | ||||
| 	mfxer	r3 | ||||
| 	PPC_STD(r9, VCPU_SHARED_SPRG7, r11) | ||||
| @@ -682,7 +640,9 @@ lightweight_exit: | ||||
| 	mtspr	SPRN_SPRG5W, r6 | ||||
| 	PPC_LD(r8, VCPU_SHARED_SPRG7, r11) | ||||
| 	mtspr	SPRN_SPRG6W, r7 | ||||
| 	PPC_LD(r5, VCPU_SPRG9, r4) | ||||
| 	mtspr	SPRN_SPRG7W, r8 | ||||
| 	mtspr	SPRN_SPRG9, r5 | ||||
| 
 | ||||
| 	/* Load some guest volatiles. */ | ||||
| 	PPC_LL	r3, VCPU_LR(r4) | ||||
|  | ||||
| @@ -250,6 +250,14 @@ int kvmppc_core_emulate_mtspr_e500(struct kvm_vcpu *vcpu, int sprn, ulong spr_val) | ||||
| 				spr_val); | ||||
| 		break; | ||||
| 
 | ||||
| 	case SPRN_PWRMGTCR0: | ||||
| 		/*
 | ||||
| 		 * Guest relies on host power management configurations | ||||
| 		 * Treat the request as a general store | ||||
| 		 */ | ||||
| 		vcpu->arch.pwrmgtcr0 = spr_val; | ||||
| 		break; | ||||
| 
 | ||||
| 	/* extra exceptions */ | ||||
| 	case SPRN_IVOR32: | ||||
| 		vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_UNAVAIL] = spr_val; | ||||
| @@ -368,6 +376,10 @@ int kvmppc_core_emulate_mfspr_e500(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val) | ||||
| 		*spr_val = vcpu->arch.eptcfg; | ||||
| 		break; | ||||
| 
 | ||||
| 	case SPRN_PWRMGTCR0: | ||||
| 		*spr_val = vcpu->arch.pwrmgtcr0; | ||||
| 		break; | ||||
| 
 | ||||
| 	/* extra exceptions */ | ||||
| 	case SPRN_IVOR32: | ||||
| 		*spr_val = vcpu->arch.ivor[BOOKE_IRQPRIO_SPE_UNAVAIL]; | ||||
|  | ||||
| @@ -107,11 +107,15 @@ static u32 get_host_mas0(unsigned long eaddr) | ||||
| { | ||||
| 	unsigned long flags; | ||||
| 	u32 mas0; | ||||
| 	u32 mas4; | ||||
| 
 | ||||
| 	local_irq_save(flags); | ||||
| 	mtspr(SPRN_MAS6, 0); | ||||
| 	mas4 = mfspr(SPRN_MAS4); | ||||
| 	mtspr(SPRN_MAS4, mas4 & ~MAS4_TLBSEL_MASK); | ||||
| 	asm volatile("tlbsx 0, %0" : : "b" (eaddr & ~CONFIG_PAGE_OFFSET)); | ||||
| 	mas0 = mfspr(SPRN_MAS0); | ||||
| 	mtspr(SPRN_MAS4, mas4); | ||||
| 	local_irq_restore(flags); | ||||
| 
 | ||||
| 	return mas0; | ||||
| @@ -607,6 +611,104 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr, | ||||
| 	} | ||||
| } | ||||
| 
 | ||||
| #ifdef CONFIG_KVM_BOOKE_HV | ||||
| int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, | ||||
| 			  u32 *instr) | ||||
| { | ||||
| 	gva_t geaddr; | ||||
| 	hpa_t addr; | ||||
| 	hfn_t pfn; | ||||
| 	hva_t eaddr; | ||||
| 	u32 mas1, mas2, mas3; | ||||
| 	u64 mas7_mas3; | ||||
| 	struct page *page; | ||||
| 	unsigned int addr_space, psize_shift; | ||||
| 	bool pr; | ||||
| 	unsigned long flags; | ||||
| 
 | ||||
| 	/* Search TLB for guest pc to get the real address */ | ||||
| 	geaddr = kvmppc_get_pc(vcpu); | ||||
| 
 | ||||
| 	addr_space = (vcpu->arch.shared->msr & MSR_IS) >> MSR_IR_LG; | ||||
| 
 | ||||
| 	local_irq_save(flags); | ||||
| 	mtspr(SPRN_MAS6, (vcpu->arch.pid << MAS6_SPID_SHIFT) | addr_space); | ||||
| 	mtspr(SPRN_MAS5, MAS5_SGS | vcpu->kvm->arch.lpid); | ||||
| 	asm volatile("tlbsx 0, %[geaddr]\n" : : | ||||
| 		     [geaddr] "r" (geaddr)); | ||||
| 	mtspr(SPRN_MAS5, 0); | ||||
| 	mtspr(SPRN_MAS8, 0); | ||||
| 	mas1 = mfspr(SPRN_MAS1); | ||||
| 	mas2 = mfspr(SPRN_MAS2); | ||||
| 	mas3 = mfspr(SPRN_MAS3); | ||||
| #ifdef CONFIG_64BIT | ||||
| 	mas7_mas3 = mfspr(SPRN_MAS7_MAS3); | ||||
| #else | ||||
| 	mas7_mas3 = ((u64)mfspr(SPRN_MAS7) << 32) | mas3; | ||||
| #endif | ||||
| 	local_irq_restore(flags); | ||||
| 
 | ||||
| 	/*
 | ||||
| 	 * If the TLB entry for guest pc was evicted, return to the guest. | ||||
| 	 * A valid TLB entry is likely to be found on the next attempt. | ||||
| 	 */ | ||||
| 	if (!(mas1 & MAS1_VALID)) | ||||
| 		return EMULATE_AGAIN; | ||||
| 
 | ||||
| 	/*
 | ||||
| 	 * Another thread may rewrite the TLB entry in parallel; don't | ||||
| 	 * execute from the address if the execute permission is not set | ||||
| 	 */ | ||||
| 	pr = vcpu->arch.shared->msr & MSR_PR; | ||||
| 	if (unlikely((pr && !(mas3 & MAS3_UX)) || | ||||
| 		     (!pr && !(mas3 & MAS3_SX)))) { | ||||
| 		pr_err_ratelimited( | ||||
| 			"%s: Instruction emulation from guest address %08lx without execute permission\n", | ||||
| 			__func__, geaddr); | ||||
| 		return EMULATE_AGAIN; | ||||
| 	} | ||||
| 
 | ||||
| 	/*
 | ||||
| 	 * The real address will be mapped by a cacheable, memory coherent, | ||||
| 	 * write-back page. Check for mismatches when LRAT is used. | ||||
| 	 */ | ||||
| 	if (has_feature(vcpu, VCPU_FTR_MMU_V2) && | ||||
| 	    unlikely((mas2 & MAS2_I) || (mas2 & MAS2_W) || !(mas2 & MAS2_M))) { | ||||
| 		pr_err_ratelimited( | ||||
| 			"%s: Instruction emulation from guest address %08lx mismatches storage attributes\n", | ||||
| 			__func__, geaddr); | ||||
| 		return EMULATE_AGAIN; | ||||
| 	} | ||||
| 
 | ||||
| 	/* Get pfn */ | ||||
| 	psize_shift = MAS1_GET_TSIZE(mas1) + 10; | ||||
| 	addr = (mas7_mas3 & (~0ULL << psize_shift)) | | ||||
| 	       (geaddr & ((1ULL << psize_shift) - 1ULL)); | ||||
| 	pfn = addr >> PAGE_SHIFT; | ||||
| 
 | ||||
| 	/* Guard against emulation from the device area */ | ||||
| 	if (unlikely(!page_is_ram(pfn))) { | ||||
| 		pr_err_ratelimited("%s: Instruction emulation from non-RAM host address %08llx is not supported\n", | ||||
| 			 __func__, addr); | ||||
| 		return EMULATE_AGAIN; | ||||
| 	} | ||||
| 
 | ||||
| 	/* Map a page and get guest's instruction */ | ||||
| 	page = pfn_to_page(pfn); | ||||
| 	eaddr = (unsigned long)kmap_atomic(page); | ||||
| 	*instr = *(u32 *)(eaddr | (unsigned long)(addr & ~PAGE_MASK)); | ||||
| 	kunmap_atomic((u32 *)eaddr); | ||||
| 
 | ||||
| 	return EMULATE_DONE; | ||||
| } | ||||
| #else | ||||
| int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, | ||||
| 			  u32 *instr) | ||||
| { | ||||
| 	return EMULATE_AGAIN; | ||||
| } | ||||
| #endif | ||||
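The generic kvmppc_get_last_inst() wrapper added earlier in the series falls back to this slow path only when the fast save in the exit handler left KVM_INST_FETCH_FAILED behind; approximately (a sketch of the caller side, not the verbatim header code):

	static inline int sketch_get_last_inst(struct kvm_vcpu *vcpu,
					       enum instruction_type type,
					       u32 *inst)
	{
		int ret = EMULATE_DONE;

		/* retry the fetch only if the exit path could not save it */
		if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
			ret = kvmppc_load_last_inst(vcpu, type,
						    &vcpu->arch.last_inst);

		*inst = vcpu->arch.last_inst;
		return ret;
	}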
| 
 | ||||
| /************* MMU Notifiers *************/ | ||||
| 
 | ||||
| int kvm_unmap_hva(struct kvm *kvm, unsigned long hva) | ||||
|  | ||||
| @@ -110,7 +110,7 @@ void kvmppc_mmu_msr_notify(struct kvm_vcpu *vcpu, u32 old_msr) | ||||
| { | ||||
| } | ||||
| 
 | ||||
| static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu_on_cpu); | ||||
| static DEFINE_PER_CPU(struct kvm_vcpu *[KVMPPC_NR_LPIDS], last_vcpu_of_lpid); | ||||
| 
 | ||||
| static void kvmppc_core_vcpu_load_e500mc(struct kvm_vcpu *vcpu, int cpu) | ||||
| { | ||||
| @@ -141,9 +141,9 @@ static void kvmppc_core_vcpu_load_e500mc(struct kvm_vcpu *vcpu, int cpu) | ||||
| 	mtspr(SPRN_GESR, vcpu->arch.shared->esr); | ||||
| 
 | ||||
| 	if (vcpu->arch.oldpir != mfspr(SPRN_PIR) || | ||||
| 	    __get_cpu_var(last_vcpu_on_cpu) != vcpu) { | ||||
| 	    __get_cpu_var(last_vcpu_of_lpid)[vcpu->kvm->arch.lpid] != vcpu) { | ||||
| 		kvmppc_e500_tlbil_all(vcpu_e500); | ||||
| 		__get_cpu_var(last_vcpu_on_cpu) = vcpu; | ||||
| 		__get_cpu_var(last_vcpu_of_lpid)[vcpu->kvm->arch.lpid] = vcpu; | ||||
| 	} | ||||
| 
 | ||||
| 	kvmppc_load_guest_fp(vcpu); | ||||
| @@ -267,14 +267,32 @@ static int kvmppc_core_set_sregs_e500mc(struct kvm_vcpu *vcpu, | ||||
| static int kvmppc_get_one_reg_e500mc(struct kvm_vcpu *vcpu, u64 id, | ||||
| 			      union kvmppc_one_reg *val) | ||||
| { | ||||
| 	int r = kvmppc_get_one_reg_e500_tlb(vcpu, id, val); | ||||
| 	int r = 0; | ||||
| 
 | ||||
| 	switch (id) { | ||||
| 	case KVM_REG_PPC_SPRG9: | ||||
| 		*val = get_reg_val(id, vcpu->arch.sprg9); | ||||
| 		break; | ||||
| 	default: | ||||
| 		r = kvmppc_get_one_reg_e500_tlb(vcpu, id, val); | ||||
| 	} | ||||
| 
 | ||||
| 	return r; | ||||
| } | ||||
| 
 | ||||
| static int kvmppc_set_one_reg_e500mc(struct kvm_vcpu *vcpu, u64 id, | ||||
| 			      union kvmppc_one_reg *val) | ||||
| { | ||||
| 	int r = kvmppc_set_one_reg_e500_tlb(vcpu, id, val); | ||||
| 	int r = 0; | ||||
| 
 | ||||
| 	switch (id) { | ||||
| 	case KVM_REG_PPC_SPRG9: | ||||
| 		vcpu->arch.sprg9 = set_reg_val(id, *val); | ||||
| 		break; | ||||
| 	default: | ||||
| 		r = kvmppc_set_one_reg_e500_tlb(vcpu, id, val); | ||||
| 	} | ||||
| 
 | ||||
| 	return r; | ||||
| } | ||||
| 
 | ||||
|  | ||||
| @@ -207,36 +207,28 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt) | ||||
| 	return emulated; | ||||
| } | ||||
| 
 | ||||
| /* XXX to do:
 | ||||
|  * lhax | ||||
|  * lhaux | ||||
|  * lswx | ||||
|  * lswi | ||||
|  * stswx | ||||
|  * stswi | ||||
|  * lha | ||||
|  * lhau | ||||
|  * lmw | ||||
|  * stmw | ||||
|  * | ||||
|  */ | ||||
| /* XXX Should probably auto-generate instruction decoding for a particular core
 | ||||
|  * from opcode tables in the future. */ | ||||
| int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu) | ||||
| { | ||||
| 	u32 inst = kvmppc_get_last_inst(vcpu); | ||||
| 	int ra = get_ra(inst); | ||||
| 	int rs = get_rs(inst); | ||||
| 	int rt = get_rt(inst); | ||||
| 	int sprn = get_sprn(inst); | ||||
| 	enum emulation_result emulated = EMULATE_DONE; | ||||
| 	u32 inst; | ||||
| 	int rs, rt, sprn; | ||||
| 	enum emulation_result emulated; | ||||
| 	int advance = 1; | ||||
| 
 | ||||
| 	/* this default type might be overwritten by subcategories */ | ||||
| 	kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS); | ||||
| 
 | ||||
| 	emulated = kvmppc_get_last_inst(vcpu, false, &inst); | ||||
| 	if (emulated != EMULATE_DONE) | ||||
| 		return emulated; | ||||
| 
 | ||||
| 	pr_debug("Emulating opcode %d / %d\n", get_op(inst), get_xop(inst)); | ||||
| 
 | ||||
| 	rs = get_rs(inst); | ||||
| 	rt = get_rt(inst); | ||||
| 	sprn = get_sprn(inst); | ||||
| 
 | ||||
| 	switch (get_op(inst)) { | ||||
| 	case OP_TRAP: | ||||
| #ifdef CONFIG_PPC_BOOK3S | ||||
| @@ -264,200 +256,24 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu) | ||||
| #endif | ||||
| 			advance = 0; | ||||
| 			break; | ||||
| 		case OP_31_XOP_LWZX: | ||||
| 			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_LBZX: | ||||
| 			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_LBZUX: | ||||
| 			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); | ||||
| 			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_STWX: | ||||
| 			emulated = kvmppc_handle_store(run, vcpu, | ||||
| 						       kvmppc_get_gpr(vcpu, rs), | ||||
| 			                               4, 1); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_STBX: | ||||
| 			emulated = kvmppc_handle_store(run, vcpu, | ||||
| 						       kvmppc_get_gpr(vcpu, rs), | ||||
| 			                               1, 1); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_STBUX: | ||||
| 			emulated = kvmppc_handle_store(run, vcpu, | ||||
| 						       kvmppc_get_gpr(vcpu, rs), | ||||
| 			                               1, 1); | ||||
| 			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_LHAX: | ||||
| 			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_LHZX: | ||||
| 			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_LHZUX: | ||||
| 			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); | ||||
| 			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_MFSPR: | ||||
| 			emulated = kvmppc_emulate_mfspr(vcpu, sprn, rt); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_STHX: | ||||
| 			emulated = kvmppc_handle_store(run, vcpu, | ||||
| 						       kvmppc_get_gpr(vcpu, rs), | ||||
| 			                               2, 1); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_STHUX: | ||||
| 			emulated = kvmppc_handle_store(run, vcpu, | ||||
| 						       kvmppc_get_gpr(vcpu, rs), | ||||
| 			                               2, 1); | ||||
| 			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_MTSPR: | ||||
| 			emulated = kvmppc_emulate_mtspr(vcpu, sprn, rs); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_DCBST: | ||||
| 		case OP_31_XOP_DCBF: | ||||
| 		case OP_31_XOP_DCBI: | ||||
| 			/* Do nothing. The guest is performing dcbi because
 | ||||
| 			 * hardware DMA is not snooped by the dcache, but | ||||
| 			 * emulated DMA either goes through the dcache as | ||||
| 			 * normal writes, or the host kernel has handled dcache | ||||
| 			 * coherence. */ | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_LWBRX: | ||||
| 			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_TLBSYNC: | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_STWBRX: | ||||
| 			emulated = kvmppc_handle_store(run, vcpu, | ||||
| 						       kvmppc_get_gpr(vcpu, rs), | ||||
| 			                               4, 0); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_LHBRX: | ||||
| 			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0); | ||||
| 			break; | ||||
| 
 | ||||
| 		case OP_31_XOP_STHBRX: | ||||
| 			emulated = kvmppc_handle_store(run, vcpu, | ||||
| 						       kvmppc_get_gpr(vcpu, rs), | ||||
| 			                               2, 0); | ||||
| 			break; | ||||
| 
 | ||||
| 		default: | ||||
| 			/* Attempt core-specific emulation below. */ | ||||
| 			emulated = EMULATE_FAIL; | ||||
| 		} | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_LWZ: | ||||
| 		emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	/* TBD: Add support for other 64 bit load variants like ldu, ldux, ldx etc. */ | ||||
| 	case OP_LD: | ||||
| 		rt = get_rt(inst); | ||||
| 		emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_LWZU: | ||||
| 		emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); | ||||
| 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_LBZ: | ||||
| 		emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_LBZU: | ||||
| 		emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); | ||||
| 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_STW: | ||||
| 		emulated = kvmppc_handle_store(run, vcpu, | ||||
| 					       kvmppc_get_gpr(vcpu, rs), | ||||
| 		                               4, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	/* TBD: Add support for other 64 bit store variants like stdu, stdux, stdx etc. */ | ||||
| 	case OP_STD: | ||||
| 		rs = get_rs(inst); | ||||
| 		emulated = kvmppc_handle_store(run, vcpu, | ||||
| 					       kvmppc_get_gpr(vcpu, rs), | ||||
| 		                               8, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_STWU: | ||||
| 		emulated = kvmppc_handle_store(run, vcpu, | ||||
| 					       kvmppc_get_gpr(vcpu, rs), | ||||
| 		                               4, 1); | ||||
| 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_STB: | ||||
| 		emulated = kvmppc_handle_store(run, vcpu, | ||||
| 					       kvmppc_get_gpr(vcpu, rs), | ||||
| 		                               1, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_STBU: | ||||
| 		emulated = kvmppc_handle_store(run, vcpu, | ||||
| 					       kvmppc_get_gpr(vcpu, rs), | ||||
| 		                               1, 1); | ||||
| 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_LHZ: | ||||
| 		emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_LHZU: | ||||
| 		emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); | ||||
| 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_LHA: | ||||
| 		emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_LHAU: | ||||
| 		emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); | ||||
| 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_STH: | ||||
| 		emulated = kvmppc_handle_store(run, vcpu, | ||||
| 					       kvmppc_get_gpr(vcpu, rs), | ||||
| 		                               2, 1); | ||||
| 		break; | ||||
| 
 | ||||
| 	case OP_STHU: | ||||
| 		emulated = kvmppc_handle_store(run, vcpu, | ||||
| 					       kvmppc_get_gpr(vcpu, rs), | ||||
| 		                               2, 1); | ||||
| 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); | ||||
| 		break; | ||||
| 
 | ||||
| 	default: | ||||
| 		emulated = EMULATE_FAIL; | ||||
| 	} | ||||
|  | ||||
new file: arch/powerpc/kvm/emulate_loadstore.c (272 lines)
| @@ -0,0 +1,272 @@ | ||||
/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, version 2, as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
 *
 * Copyright IBM Corp. 2007
 * Copyright 2011 Freescale Semiconductor, Inc.
 *
 * Authors: Hollis Blanchard <hollisb@us.ibm.com>
 */

#include <linux/jiffies.h>
#include <linux/hrtimer.h>
#include <linux/types.h>
#include <linux/string.h>
#include <linux/kvm_host.h>
#include <linux/clockchips.h>

#include <asm/reg.h>
#include <asm/time.h>
#include <asm/byteorder.h>
#include <asm/kvm_ppc.h>
#include <asm/disassemble.h>
#include <asm/ppc-opcode.h>
#include "timing.h"
#include "trace.h"

/* XXX to do:
 * lhax
 * lhaux
 * lswx
 * lswi
 * stswx
 * stswi
 * lha
 * lhau
 * lmw
 * stmw
 *
 */
int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
{
	struct kvm_run *run = vcpu->run;
	u32 inst;
	int ra, rs, rt;
	enum emulation_result emulated;
	int advance = 1;

	/* this default type might be overwritten by subcategories */
	kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);

	emulated = kvmppc_get_last_inst(vcpu, false, &inst);
	if (emulated != EMULATE_DONE)
		return emulated;

	ra = get_ra(inst);
	rs = get_rs(inst);
	rt = get_rt(inst);

	switch (get_op(inst)) {
	case 31:
		switch (get_xop(inst)) {
		case OP_31_XOP_LWZX:
			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
			break;

		case OP_31_XOP_LBZX:
			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
			break;

		case OP_31_XOP_LBZUX:
			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
			break;

		case OP_31_XOP_STWX:
			emulated = kvmppc_handle_store(run, vcpu,
						       kvmppc_get_gpr(vcpu, rs),
			                               4, 1);
			break;

		case OP_31_XOP_STBX:
			emulated = kvmppc_handle_store(run, vcpu,
						       kvmppc_get_gpr(vcpu, rs),
			                               1, 1);
			break;

		case OP_31_XOP_STBUX:
			emulated = kvmppc_handle_store(run, vcpu,
						       kvmppc_get_gpr(vcpu, rs),
			                               1, 1);
			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
			break;

		case OP_31_XOP_LHAX:
			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
			break;

		case OP_31_XOP_LHZX:
			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
			break;

		case OP_31_XOP_LHZUX:
			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
			break;

		case OP_31_XOP_STHX:
			emulated = kvmppc_handle_store(run, vcpu,
						       kvmppc_get_gpr(vcpu, rs),
			                               2, 1);
			break;

		case OP_31_XOP_STHUX:
			emulated = kvmppc_handle_store(run, vcpu,
						       kvmppc_get_gpr(vcpu, rs),
			                               2, 1);
			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
			break;

		case OP_31_XOP_DCBST:
		case OP_31_XOP_DCBF:
		case OP_31_XOP_DCBI:
			/* Do nothing. The guest is performing dcbi because
			 * hardware DMA is not snooped by the dcache, but
			 * emulated DMA either goes through the dcache as
			 * normal writes, or the host kernel has handled dcache
			 * coherence. */
			break;

		case OP_31_XOP_LWBRX:
			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0);
			break;

		case OP_31_XOP_STWBRX:
			emulated = kvmppc_handle_store(run, vcpu,
						       kvmppc_get_gpr(vcpu, rs),
			                               4, 0);
			break;

		case OP_31_XOP_LHBRX:
			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0);
			break;

		case OP_31_XOP_STHBRX:
			emulated = kvmppc_handle_store(run, vcpu,
						       kvmppc_get_gpr(vcpu, rs),
			                               2, 0);
			break;

		default:
			emulated = EMULATE_FAIL;
			break;
		}
		break;

	case OP_LWZ:
		emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
		break;

	/* TBD: Add support for other 64 bit load variants like ldu, ldux, ldx etc. */
	case OP_LD:
		rt = get_rt(inst);
		emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
		break;

	case OP_LWZU:
		emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
		break;

	case OP_LBZ:
		emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
		break;

	case OP_LBZU:
		emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
		break;

	case OP_STW:
		emulated = kvmppc_handle_store(run, vcpu,
					       kvmppc_get_gpr(vcpu, rs),
		                               4, 1);
		break;

	/* TBD: Add support for other 64 bit store variants like stdu, stdux, stdx etc. */
	case OP_STD:
		rs = get_rs(inst);
		emulated = kvmppc_handle_store(run, vcpu,
					       kvmppc_get_gpr(vcpu, rs),
		                               8, 1);
		break;

	case OP_STWU:
		emulated = kvmppc_handle_store(run, vcpu,
					       kvmppc_get_gpr(vcpu, rs),
		                               4, 1);
		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
		break;

	case OP_STB:
		emulated = kvmppc_handle_store(run, vcpu,
					       kvmppc_get_gpr(vcpu, rs),
		                               1, 1);
		break;

	case OP_STBU:
		emulated = kvmppc_handle_store(run, vcpu,
					       kvmppc_get_gpr(vcpu, rs),
		                               1, 1);
		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
		break;

	case OP_LHZ:
		emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
		break;

	case OP_LHZU:
		emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
		break;

	case OP_LHA:
		emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
		break;

	case OP_LHAU:
		emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
		break;

	case OP_STH:
		emulated = kvmppc_handle_store(run, vcpu,
					       kvmppc_get_gpr(vcpu, rs),
		                               2, 1);
		break;

	case OP_STHU:
		emulated = kvmppc_handle_store(run, vcpu,
					       kvmppc_get_gpr(vcpu, rs),
		                               2, 1);
		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
		break;

	default:
		emulated = EMULATE_FAIL;
		break;
	}

	if (emulated == EMULATE_FAIL) {
		advance = 0;
		kvmppc_core_queue_program(vcpu, 0);
	}

	trace_kvm_ppc_instr(inst, kvmppc_get_pc(vcpu), emulated);

	/* Advance past emulated instruction. */
	if (advance)
		kvmppc_set_pc(vcpu, kvmppc_get_pc(vcpu) + 4);

	return emulated;
}
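
As a rough illustration of the field extraction done by the get_op(),
get_xop(), get_rt() and get_ra() helpers used above (a minimal standalone
sketch, not kernel code -- the real helpers live in asm/disassemble.h):

#include <stdint.h>
#include <stdio.h>

/* PPC D-form fields: primary opcode in bits 0-5, RT in 6-10, RA in 11-15
 * (IBM bit numbering); op-31 instructions additionally carry an extended
 * opcode in bits 21-30, i.e. (inst >> 1) & 0x3ff. */
static unsigned int demo_get_op(uint32_t inst) { return inst >> 26; }
static unsigned int demo_get_rt(uint32_t inst) { return (inst >> 21) & 0x1f; }
static unsigned int demo_get_ra(uint32_t inst) { return (inst >> 16) & 0x1f; }

int main(void)
{
	uint32_t inst = 0x80830000; /* lwz r4, 0(r3) */

	/* prints "op=32 rt=4 ra=3" -- op 32 is OP_LWZ above */
	printf("op=%u rt=%u ra=%u\n",
	       demo_get_op(inst), demo_get_rt(inst), demo_get_ra(inst));
	return 0;
}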
@@ -190,6 +190,25 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
		vcpu->arch.magic_page_pa = param1 & ~0xfffULL;
		vcpu->arch.magic_page_ea = param2 & ~0xfffULL;

#ifdef CONFIG_PPC_64K_PAGES
		/*
		 * Make sure our 4k magic page is in the same window of a 64k
		 * page within the guest and within the host's page.
		 */
		if ((vcpu->arch.magic_page_pa & 0xf000) !=
		    ((ulong)vcpu->arch.shared & 0xf000)) {
			void *old_shared = vcpu->arch.shared;
			ulong shared = (ulong)vcpu->arch.shared;
			void *new_shared;

			shared &= PAGE_MASK;
			shared |= vcpu->arch.magic_page_pa & 0xf000;
			new_shared = (void*)shared;
			memcpy(new_shared, old_shared, 0x1000);
			vcpu->arch.shared = new_shared;
		}
#endif

		r2 = KVM_MAGIC_FEAT_SR | KVM_MAGIC_FEAT_MAS0_TO_SPRG7;

		r = EV_SUCCESS;
@@ -198,7 +217,6 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
	case KVM_HCALL_TOKEN(KVM_HC_FEATURES):
		r = EV_SUCCESS;
#if defined(CONFIG_PPC_BOOK3S) || defined(CONFIG_KVM_E500V2)
		/* XXX Missing magic page on 44x */
		r2 |= (1 << KVM_FEATURE_MAGIC_PAGE);
#endif

@@ -254,13 +272,16 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
	enum emulation_result er;
	int r;

	er = kvmppc_emulate_instruction(run, vcpu);
	er = kvmppc_emulate_loadstore(vcpu);
	switch (er) {
	case EMULATE_DONE:
		/* Future optimization: only reload non-volatiles if they were
		 * actually modified. */
		r = RESUME_GUEST_NV;
		break;
	case EMULATE_AGAIN:
		r = RESUME_GUEST;
		break;
	case EMULATE_DO_MMIO:
		run->exit_reason = KVM_EXIT_MMIO;
		/* We must reload nonvolatiles because "update" load/store
@@ -270,11 +291,15 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
		r = RESUME_HOST_NV;
		break;
	case EMULATE_FAIL:
	{
		u32 last_inst;

		kvmppc_get_last_inst(vcpu, false, &last_inst);
		/* XXX Deliver Program interrupt to guest. */
		printk(KERN_EMERG "%s: emulation failed (%08x)\n", __func__,
		       kvmppc_get_last_inst(vcpu));
		pr_emerg("%s: emulation failed (%08x)\n", __func__, last_inst);
		r = RESUME_HOST;
		break;
	}
	default:
		WARN_ON(1);
		r = RESUME_GUEST;
@@ -284,6 +309,81 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvmppc_emulate_mmio);

int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
	      bool data)
{
	ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM & PAGE_MASK;
	struct kvmppc_pte pte;
	int r;

	vcpu->stat.st++;

	r = kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
			 XLATE_WRITE, &pte);
	if (r < 0)
		return r;

	*eaddr = pte.raddr;

	if (!pte.may_write)
		return -EPERM;

	/* Magic page override */
	if (kvmppc_supports_magic_page(vcpu) && mp_pa &&
	    ((pte.raddr & KVM_PAM & PAGE_MASK) == mp_pa) &&
	    !(kvmppc_get_msr(vcpu) & MSR_PR)) {
		void *magic = vcpu->arch.shared;
		magic += pte.eaddr & 0xfff;
		memcpy(magic, ptr, size);
		return EMULATE_DONE;
	}

	if (kvm_write_guest(vcpu->kvm, pte.raddr, ptr, size))
		return EMULATE_DO_MMIO;

	return EMULATE_DONE;
}
EXPORT_SYMBOL_GPL(kvmppc_st);

int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
		      bool data)
{
	ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM & PAGE_MASK;
	struct kvmppc_pte pte;
	int rc;

	vcpu->stat.ld++;

	rc = kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
			  XLATE_READ, &pte);
	if (rc)
		return rc;

	*eaddr = pte.raddr;

	if (!pte.may_read)
		return -EPERM;

	if (!data && !pte.may_execute)
		return -ENOEXEC;

	/* Magic page override */
	if (kvmppc_supports_magic_page(vcpu) && mp_pa &&
	    ((pte.raddr & KVM_PAM & PAGE_MASK) == mp_pa) &&
	    !(kvmppc_get_msr(vcpu) & MSR_PR)) {
		void *magic = vcpu->arch.shared;
		magic += pte.eaddr & 0xfff;
		memcpy(ptr, magic, size);
		return EMULATE_DONE;
	}

	if (kvm_read_guest(vcpu->kvm, pte.raddr, ptr, size))
		return EMULATE_DO_MMIO;

	return EMULATE_DONE;
}
EXPORT_SYMBOL_GPL(kvmppc_ld);

int kvm_arch_hardware_enable(void *garbage)
{
	return 0;
@@ -366,14 +466,20 @@ void kvm_arch_sync_events(struct kvm *kvm)
{
}

int kvm_dev_ioctl_check_extension(long ext)
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
	int r;
	/* FIXME!!
	 * Should some of this be vm ioctl ? is it possible now ?
	 */
	/* Assume we're using HV mode when the HV module is loaded */
	int hv_enabled = kvmppc_hv_ops ? 1 : 0;

	if (kvm) {
		/*
		 * Hooray - we know which VM type we're running on. Depend on
		 * that rather than the guess above.
		 */
		hv_enabled = is_kvmppc_hv_enabled(kvm);
	}

	switch (ext) {
#ifdef CONFIG_BOOKE
	case KVM_CAP_PPC_BOOKE_SREGS:
@@ -387,6 +493,7 @@ int kvm_dev_ioctl_check_extension(long ext)
	case KVM_CAP_PPC_UNSET_IRQ:
	case KVM_CAP_PPC_IRQ_LEVEL:
	case KVM_CAP_ENABLE_CAP:
	case KVM_CAP_ENABLE_CAP_VM:
	case KVM_CAP_ONE_REG:
	case KVM_CAP_IOEVENTFD:
	case KVM_CAP_DEVICE_CTRL:
@@ -417,6 +524,7 @@ int kvm_dev_ioctl_check_extension(long ext)
	case KVM_CAP_PPC_ALLOC_HTAB:
	case KVM_CAP_PPC_RTAS:
	case KVM_CAP_PPC_FIXUP_HCALL:
	case KVM_CAP_PPC_ENABLE_HCALL:
#ifdef CONFIG_KVM_XICS
	case KVM_CAP_IRQ_XICS:
#endif
@@ -635,12 +743,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
#endif
}

static void kvmppc_complete_dcr_load(struct kvm_vcpu *vcpu,
                                     struct kvm_run *run)
{
	kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, run->dcr.data);
}

static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
                                      struct kvm_run *run)
{
@@ -837,10 +939,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
		if (!vcpu->mmio_is_write)
			kvmppc_complete_mmio_load(vcpu, run);
		vcpu->mmio_needed = 0;
	} else if (vcpu->arch.dcr_needed) {
		if (!vcpu->arch.dcr_is_write)
			kvmppc_complete_dcr_load(vcpu, run);
		vcpu->arch.dcr_needed = 0;
	} else if (vcpu->arch.osi_needed) {
		u64 *gprs = run->osi.gprs;
		int i;
@@ -1099,6 +1197,42 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_event,
	return 0;
}


static int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
				   struct kvm_enable_cap *cap)
{
	int r;

	if (cap->flags)
		return -EINVAL;

	switch (cap->cap) {
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
	case KVM_CAP_PPC_ENABLE_HCALL: {
		unsigned long hcall = cap->args[0];

		r = -EINVAL;
		if (hcall > MAX_HCALL_OPCODE || (hcall & 3) ||
		    cap->args[1] > 1)
			break;
		if (!kvmppc_book3s_hcall_implemented(kvm, hcall))
			break;
		if (cap->args[1])
			set_bit(hcall / 4, kvm->arch.enabled_hcalls);
		else
			clear_bit(hcall / 4, kvm->arch.enabled_hcalls);
		r = 0;
		break;
	}
#endif
	default:
		r = -EINVAL;
		break;
	}

	return r;
}

long kvm_arch_vm_ioctl(struct file *filp,
                       unsigned int ioctl, unsigned long arg)
{
@@ -1118,6 +1252,15 @@ long kvm_arch_vm_ioctl(struct file *filp,

		break;
	}
	case KVM_ENABLE_CAP:
	{
		struct kvm_enable_cap cap;
		r = -EFAULT;
		if (copy_from_user(&cap, argp, sizeof(cap)))
			goto out;
		r = kvm_vm_ioctl_enable_cap(kvm, &cap);
		break;
	}
#ifdef CONFIG_PPC_BOOK3S_64
	case KVM_CREATE_SPAPR_TCE: {
		struct kvm_create_spapr_tce create_tce;
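
For reference, a hedged userspace sketch of how the new
KVM_CAP_PPC_ENABLE_HCALL capability could be driven, mirroring the
kernel-side validation in kvm_vm_ioctl_enable_cap() above; "vmfd" is
assumed to be a VM fd obtained via KVM_CREATE_VM:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_hcall_enabled(int vmfd, unsigned long hcall, int enable)
{
	struct kvm_enable_cap cap;

	memset(&cap, 0, sizeof(cap));	/* cap.flags must stay zero */
	cap.cap = KVM_CAP_PPC_ENABLE_HCALL;
	cap.args[0] = hcall;		/* <= MAX_HCALL_OPCODE, multiple of 4 */
	cap.args[1] = enable ? 1 : 0;	/* 1: handle in kernel, 0: forward to userspace */

	return ioctl(vmfd, KVM_ENABLE_CAP, &cap);
}

/* e.g. set_hcall_enabled(vmfd, 0xe0, 0) to bounce H_CEDE out to userspace */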

@@ -110,7 +110,6 @@ void kvmppc_update_timing_stats(struct kvm_vcpu *vcpu)

static const char *kvm_exit_names[__NUMBER_OF_KVM_EXIT_TYPES] = {
	[MMIO_EXITS] =              "MMIO",
	[DCR_EXITS] =               "DCR",
	[SIGNAL_EXITS] =            "SIGNAL",
	[ITLB_REAL_MISS_EXITS] =    "ITLBREAL",
	[ITLB_VIRT_MISS_EXITS] =    "ITLBVIRT",

@@ -63,9 +63,6 @@ static inline void kvmppc_account_exit_stat(struct kvm_vcpu *vcpu, int type)
	case EMULATED_INST_EXITS:
		vcpu->stat.emulated_inst_exits++;
		break;
	case DCR_EXITS:
		vcpu->stat.dcr_exits++;
		break;
	case DSI_EXITS:
		vcpu->stat.dsi_exits++;
		break;

@@ -291,6 +291,26 @@ TRACE_EVENT(kvm_unmap_hva,
	TP_printk("unmap hva 0x%lx\n", __entry->hva)
);

TRACE_EVENT(kvm_ppc_instr,
	TP_PROTO(unsigned int inst, unsigned long _pc, unsigned int emulate),
	TP_ARGS(inst, _pc, emulate),

	TP_STRUCT__entry(
		__field(	unsigned int,	inst		)
		__field(	unsigned long,	pc		)
		__field(	unsigned int,	emulate		)
	),

	TP_fast_assign(
		__entry->inst		= inst;
		__entry->pc		= _pc;
		__entry->emulate	= emulate;
	),

	TP_printk("inst %u pc 0x%lx emulate %u\n",
		  __entry->inst, __entry->pc, __entry->emulate)
);

#endif /* _TRACE_KVM_H */

/* This part must be outside protection */

@@ -146,7 +146,7 @@ long kvm_arch_dev_ioctl(struct file *filp,
	return -EINVAL;
}

int kvm_dev_ioctl_check_extension(long ext)
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
	int r;


@@ -2656,7 +2656,7 @@ out:
	return r;
}

int kvm_dev_ioctl_check_extension(long ext)
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
	int r;


@@ -602,7 +602,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
			 unsigned int ioctl, unsigned long arg);
int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf);

int kvm_dev_ioctl_check_extension(long ext);
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext);

int kvm_get_dirty_log(struct kvm *kvm,
			struct kvm_dirty_log *log, int *is_dirty);

@@ -162,7 +162,7 @@ struct kvm_pit_config {
#define KVM_EXIT_TPR_ACCESS       12
#define KVM_EXIT_S390_SIEIC       13
#define KVM_EXIT_S390_RESET       14
#define KVM_EXIT_DCR              15
#define KVM_EXIT_DCR              15 /* deprecated */
#define KVM_EXIT_NMI              16
#define KVM_EXIT_INTERNAL_ERROR   17
#define KVM_EXIT_OSI              18
@@ -268,7 +268,7 @@ struct kvm_run {
			__u64 trans_exc_code;
			__u32 pgm_code;
		} s390_ucontrol;
		/* KVM_EXIT_DCR */
		/* KVM_EXIT_DCR (deprecated) */
		struct {
			__u32 dcrn;
			__u32 data;
@@ -763,6 +763,8 @@ struct kvm_ppc_smmu_info {
#define KVM_CAP_VM_ATTRIBUTES 101
#define KVM_CAP_ARM_PSCI_0_2 102
#define KVM_CAP_PPC_FIXUP_HCALL 103
#define KVM_CAP_PPC_ENABLE_HCALL 104
#define KVM_CAP_CHECK_EXTENSION_VM 105

#ifdef KVM_CAP_IRQ_ROUTING


@@ -2324,6 +2324,34 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
	return 0;
}

static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
{
	switch (arg) {
	case KVM_CAP_USER_MEMORY:
	case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:
	case KVM_CAP_JOIN_MEMORY_REGIONS_WORKS:
#ifdef CONFIG_KVM_APIC_ARCHITECTURE
	case KVM_CAP_SET_BOOT_CPU_ID:
#endif
	case KVM_CAP_INTERNAL_ERROR_DATA:
#ifdef CONFIG_HAVE_KVM_MSI
	case KVM_CAP_SIGNAL_MSI:
#endif
#ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
	case KVM_CAP_IRQFD_RESAMPLE:
#endif
	case KVM_CAP_CHECK_EXTENSION_VM:
		return 1;
#ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
	case KVM_CAP_IRQ_ROUTING:
		return KVM_MAX_IRQ_ROUTES;
#endif
	default:
		break;
	}
	return kvm_vm_ioctl_check_extension(kvm, arg);
}

static long kvm_vm_ioctl(struct file *filp,
			   unsigned int ioctl, unsigned long arg)
{
@@ -2487,6 +2515,9 @@ static long kvm_vm_ioctl(struct file *filp,
		r = 0;
		break;
	}
	case KVM_CHECK_EXTENSION:
		r = kvm_vm_ioctl_check_extension_generic(kvm, arg);
		break;
	default:
		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
		if (r == -ENOTTY)
@@ -2571,33 +2602,6 @@ static int kvm_dev_ioctl_create_vm(unsigned long type)
	return r;
}

static long kvm_dev_ioctl_check_extension_generic(long arg)
{
	switch (arg) {
	case KVM_CAP_USER_MEMORY:
	case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:
	case KVM_CAP_JOIN_MEMORY_REGIONS_WORKS:
#ifdef CONFIG_KVM_APIC_ARCHITECTURE
	case KVM_CAP_SET_BOOT_CPU_ID:
#endif
	case KVM_CAP_INTERNAL_ERROR_DATA:
#ifdef CONFIG_HAVE_KVM_MSI
	case KVM_CAP_SIGNAL_MSI:
#endif
#ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
	case KVM_CAP_IRQFD_RESAMPLE:
#endif
		return 1;
#ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
	case KVM_CAP_IRQ_ROUTING:
		return KVM_MAX_IRQ_ROUTES;
#endif
	default:
		break;
	}
	return kvm_dev_ioctl_check_extension(arg);
}

static long kvm_dev_ioctl(struct file *filp,
			  unsigned int ioctl, unsigned long arg)
{
@@ -2614,7 +2618,7 @@ static long kvm_dev_ioctl(struct file *filp,
		r = kvm_dev_ioctl_create_vm(arg);
		break;
	case KVM_CHECK_EXTENSION:
		r = kvm_dev_ioctl_check_extension_generic(arg);
		r = kvm_vm_ioctl_check_extension_generic(NULL, arg);
		break;
	case KVM_GET_VCPU_MMAP_SIZE:
		r = -EINVAL;
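
And a similarly hedged sketch of the new VM-level capability check;
KVM_CAP_CHECK_EXTENSION_VM advertises that KVM_CHECK_EXTENSION is now
accepted on a VM fd, so the answer can depend on the VM type (for
instance HV vs. PR on Book3S) rather than being a global guess:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvmfd = open("/dev/kvm", O_RDWR);
	int vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);

	if (kvmfd < 0 || vmfd < 0)
		return 1;

	/* Old, global answer from the device node... */
	printf("dev: %d\n",
	       ioctl(kvmfd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_ENABLE_HCALL));

	/* ...new, per-VM answer, once the VM fd supports extension checks. */
	if (ioctl(vmfd, KVM_CHECK_EXTENSION, KVM_CAP_CHECK_EXTENSION_VM) > 0)
		printf("vm:  %d\n",
		       ioctl(vmfd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_ENABLE_HCALL));
	return 0;
}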